Tech

UMD researchers seek to reduce food waste and establish the science of food date labeling

image: Grocery Market

Image: 
Mohamed Mahmoud Hassan

Minimizing food waste is top of mind right now during the COVID-19 global pandemic, with the public concerned about the potential ramifications for our food supply chain. But even before COVID-19, given concerns about a rapidly growing population and hunger around the world, the Food and Agriculture Organization of the United Nations (FAO) issued a global call for zero tolerance on food waste. However, the lack of regulation, standardization, and general understanding of date labeling on food products (such as "best by" and "use by" dates) leads to billions of dollars per year in food waste in the United States alone. Many people don't realize that date labels on food products (with the exception of infant formula) are entirely at the manufacturer's discretion and are not supported by robust scientific evidence. To address this concern and combat global food waste, researchers at the University of Maryland have come together across departments in the College of Agriculture & Natural Resources to clarify the science, or lack thereof, behind food date labels. In their new publication in Food Control, they highlight global research trends and the need for interdisciplinary research.

"We have 50 different types of date labels that are currently used in the US because there is no regulation - best by, best if used by, use by - and we as consumers don't know what these things mean," says Debasmita Patra, assistant research professor in Environmental Science and Technology and lead author on the paper. "The labeling is the manufacturer's best estimation based on taste or whatever else, and it is not scientifically proven. But our future intention is to scientifically prove what is the best way to label foods. As a consumer and as a mom, a best by date might raise food safety concerns, but date labeling and food safety are not connected to each other right now, which is a wide source of confusion. And when billions of dollars are just going to the trash because of this, it's not a small thing."

According to the United States Department of Agriculture's Economic Research Service (USDA-ERS), Americans discard or waste about 133 billion pounds of food each year, representing $161 billion and a 31% loss of food at the retail and consumer level. According to the FDA, 90% of Americans say they are likely to prematurely discard food because they misinterpret date labels, whether out of food safety concerns or uncertainty about how to properly store the product. This simple confusion accounts for 20% of the total annual food waste in the United States, representing more than 26 billion pounds per year and over $32 billion in food waste.
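
As a rough consistency check on these figures, the label-confusion share can be reproduced from the totals above (an illustrative back-of-the-envelope calculation, not part of the USDA or FDA analyses):

```python
# Back-of-the-envelope check of the quoted food waste figures (illustrative only).
total_waste_lbs = 133e9       # USDA-ERS: pounds of food discarded per year
total_waste_usd = 161e9       # retail value of that food

label_confusion_share = 0.20  # share attributed to date-label confusion
print(f"{label_confusion_share * total_waste_lbs / 1e9:.1f} billion lbs")  # ~26.6
print(f"{32e9 / total_waste_usd:.0%} of the dollar value")                 # 20%
```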

"Food waste is a significant threat to food security," adds Paul Leisnham, associate professor in Environmental Science and Technology and co-author. "Recognition of food waste due to confusion over date labeling is growing, but few studies have summarized the status of the research on this topic."

This was the goal of their latest publication, gathering support and background for their future work to reduce food waste, and providing guidance for future areas of research in this field. In order to achieve this, Patra enlisted Leisnham in her own department, but also relied on computational support and food quality and safety expertise from Abani Pradhan, associate professor in Nutrition and Food Science, and his postdoctoral fellow Collins Tanui, both co-authors on the paper.

"We wanted to see the trends and give some suggestions, because the paper shows that we are some of the very few who are thinking about truly interdisciplinary research connecting food labeling to food waste," says Patra. "In fact, we were joking because one major finding was that environmental sciences and food science departments don't seem to collaborate on this topic, so we are doing something unique here at UMD."

"Our paper underlined the fact that future research on food waste and date labeling needs to take an interdisciplinary approach to better explore the perspectives of multiple stakeholders, adds Leisnham. "Expertise from environmental science, food science, sociology, Extension education, and other disciplines can more effectively develop interventions to reduce behaviors that may increase food waste. This is an environmental issue, but involves the knowledge, attitudes, perceptions, and social behaviors of multiple stakeholders, including retailers, food-service providers, and diverse consumers."

The partnership between environmental sciences and food sciences at UMD is an example of this approach in action, with the goal of establishing what science, if any, already underlies date labeling and connecting this to food quality and safety.

"Utilizing my expertise in experimental and mathematical modeling work, we aim to scientifically evaluate the quality characteristics, shelf life, and food spoilage risk of food products," says Pradhan. "This would help in determining if the food products are of good quality beyond the mentioned dates, rather than discarding them prematurely. We anticipate to reduce food waste through our ongoing and future research findings."

Patra stresses the importance of further collaboration through University of Maryland Extension (UME) to have maximum impact on food waste. "Where is the confusion coming from?" says Patra. "If we understand that, maybe we can better disseminate the information through our Extension work."

Patra adds, "Food is something that is involved in everybody's life, and so everyone needs to be a good food manager. But even now, there is no robust scientific evidence behind date labels, and yet those labels govern people's purchasing behavior. People look for something that has a longer 'best by' date thinking they are getting something better. And when you throw that food away, you are not only wasting the food, but also all the economics associated with that, like production costs, transportation from the whole farm to fork chain, and everything else that brought you that product just to be thrown away. Food safety, regulation, and education need to all combine to help solve this problem, which is why interdisciplinary collaboration is so important."

Credit: 
University of Maryland

NASA's ICESat-2 measures Arctic Ocean's sea ice thickness, snow cover

image: Scientists have used NASA's ICESat-2 to measure the thickness of Arctic sea ice, as well as the depth of snow on the ice. Here, ridges and cracks have formed in sea ice in the Arctic Ocean.

Image: 
Credits: NASA / Jeremy Harbeck

Arctic sea ice helps keep Earth cool, as its bright surface reflects the Sun's energy back into space. Each year scientists use multiple satellites and data sets to track how much of the Arctic Ocean is covered in sea ice, but its thickness is harder to gauge. Initial results from NASA's new Ice, Cloud and land Elevation Satellite-2 (ICESat-2) suggest that the sea ice has thinned by as much as 20% since the end of the first ICESat mission (2003-2009), contrary to existing studies that find sea ice thickness has remained relatively constant over the last decade.

Arctic sea ice thickness dropped drastically in the first decade of the 21st century, as measured by the first ICESat mission from 2003 to 2009 and other methods. The European Space Agency's CryoSat-2, launched in 2010, has measured a relatively consistent thickness in Arctic sea ice since then. With the launch of ICESat-2 in 2018, researchers turned to this new way of measuring sea ice thickness to extend the data record.

"We can't get thickness just from ICESat-2 itself, but we can use other data to derive the measurement," said Petty. For example, the researchers subtract out the height of snow on top of the sea ice by using computer models that estimate snowfall. "The first results were very encouraging."

In their study, published recently in the Journal of Geophysical Research: Oceans, Petty and his colleagues generated maps of Arctic sea ice thickness from October 2018 to April 2019 and saw the ice thickening through the winter as expected.

Overall, however, calculations using ICESat-2 found that the ice was thinner during that time period than what researchers have found using CryoSat-2 data. Petty's group also found the small but significant 20% decline in sea ice thickness by comparing February/March 2019 ICESat-2 measurements with those calculated using ICESat in February/March 2008 - a decline that the CryoSat-2 researchers don't see in their data.

These are two very different approaches to measuring sea ice, Petty said, each with its own limitations and benefits. CryoSat-2 carries a radar to measure height, as opposed to ICESat-2's lidar, and radar mostly passes through snow to measure the top of the ice. Radar measurements like the ones from CryoSat-2 could be thrown off by seawater flooding the ice, he noted. In addition, ICESat-2 is still a young mission and the computer algorithms are still being refined, he said, which could ultimately change the thickness findings.

"I think we're going to learn a lot from having these two approaches to measuring ice thickness. They might be giving us an upper and lower bound on the sea ice thickness, and the right answer is probably somewhere in between," Petty said. "There are reasons why ICESat-2 estimates could be low, and reasons why CryoSat-2 could be high, and we need to do more work to understand and bring these measurements in line with each other."

ICESat-2 has a laser altimeter, which uses pulses of light to precisely measure height down to about an inch. Each second, the instrument sends out 10,000 pulses of light that bounce off the surface of Earth and return to the satellite, and it records the length of time each pulse takes to make that round trip. The light reflects off the first substance it hits, whether that's open water, bare sea ice or snow that has accumulated on top of the ice, so scientists use a combination of ICESat-2 measurements and other data to calculate sea ice thickness.
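
The height measurement itself is time-of-flight arithmetic, and working through the numbers shows why inch-level precision demands picosecond-scale timing (an illustrative calculation; only the roughly 500-kilometer orbit is a mission figure):

```python
# Height from photon round-trip time (illustrative numbers).
C = 299_792_458.0  # speed of light, m/s

def one_way_range_m(round_trip_s):
    return C * round_trip_s / 2.0  # distance from satellite to surface

# A photon returning from ~500 km below the satellite takes ~3.3 ms:
print(f"{2 * 500e3 / C * 1e3:.2f} ms round trip")

# Resolving height to ~1 inch (2.54 cm) means resolving time to ~170 ps:
print(f"{2 * 0.0254 / C * 1e12:.0f} ps")
```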

By comparing ICESat-2 data with measurements from another satellite, researchers have also created the first satellite-based maps of the amount of snow that has accumulated on top of Arctic sea ice, tracking this insulating material.

"The Arctic sea ice pack has changed dramatically since monitoring from satellites began more than four decades ago," said Nathan Kurtz, ICESat-2 deputy project scientist at NASA's Goddard Space Flight Center in Greenbelt, Maryland. "The extraordinary accuracy and year-round measurement capability of ICESat-2 provides an exciting new tool to allow us to better understand the mechanisms leading to these changes, and what this means for the future."

Because ICESat-2 and CryoSat-2 use two different methods to measure ice thickness - one measuring the top of the snow, the other the boundary between the bottom of the snow layer and the top of the ice layer - researchers realized they could combine the two to calculate the snow depth.
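
In sketch form, the retrieval is a simple difference of the two freeboards (illustrative values; real retrievals also correct for radar penetration and propagation effects):

```python
# Snow depth from the two altimeters (illustrative sketch).
def snow_depth_m(lidar_freeboard_m, radar_freeboard_m):
    # lidar sees the top of the snow; radar sees the top of the ice
    return lidar_freeboard_m - radar_freeboard_m

print(f"{snow_depth_m(0.40, 0.25):.2f} m of snow")  # 0.15 m
```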

"This is the first time ever that we can get snow depth across the entire Arctic Ocean's sea ice cover," said Ron Kwok, a sea ice scientist at NASA's Jet Propulsion Laboratory in Southern California and author of another study in JGR Oceans. "The Arctic region is a desert - but what snow we do get is very important in terms of the climate and insulating sea ice."

The study found that snow starts building up slowly in October, when newly formed ice has an average of about 2 inches (5 centimeters) of snow on it and multiyear ice has an average of 5.5 inches (14 cm) of snow. Snowfall picks up later in the winter in December and January and reaches its maximum depth in April, when the relatively new ice has an average of 6.7 inches (17 cm) and the older ice has an average of 10.6 inches (27 cm) of snow.

When the snow melts in the spring, it can pool up on the sea ice - those melt ponds absorb heat from the Sun and can warm up the ice faster, just one of the impacts of snow on ice.

Credit: 
NASA/Goddard Space Flight Center

CFC replacements are a source of persistent organic pollution in the Arctic

Substances used to replace ozone-depleting chlorofluorocarbons (CFCs) may be just as problematic as their predecessors, a new study shows.

In 1987, Canada signed the Montreal Protocol, a global agreement to protect Earth's ozone layer by phasing out substances like CFCs. Unfortunately, the substances used to replace CFCs are proving problematic as well, with accumulating levels of their degradation products recently found in the Canadian Arctic.

"In many ways, the degradation products from these substances may be just as concerning as the original chemical they were meant to replace," said Alison Criscitiello, director of the Canadian Ice Core Lab (CICL), housed in the University of Alberta's Faculty of Science. "We are seeing significant levels of these short-chain acids accumulating in the Devon Ice Cap, and this study links some of them directly to CFC replacement compounds."

An ice core drilled on the summit of Devon Ice Cap in the Canadian high Arctic shows a tenfold increase in short-chain perfluorocarboxylic acid (scPFCA) deposition between 1986 and 2014. scPFCAs form through atmospheric oxidation of several industrial chemicals, some of which are CFC replacement compounds. scPFCAs are highly mobile persistent organic pollutants and belong to the class of so-called "forever chemicals" because they do not break down. A few preliminary studies have shown toxicity of these substances to plants and invertebrates.

"This is the first multi-decadal temporal record of scPFCA deposition in the Arctic," explained Criscitiello. "Our results suggest that the CFC-replacement compounds mandated by the Montreal Protocol are the dominant source of some scPFCAs to remote regions."

Over the past four years, Criscitiello and colleagues drilled four ice cores across the eastern Canadian high Arctic. This interdisciplinary work is thanks to a strong collaboration between Criscitiello and the labs of York University atmospheric chemist Cora Young and Environment and Climate Change Canada research scientist Amila De Silva.

These same Canadian Arctic ice cores also contain significant levels of perfluoroalkyl acids (PFAAs). These results demonstrate that both perfluoroalkyl carboxylic acids (PFCAs) and perfluorooctane sulfonate (PFOS) have continuous and increasing deposition on the Devon Ice Cap despite North American and international regulations and phase-outs. This is the likely result of ongoing manufacture, use, and emissions of these persistent pollutants, as well as their precursors and other new compounds in regions outside of North America.

"These results show the need for a more holistic approach when deciding to ban and replace chemical compounds," explained Criscitiello. "Chemicals degrade, and developing a strong understanding of how they degrade in the environment, and what they degrade to, is vital."

Credit: 
University of Alberta

Electrolysis: Chemists have discovered how to produce better electrodes

Another step forward for renewable energies: The production of green hydrogen could be even more efficient in the future. By applying an unusual process step, chemists at Martin Luther University Halle-Wittenberg (MLU) have found a way to treat inexpensive electrode materials and considerably improve their properties during electrolysis. The group published their research results in the journal "ACS Catalysis".

Hydrogen is thought to be the solution to the storage problem of renewable energies. It can be produced in local electrolysers, stored temporarily and then very efficiently converted back into electricity in a fuel cell. It also serves as an important raw material in the chemical industry. However, the green production of hydrogen is still hampered by the poor conversion of the supplied electricity. "One reason is that the dynamic load of the fluctuating electricity from the sun and wind quickly pushes the materials to their limits. Cheap catalyst materials rapidly become less active," says Professor Michael Bron from the Institute of Chemistry at MLU, explaining the basic problem.

His research group has now discovered a method that significantly increases both the stability and the activity of inexpensive nickel hydroxide electrodes. Nickel hydroxide is a cheap alternative to very active but expensive catalysts such as iridium and platinum. The scientific literature recommends heating the hydroxide to up to 300 degrees Celsius. This increases the stability of the material and partially converts it to nickel oxide. Higher temperatures would completely destroy the hydroxide. "We wanted to see this with our own eyes and gradually heated the material in the laboratory to 1,000 °C," says Bron.

As temperatures increased, the researchers observed the expected changes to the individual particles under the electron microscope. The particles were converted to nickel oxide, grew together to form larger structures and, at very high temperatures, formed patterns reminiscent of zebra crossings. However, electrochemical testing surprisingly showed a consistently high activity level for particles that should no longer have been usable in electrolysis. As a rule, large surfaces and, hence, smaller structures are more active during electrolysis. "We therefore attribute the high level of activity of our much larger particles to an effect that, surprisingly, only occurs at high temperatures: the formation of active oxide defects on the particles," says Bron.

Using X-ray crystallography, the researchers discovered how the crystal structure of the hydroxide particles changes as temperatures increase. They concluded that when the material is heated to 900 °C, the point at which the particles exhibit their highest level of activity, the defects undergo a transition that is completed at 1,000 °C, at which point activity suddenly drops again.

Bron and his team are confident that they have found a promising approach since, even after repeated measurements over 6,000 cycles, the heated particles still delivered 50% more current than the untreated particles. Next, the researchers want to use X-ray diffraction to better understand why these defects increase the activity so much. They are also looking for ways to produce the new material so that smaller structures are retained even after heat treatment.

Credit: 
Martin-Luther-Universität Halle-Wittenberg

Cancer cells' growth amid crowding reveals nuanced role for known oncogene

image: Losing their inhibitions: Cells prefer not to be crowded and stop proliferating once too densely packed. A technique called proximity ligation analysis reveals an interaction between two proteins (YAP and YY1) in the nuclei of human Schwann cells. YAP and YY1, working together, can shut off this behavior. The red dots represent the interaction; nuclei are stained in blue.

Image: 
Kissil Lab at Scripps Research

JUPITER, Fla.--MAY 14, 2020--Like subway commuters on a crowded train, cells generally prefer not to be packed in too tightly. In fact, they have set up mechanisms to avoid this, a phenomenon called "contact inhibition." A hallmark of cancer cells is that they lack this contact inhibition, and instead become pushy, facilitating their spread.

Scientific understanding of the mechanism underlying this cell behavior change has had many gaps. A new paper from the lab of Joseph Kissil, PhD, professor of Molecular Medicine at Scripps Research in Florida, provides important new insights.

Writing in Cancer Research, a journal of the American Association for Cancer Research, Kissil and colleagues offer new details about how the "stop-growth" signal unfurls during cell-to-cell contact, and how disruption of that "stop-growth" signal can promote cancer.

A key player is a protein called YAP, a regulator of gene expression. YAP is a major effector of a pathway referred to as the Hippo pathway, so named after geneticists discovered that mutations to the HPO gene produced lumpy, hippo-like tissue overgrowth in fruit fly models.

Healthy cells and developing organs "know" when they should grow and when they should stop growing, based on multiple signaling molecules. These signals are transmitted by YAP and the Hippo pathway. Tracing out those signals is not only central to understanding our basic biology, but to finding new ways to attack cancers with precision therapies, Kissil explains.

Increasing cell density normally activates a change in cell signaling. It does this by elevating levels of p27, a protein involved in initiating contact inhibition. But a disrupted Hippo pathway interferes with normal YAP behavior and blocks the expected p27 surge.

Kissil says the team was surprised to find YAP in the uncharacteristic role of shutting down gene transcription. Previous studies suggested that YAP is an activator of genes that promote cell growth. The reality proved to be much more complex.

"What we show here is that YAP can also turn off genes, not just turn them on," Kissil says. "It shuts down genes that would otherwise prevent cells from proliferating."

In the end, like a broken-down subway line, disruption of the Hippo pathway at any point can redirect the cell's behavior away from contact inhibition. The discovery of YAP's dual role as both promoter and repressor of gene transcription provides important information in the effort to make cancer drugs that act on YAP.

"When we target YAP in cancer, we are targeting its function as an activator of cancer, but we now know we also need to consider its suppressive functions, as well," Kissil said. "We have to consider both the activation and the repression."

Finding the players that both interact with YAP and have a functional role in promoting cancer growth required use of a genome-wide bioinformatics technique called ChIP-seq. The work involved collaboration with the labs of Matthew Pipkin, PhD, of Scripps Research in Florida, and Michael Kareta, PhD, of Sanford Research in Sioux Falls, South Dakota. They used ChIP-seq to map the overlapping relationships of the YAP-linked genes, which number in the thousands.

They worked specifically in human Schwann cells, which are peripheral nerve cells that produce the insulating myelin around nerves, but the findings should apply to other cancers, Kissil says.

The team looked at YAP in the context of cell crowding. They learned that YAP's role involves recruitment of other interacting proteins that include YY1, also known as Yin-Yang 1, EZH2 and a protein complex called PRC2. Those will also be important to study further, Kissil says, as well as the interaction of these players in the context of cancer drug resistance.

Credit: 
Scripps Research Institute

Duality Technologies researchers accelerate privacy-enhanced collaboration on genomic data

[Newark, New Jersey, May 14, 2020] Duality Technologies, a leading provider of Privacy-Enhancing Technologies (PETs), announced today the publication of an article in PNAS (Proceedings of the National Academy of Sciences of the United States of America), "Secure large-scale genome-wide association studies using homomorphic encryption."

The paper, co-authored by Prof. Shafi Goldwasser, Dr. Marcelo Blatt, and Dr. Yuriy Polyakov of Duality Technologies, along with Dr. Alexander Gusev of the Dana-Farber Cancer Institute and Harvard Medical School, presents a solution leveraging homomorphic encryption (HE) to perform large-scale Genome-Wide Association Studies on encrypted genetic and phenotype data for a dataset of over 25,000 individuals, yielding results 30 times faster than an alternative state-of-the-art method based on Multiparty Computation (Cho et al., 2018, Nat Biotechnol). This work was partially supported by a grant from the National Human Genome Research Institute of the National Institutes of Health (NIH).

Genome-Wide Association Studies seek to identify genetic variants associated with a particular trait and have been crucial in understanding complex diseases. However, they depend on individual-level, highly sensitive genomic variant data that is typically protected by privacy regulations - making it challenging to gather the large volumes of data required for this type of research. HE technology enables collaboration on large-scale genomic and clinical studies while protecting the privacy of the individual participants. The performance breakthrough documented by the Duality researchers demonstrates that HE is a practical solution for privacy-preserving GWAS computations, with the ability to scale genome-wide analyses to hundreds of thousands of individuals.

GWAS can yield insights into a range of diseases and is currently being utilized to identify genetic variants associated with susceptibility or response to COVID-19. As health authorities and medical researchers seek to understand the true nature of the novel condition, homomorphic encryption enables researchers to pool individual-level data on a global scale to perform genomic analysis in a privacy-preserving manner. Duality's technology demonstrated by this advanced research is currently available to the healthcare industry.

The article details a privacy-preserving framework based on advances in HE and demonstrates that Duality's advanced solution can swiftly perform accurate GWAS analysis while keeping all individual data encrypted. The technology described in the paper is highly scalable: the researchers' extrapolations show that this method can perform GWAS on 100,000 individuals and 500,000 Single Nucleotide Polymorphisms (SNPs) in 5.6 hours on a single server node (or in 11 minutes on 31 server nodes running in parallel). These performance results are significantly faster than prior secure computation methods with a similar level of accuracy. The advances can also be applied to other branches of medical research such as clinical trials, drug repurposing and rare disease studies.
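
Those extrapolations are consistent with near-linear parallel scaling, as a quick arithmetic check (ours, not the paper's) shows:

```python
# Consistency check of the quoted runtimes (illustrative arithmetic only).
single_node_minutes = 5.6 * 60  # 5.6 hours on one server node
nodes = 31
print(f"{single_node_minutes / nodes:.1f} min on {nodes} nodes")  # ~10.8 vs quoted 11
```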

"Today, the critical barrier to personalized medicine is no longer just data availability, but data privacy. With high accuracy even for very subtle associations, HE-GWAS results can be used to derive polygenic risk scores for accurate patient stratification and prediction of future disease," said Dr. Alexander Gusev of the Dana-Farber Cancer Institute and Harvard Medical School. "Ongoing precision medicine studies can immediately benefit from these capabilities by enabling secure collaboration across clinical institutions without requiring complex data sharing agreements or compromising individual-level privacy. This technology can also empower patients to participate in research studies directly and receive personalized results knowing that their individual data will not be exposed."

"At this time of crisis, when the world's eyes turn towards the potential held by collaboration on sensitive individual medical data, our interdisciplinary research team of cryptographers and data scientists, in cooperation with a leading bioinformatician, demonstrated the feasibility of data privacy compliant global collaboration," said Prof. Shafi Goldwasser, Director of the Simons Institute for the Theory of Computing at UC Berkeley, Co-Founder and Chief Scientist of Duality Technologies, and co-author of the paper. "Our implementation shows that homomorphic encryption is practical, functional, and is significantly faster than existing options for secure data analysis. Duality Technologies is proud to collaborate with Dr. Alexander Gusev to highlight the use of its market-ready analytics platforms in scientific research, demonstrating that homomorphic encryption will play a central role in protecting privacy while alleviating the current global health crisis. This has significant implications for healthcare, and a variety of other vital fields."

“We are excited by the outcome of this research, demonstrating that complex data science computations are feasible at scale on homomorphically encrypted data,” said Dr. Marcelo Blatt, Head of Data Science at Duality Technologies. “These capabilities are now available in our SecurePlus Platform for the broader healthcare industry, enabling privacy-enhanced medical collaboration.”

“We have far exceeded what is considered state-of-the-art performance of homomorphic encryption computations,” added Dr. Yuriy Polyakov, Principal Scientist at Duality Technologies. “The secret behind our breakthrough is our unique combination of world-leading cryptographic and data science experts. It is gratifying to see these technological advances being used to address real-world challenges in a variety of regulated industries.”

Credit: 
Duality Technologies

Oyster farming and shorebirds likely can coexist

image: A red knot among a flock of migratory shorebirds foraging along the Delaware Bayshore.

Image: 
Brian Schumm

Oyster farming as currently practiced along the Delaware Bayshore does not significantly impact four shorebird species, including the federally threatened red knot, which migrates thousands of miles from Chile annually, according to a Rutgers-led study.

The findings, published in the journal Ecosphere, likely apply to other areas around the country, including the West Coast and Gulf Coast, where oyster aquaculture is expanding. Rutgers experts say the study can play a key role in identifying and resolving potential conflict between the oyster aquaculture industry and red knot conservation groups.

"Our research team represents a solid collaboration between aquaculture research scientists and conservation biologists, and we've produced scientifically robust and defensible results that will directly inform management of intertidal oysterculture along Delaware Bay and beyond," said lead author Brooke Maslo, an assistant professor and extension specialist in the Department of Ecology, Evolution, and Natural Resources in the School of Environmental and Biological Sciences at Rutgers University-New Brunswick.

Aquaculture is a burgeoning industry along the Delaware Bayshore, infusing millions of dollars into local economies annually, and the production of oysters within the intertidal flats on Delaware Bay has grown in the last decade.

Although a relatively small endeavor now, rising public interest in boutique oysters for the half-shell market (known as the Oyster Renaissance) and the "low-tech" nature of oyster tending make oyster farming an attractive investment for small-business entrepreneurs.

"Oyster farming has many ecological benefits and is widely recognized as one of the most ecologically sustainable forms of food production," said co-author David Bushek, a professor and director of Rutgers' Haskin Shellfish Research Laboratory in Port Norris, New Jersey. "Farmers appreciate the ecology around them as they depend on it to produce their crop. The idea that oyster farming might be negatively impacting a threatened species concerns them deeply, so they've voluntarily taken on many precautionary measures. They'd like to know which of these measures help and which don't as they all inhibit their ability to operate efficiently."

Delaware Bay is a critical stopover area for the red knot, a reddish, robin-sized sandpiper. Every spring, the species feasts on horseshoe crab eggs after journeying from wintering grounds at the southern tip of South America. Red knots then fly to breeding grounds in the Canadian Arctic.

Researchers assessed the impact of oyster aquaculture along the Delaware Bay on red knots and three other migratory birds of conservation concern: ruddy turnstones, semipalmated sandpipers and sanderlings. The scientists found that oyster tending reduced the probability of shorebird presence by 1 percent to 7 percent, while untended aquaculture structures had no detectable impact.

The study showed foraging rates were mostly influenced by environmental conditions, especially the presence of gulls or other shorebirds. None of the four bird species of concern substantially altered their foraging behavior due to the presence of tended or untended oyster aquaculture.

Next steps include investigating how oyster aquaculture may influence interactions between red knots and their main food source, horseshoe crab eggs, as well as examining how the expansion of oyster aquaculture along the Delaware Bay may affect the availability of foraging habitat at this globally important stopover site.

Credit: 
Rutgers University

Latin America's livestock sector needs emissions reduction to meet 2030 targets

image: A cow on grazing land in Huila, in the Colombian Andes.

Image: 
Neil Palmer / International Center for Tropical Agriculture

Livestock is a pivotal source of income for Latin American countries, but the sector is one of the largest sources of greenhouse gas (GHG) emissions in the region. Agriculture in Latin America produces 20 percent of the region's emissions, 70 percent of which comes from livestock (roughly 14 percent of the regional total), according to research by the CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS).

Reducing livestock's carbon footprint in Latin America is necessary if countries in the region are to meet emission-reduction goals under the Paris Agreement, researchers argue in a new analysis published May 14 in Frontiers in Sustainable Food Systems.

The study, called "Ambition meets reality: achieving GHG emission reductions targets in the livestock sector of Latin America," found that the emission reduction targets set by Argentina, Brazil, Colombia, Costa Rica, Mexico, Peru and Uruguay may not be met without wider adoption of emissions-reduction techniques in the livestock sector.

"Widespread adoption of promising mitigation options remains limited, raising questions as to whether envisaged emission reduction targets are achievable," said the authors, who included scientists from the Alliance of Bioversity International and CIAT, CCAFS and research organizations and universities in Latin America.

The study analyzed the countries' Nationally Determined Contributions, or NDCs, which are the emissions-reduction goals voluntarily set by each country under the Paris Agreement. The researchers compared the targets with technologies and practices currently in use to reduce livestock emissions.

The researchers found that a range of practices has been adopted, including silvopastoral systems, dietary changes, improved forage quality, improved pasture management, and improvement of reproductive efficiency, among others. But adoption needs to be scaled up in order to meet NDCs, they determined.

Barriers to adoption remain

The livestock sector's emissions-reduction shortfalls reflect a general need across Latin American countries for more ambitious emissions-reduction goals to help keep the planet below the Paris Agreement target of 1.5°C of warming. The livestock sector's potential to contribute to NDCs should be a greater part of the discussion, the scientists said.

Livestock producers need better access to inputs such as feed and capital, as well as information and training related to climate-friendly farming practices.

"The first can be achieved through the development of formal systems of replication of grass and legume seeds," said Alejandro Ruden, a study co-author at the Alliance. "In Latin America, this strategy is still underdeveloped."

Certain grasses and legume plants, when incorporated into livestock systems, can lead to a significant reduction in greenhouse gas emissions in pastures while also increasing productivity.

A formal, subsidized credit system is essential to scale up different practices and technologies. These credits can be granted to small- and medium-sized producers through differentiated pricing for products from environmentally friendly systems or through payments for ecosystem services.

"Access to information can be achieved through efficient dissemination channels, as well as through the strengthening of effective and unified technical assistance and outreach systems," Ruden said.

Raising ambition

Ensuring that information generated in scientific and academic circles gets to producers is key to enabling the livestock sector to implement practices to reduce emissions and make production processes more sustainable.

"It requires the genuine commitment of decision-makers at the ministry level to put into action, in an articulated and efficient way, GHG mitigation strategies produced by scientific research in Latin America," said Juan Ku-Vera, a study co-author from the Autonomous University of Yucatan.

If national policies aim to improve productivity and reduce emissions intensity, sustainable livestock production will contribute to the achievement of the United Nations' Sustainable Development Goals, or SDGs. These include Zero Hunger, Climate Action, and Life on Land.

"It is necessary that the academic, research, business and public policy sectors can support and encourage the changes needed to raise the level of ambition and achieve the SDGs, considering actions from the farm to national scale," said Jacobo Arango, the study's lead author and the leader of CCAFS' LivestockPlus II research project.

Credit: 
The Alliance of Bioversity International and the International Center for Tropical Agriculture

The revolt of the plants: The Arctic melts when plants stop breathing

image: The impact of stomatal closure on air temperature when CO2 level rises. Not only does the temperature of the continents rise, but the temperature in the Arctic region increases significantly where no vegetation exists.

Image: 
Jong-Seong Kug (POSTECH)

The vapor that plants emit when they breathe serves to lower the land surface temperature, much like watering the yard on a hot day. Until now, the greenhouse effect has been blamed for the rise in global temperature. But an interesting study has shown that the Arctic temperature rises when the moisture released by plants is reduced due to the increase of carbon dioxide (CO2) in the atmosphere.

The joint research team led by Professor Jong-Seong Kug and doctoral candidate So Won Park of POSTECH's Division of Environmental Science and Engineering, and Researcher Jin-Soo Kim of the University of Zurich, has confirmed that the increase in atmospheric CO2 concentration closes the pores (stomata) of plants in high-latitude areas and reduces their transpiration, which ultimately accelerates Arctic warming. The findings, based on Earth system model (ESM) simulations, were recently published in Nature Communications.

Plants take in CO2 and emit oxygen through photosynthesis. During this process, the stomata of leaves open to absorb CO2 in the air and release moisture at the same time.

However, when the CO2 concentration rises, plants can absorb enough CO2 without opening their stomata widely. If the stomata open narrowly, the amount of water vapor released also decreases. When this transpiration of plants declines, the land temperature rapidly rises under greenhouse warming. Recently, such a decrease in transpiration has been cited as one of the reasons for the surge in heat waves in the northern hemisphere.

This vegetation response influences the global climate by controlling the exchange of energy between the surface and the atmosphere, an effect referred to as 'physiological forcing.' But so far, no study has confirmed the effects of physiological forcing on the Arctic climate system.

The joint research team analyzed the ESM simulations and confirmed that the increase in CO2 leads to stomatal closure in land vegetation, causing land warming, which in turn remotely speeds up Arctic warming through atmospheric circulation and positive feedbacks in the Earth system.

In addition, a quantitative estimate of the stomatal closure's effect on Arctic warming due to increased CO2 showed that about 10% of the greenhouse effect is caused by this physiological forcing.

Professor Jong-Seong Kug, who has studied Arctic warming from a variety of perspectives, commented, "The stomatal closure effect due to increased CO2 levels is not fully accounted for in future climate projections." He pointed out, "This means that Arctic warming can proceed much faster than currently forecast." He also warned that "the increase in CO2 is accelerating global warming not only through the greenhouse effect that we all knew of, but also by changing the physiological function of plants."

Credit: 
Pohang University of Science & Technology (POSTECH)

Persistence of forages is dependent on harvest intervals

image: Alfalfa bloom after 35 days of growth in monoculture.

Image: 
doi:10.1002/cft2.20018

A successful forage program is one in which forage mass production is consistently high over a long period, and management practices are essential to reach this goal. In the northern United States, researchers commonly study alfalfa. Producers in the southern region are interested in growing alfalfa, yet they need more information to adequately manage this forage crop in their environment.

In a recent study published in Crop, Forage & Turfgrass Management, researchers investigated the effect of harvest intervals on the persistence of alfalfa in the field, either as a monoculture or mixed with grasses (such as bermudagrass and tall fescue). Four harvest intervals were imposed on all species combinations.

The team found that longer alfalfa harvest intervals in the southeastern U.S. resulted in positive outcomes. They also observed that growing alfalfa in mixtures with tall fescue resulted in the greatest forage mass and nutritive value.

This study highlights the importance of investigating the management of alfalfa and alfalfa mixtures before producers attempt to incorporate the species into their operations. The results suggest that harvesting alfalfa at 42-day intervals maximizes alfalfa productivity and persistence.

Credit: 
American Society of Agronomy

Artificial intelligence helps researchers up-cycle waste carbon

image: Researchers from U of T Engineering and Carnegie Mellon University are using electrolyzers like this one to convert waste CO2 into commercially valuable chemicals. Their latest catalyst, designed in part through the use of AI, is the most efficient in its class.

Image: 
Daria Perevezentsev / University of Toronto Engineering

Researchers at University of Toronto Engineering and Carnegie Mellon University are using artificial intelligence (AI) to accelerate progress in transforming waste carbon into a commercially valuable product with record efficiency.

They leveraged AI to speed up the search for the key material in a new catalyst that converts carbon dioxide (CO2) into ethylene -- a chemical precursor to a wide range of products, from plastics to dish detergent.

The resulting electrocatalyst is the most efficient in its class. If run using wind or solar power, the system also provides an efficient way to store electricity from these renewable but intermittent sources.

"Using clean electricity to convert CO2 into ethylene, which has a $60 billion global market, can improve the economics of both carbon capture and clean energy storage," says Professor Ted Sargent, one of the senior authors on a new paper published today in Nature.

Sargent and his team have already developed a number of world-leading catalysts to reduce the energy cost of the reaction that converts CO2 into ethylene and other carbon-based molecules. But even better ones may be out there, and with millions of potential material combinations to choose from, testing them all would be unacceptably time-consuming.

The team showed that machine learning can accelerate the search. Using computer models and theoretical data, algorithms can toss out the worst options and point the way toward more promising candidates.

The idea of using AI to search for clean energy materials was advanced at a 2017 workshop organized by Sargent in collaboration with the Canadian Institute for Advanced Research (CIFAR), and was further elaborated in a Nature commentary article published later that year.

Professor Zachary Ulissi of Carnegie Mellon University was one of the invited researchers at the original workshop. His group specializes in computer modelling of nanomaterials.

"With other chemical reactions, we have large and well-established datasets listing the potential catalyst materials and their properties," says Ulissi.

"With CO2-to-ethylene conversion, we don't have that, so we can't use brute force to model everything. Our group has spent a lot of time thinking about creative ways to find the most interesting materials."

The algorithms created by Ulissi and his team use a combination of machine learning models and active learning strategies to broadly predict what kinds of products a given catalyst is likely to produce, even without detailed modeling of the material itself.

They applied these algorithms for CO2 reduction to screen over 240 different materials, discovering four promising candidates that were predicted to have desirable properties over a very wide range of compositions and surface structures.
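
The general shape of such a screen can be sketched as a surrogate-model active-learning loop: train a cheap model on a handful of expensive evaluations, then spend the next expensive evaluation where the model is least certain. The sketch below is illustrative only; the descriptors, the stand-in scoring function and the uncertainty-based selection rule are our assumptions, not the authors' actual pipeline (which screened real materials with DFT-level calculations):

```python
# Illustrative active-learning screen, NOT the authors' pipeline.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

# Hypothetical descriptors for 240 candidate materials (cf. the study's screen).
X = rng.uniform(size=(240, 3))

def expensive_evaluation(x):
    # Stand-in for a costly DFT calculation of, e.g., an adsorption energy.
    return np.sin(3 * x[0]) + x[1] ** 2 - 0.5 * x[2]

labeled = list(rng.choice(len(X), size=5, replace=False))  # small seed set
y = [expensive_evaluation(X[i]) for i in labeled]

for _ in range(20):
    surrogate = GaussianProcessRegressor(normalize_y=True).fit(X[labeled], y)
    _, std = surrogate.predict(X, return_std=True)
    std[labeled] = -np.inf     # never re-evaluate a known candidate
    nxt = int(np.argmax(std))  # query the most uncertain candidate
    labeled.append(nxt)
    y.append(expensive_evaluation(X[nxt]))

best = labeled[int(np.argmin(y))]
print(f"most promising candidate after 25 evaluations: material #{best}")
```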

In the new paper, the co-authors describe their best-performing catalyst material, an alloy of copper and aluminum. After the two metals were bonded at a high temperature, some of the aluminum was then etched away, resulting in a nanoscale porous structure that Sargent describes as "fluffy."

The new catalyst was then tested in a device called an electrolyzer, where the "faradaic efficiency" -- the proportion of electrical current that goes into making the desired product -- was measured at 80%, a new record for this reaction.
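
Faradaic efficiency follows directly from Faraday's law once you know how many electrons each product molecule consumes; for ethylene the reduction takes 12 electrons per molecule. A minimal sketch with made-up example numbers:

```python
# Faradaic efficiency: the fraction of charge passed that ends up in the
# desired product. Ethylene needs 12 electrons per molecule:
#   2 CO2 + 12 H+ + 12 e- -> C2H4 + 4 H2O
F = 96485.0      # Faraday constant, C per mole of electrons
Z_ETHYLENE = 12  # electrons per ethylene molecule

def faradaic_efficiency(mol_product, charge_c, z=Z_ETHYLENE):
    return mol_product * z * F / charge_c

# e.g. 2.49 mmol of ethylene from 3600 C (1 A for one hour) -> ~80%
print(f"{faradaic_efficiency(2.49e-3, 3600.0):.0%}")
```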

Sargent says the energy cost will need to be lowered still further if the system is to produce ethylene that is cost-competitive with that derived from fossil fuels. Future research will focus on reducing the overall voltage required for the reaction, as well as further reducing the proportion of side products, which are costly to separate.

The new catalyst is the first one for CO2-to-ethylene conversion to have been designed in part through the use of AI. It is also the first experimental demonstration of the active learning approaches Ulissi has been developing. Its strong performance validates the effectiveness of this strategy and bodes well for future collaborations of this nature.

"There are many ways that copper and aluminum can arrange themselves, but what the computations shows is that almost all of them were predicted to be beneficial in some way," says Sargent. "So instead of trying different materials when our first experiments didn't work out, we persisted, because we knew there was something worth investing in."

Credit: 
University of Toronto Faculty of Applied Science & Engineering

A deep look into the gut's hormones

image: A mini-intestine, or organoid, with dramatically increased numbers of enteroendocrine, or hormone-producing, cells. Different hormones are shown in red, purple and green.

Image: 
Joep Beumer, copyright Hubrecht Institute

Researchers from the Hubrecht Institute and Utrecht University generated an in-depth description of the human hormone-producing cells of the gut, in a large collaborative effort with other research teams. The results are presented in the scientific journal Cell on the 13th of May. These cells are hard to study, as they are very rare and differ between animal species. The researchers developed an extensive toolbox to study human hormone-producing cells in tiny versions of the gut grown in the lab, called organoids. These tools allowed them to uncover secrets of the human gut, for example which potential hormones can be made by the gut and how the secretion of these hormones is triggered. These findings offer potential new avenues for the treatment of diseases such as type 2 diabetes and obesity.

Did you ever wonder where that sudden feeling of hunger comes from when your empty stomach rumbles? Thousands of hormone-producing cells, or enteroendocrine cells, scattered throughout your stomach and intestine just released millions of tiny vesicles filled with the hunger hormone ghrelin into your bloodstream.

Intestinal hormones

Hormones such as ghrelin act as the gut's primary method of communication and coordination with other organs such as the pancreas and the brain. In response to food components and bacteria in the gut, different types of enteroendocrine cells produce different hormones. These hormones can induce hunger or satiety, coordinate movement of intestinal muscles and stimulate the repair of the intestine's protective cell layer.

Another effect of these hormones can be to increase the release of insulin from the pancreas, which is especially interesting for patients with type 2 diabetes. These patients are unable to produce sufficient insulin to stabilize their glucose levels on their own. One of the most successful treatments for type 2 diabetes is actually based on a gut hormone called GLP1. With this treatment some patients are able to control their blood glucose without the need for insulin injections.

The gut hormone challenge

Enteroendocrine cells comprise only about 1% of the lining of the gut. To complicate this further, more than 20 different hormones are produced by at least 5 subtypes of these enteroendocrine cells, some of which are rarer still. This means you might have to look at more than 1,000 cells to find an enteroendocrine cell that makes a specific hormone, which makes it very difficult to study how exactly these cells respond to the food in your gut.

Most of our knowledge on enteroendocrine cells is derived from studies in mice. However, mice have a different diet and are therefore likely to sense other signals from their food. The differences are so striking that the counterparts of some human gut hormones do not even exist in mice.

A human gut hormone toolbox

The team in the research group of Hans Clevers at the Hubrecht Institute overcame these issues, the rarity of enteroendocrine cells and the differences between mice and humans, in a creative way. They used intestinal organoids, tiny intestines that they can grow in the lab from human stem cells, and increased the activity of a gene called neurogenin-3. This dramatically increased the number of enteroendocrine cells in these mini-intestines, while still generating all the different subtypes of these cells.

To be able to study all the specific types of enteroendocrine cells, the researchers used another trick that was recently developed in the group of Hans Clevers. Clevers: "In our lab, we have optimized genetic engineering of organoids. We were therefore able to label the hormones that are made by the enteroendocrine cells in different colors and create a biobank of mini-intestines, called the EEC-Tag biobank, in which different hormones are tagged with different colors." When an enteroendocrine cell starts producing a labeled hormone, that cell will appear in the corresponding color. The researchers can use the EEC-Tag biobank to study ten major hormones and different combinations of these hormones within the same organoid.

Joep Beumer (Hubrecht Institute): "Marking all major gut hormones with colors allows us to selectively collect any subset of enteroendocrine cells and study even the rarest enteroendocrine cell types. Combining the EEC-Tag biobank with other cutting-edge techniques allowed us to gain deep insights into the biology of hormone production in the human intestine."

New molecules

For the first time, the researchers described all the genes that are active in the different types of human enteroendocrine cells, using a technique called single-cell sequencing. Based on these data they created an atlas with the deepest insights to date into the different subtypes of human enteroendocrine cells. The atlas revealed the activity of previously undescribed genes pivotal to the function of enteroendocrine cells. For example, new receptors were identified that these cells can use to sense food and release their hormones. Other discoveries include potential hormones that have never before been identified in the gut.

"With the EEC-Tag biobank we can measure hundreds of cells for each enteroendocrine cell subtype. The resulting atlas is a gold mine full of fascinating relationships between hormones, receptors and other genes used by well-defined subsets of enteroendocrine cells, which opens many new directions for future studies," says Jens Puschhof (Hubrecht Institute).

The key characteristic of enteroendocrine cells is the active hormones they secrete. To directly measure these hormones, the researchers collaborated with the group of Wei Wu at Utrecht University. The researchers in this group are specialists in mass spectrometry, a very sensitive method for identifying different molecules. In the collection of molecules produced by the mini-intestines, they found many molecules that were not previously known to be secreted in the intestine. These new molecules may have functions in our bodies' response to food that are so far unknown. This discovery underlines our limited knowledge of the hormones produced in our gut and will inspire more detailed studies into the functions of these molecules.

Wei Wu (Utrecht University): "Gut secretions contain a mix of hormones that can be either active or inactive. For the first time, we characterize this diversity in human mini-intestines, to reveal also if these hormones are processed into active functional pieces. Hormone activation is not determined by genes, but rather by the processing of the hormones afterwards. Therefore, this may also hint at an exciting route of intervention for broad-spectrum applications, such as controlling hunger or treating diabetes."

Large scale screens

With all these data and tools at hand, researchers can now do large scale screens to study which molecules in our food are sensed by which types of enteroendocrine cells, how the enteroendocrine cells actually sense these molecules, and which hormones are produced as a response. Clevers: "The combined ability to make large numbers of all human enteroendocrine cells and track the subtypes using fluorescent colors in organoids makes it possible to perform screens for drug targets that may harness the therapeutic potential of enteroendocrine cells". This work paves the way for further investigation of the largest hormone-producing organ in our body and may eventually lead to treatments for people affected by obesity and diabetes.

Credit: 
Hubrecht Institute

Low-income children and COVID-19

What The Viewpoint Says: Hardships faced by low-income children during the COVID-19 pandemic are discussed, and educational and policy reforms for the future are suggested in this article.

Authors: Danielle G. Dooley, M.D., M.Phil., of the Children's National Hospital in Washington, D.C., is the corresponding author.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

(doi:10.1001/jamapediatrics.2020.2065)

Editor's Note:  Please see the article for additional information, including other authors, author contributions and affiliations, conflicts of interest and financial disclosures, and funding and support.

Credit: 
JAMA Network

The makings of a crystal flipper

video: A crystal of azobenzene showing different patterns of flipping motion depending on the polarity of light. (Kageyama Y. et al., Chemistry - A European Journal. March 19, 2020. DOI: 10.1002/chem.202000701)

Image: 
Kageyama Y. et al., Chemistry - A European Journal. March 19, 2020. DOI: 10.1002/chem.202000701

Insights into a flipping crystal could further research into the development of autonomous robots.

Hokkaido University scientists have fabricated a crystal that autonomously flips back and forth while changing its flipping patterns in response to lighting conditions. Their findings, published in Chemistry - A European Journal, bring scientists closer to understanding how to build molecular robots that can carry out complex tasks.

A multitude of self-controlled functions, such as metabolism, goes on inside our bodies night and day. Scientists want to fabricate materials and molecular architectures that can similarly function on their own.

Hokkaido University physical chemist Yoshiyuki Kageyama and collaborators had previously observed a self-driven oscillating flipping motion in a crystal formed of azobenzene molecules and oleic acid. Azobenzene molecules consist of two rings composed of carbon and hydrogen atoms, connected by a nitrogen-nitrogen double bond. These molecules receive incident light and convert the light energy to mechanical motion, leading to the repetitive flipping motion.

The scientists wanted to better understand what drives this autonomous motion, so they conducted intensive tests on crystals composed only of azobenzene.

They found that the molecules inside the crystals were arranged in alternating sparse and dense layers. The dense layers hold the crystal together and prevent it from decomposing, while the sparse ones enable the photoreaction.

The group also found that the crystal flipped differently, or didn't flip at all, when polarized light -- which oscillates in a single direction -- was applied at different angles. This suggested azobenzene molecules play different roles depending on their position in the crystal: when they receive light, some molecules act as reaction centers that initiate the periodic behavior, while other molecules modulate the motion.

"This autonomous behaviour represents a response to information contained in the energy source, the angle of polarized light in this case, results in a rich variety of motions," says Yoshiyuki Kageyama. "We hope our findings support further research into constructing self-governable molecular robots."

Credit: 
Hokkaido University

Genes of high temperature superconductivity expressed in 3D materials

image: Zinc-blende structure: the anion-cation complex is tetrahedral and the same ions form the face-centered cubic structure.

Image: 
©Science China Press

The problem of high temperature superconductivity (HTSC) is important for the physics community and for science as a whole. Ever since its discovery, many researchers have conducted various studies, but widely accepted solutions are still lacking. Recently, Prof. Jiangping Hu from the Institute of Physics, Chinese Academy of Sciences, proposed a "gene" theory of high temperature superconductivity based on the electronic properties of existing quasi-two-dimensional HTSCs, and used it to search for new families of HTSCs. Following this theory, his group further explored the conditions for expressing the "gene" in highly symmetric three-dimensional structures and predicted cobalt compounds in the zinc-blende structure as a new family of HTSCs.

The work, titled "Unconventional high temperature superconductivity in cubic zinc-blende transition metal compounds," was recently published in SCIENCE CHINA Physics, Mechanics & Astronomy. The researchers used group theory, electronic band structure calculations and the slave-boson mean-field method to analyze the symmetries of electron motion and to predict the properties of the superconductivity. The predicted state is a d-wave superconducting state that spontaneously breaks time-reversal symmetry while maintaining nodes in the diagonal directions.

To be specific, the structure considered in this work is the zinc-blende structure shown in Figure 1. In this structure, the local symmetries are the same as the global symmetries, which preserves the degeneracy of the t2g orbitals belonging to the three-dimensional representation of the Td group. When the electronic configuration is close to d7, these three orbitals are isolated near the Fermi surface, with strong hopping and antiferromagnetic super-exchange in all directions, triggering the expression of the HTSC "gene". The pairing state in this environment is d+id, which breaks time-reversal symmetry. The pairing function vanishes along the bulk diagonal directions of the Brillouin zone, a direct analogue of the quasi-two-dimensional superconductivity in the cuprates. Band structure and mean-field calculations indicate that electron-doped zinc-blende cobalt-oxygen-nitrogen compounds may realize such physics.
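
For illustration, a cubic d+id pairing function of this kind can be written in our own notation (not an equation reproduced from the paper) as a combination of the two degenerate Eg d-wave form factors; both terms vanish simultaneously only along the <111> body diagonals, which produces the nodes described above:

```latex
% Illustrative cubic E_g (d+id) pairing: breaks time-reversal symmetry but
% keeps nodes along the <111> body diagonals, where k_x^2 = k_y^2 = k_z^2.
\Delta(\mathbf{k}) \propto \left(k_x^{2}-k_y^{2}\right)
  + \frac{i}{\sqrt{3}}\left(2k_z^{2}-k_x^{2}-k_y^{2}\right)
```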

This result further enriches the set of HTSC families predicted by the "gene" theory. If verified by future experiments, the "gene" theory would be vindicated and more HTSC materials could be found.

Credit: 
Science China Press