Tech

Strategic interventions in dairy production in developing countries can help meet growing global demand for milk

image: Projected volumes of milk (ECM, energy-corrected milk) that will be produced by region in 2030 and percentage increase over 2017 levels (International Farm Comparison Network, IFCN, 2018). CIS = Commonwealth of Independent States: Bulgaria, Estonia, Latvia, Lithuania, Romania, Russia, Ukraine.

Image: 
Adesogan, A.T. and G.E. Dahl. 2020. MILK Symposium Introduction: Dairy production in developing countries. J. Dairy Sci. 103:9677-9680. https://doi.org/10.3168/jds.2020-18313.

Philadelphia, October 15, 2020 - Low dairy consumption is common in low- and middle-income countries (LMICs); however, with the demand for milk in these countries projected to increase over the next few decades, there is an opportunity to improve the lives of millions of people through the nutritional benefits of dairy products. The Feed the Future Innovation Lab for Livestock Systems hosted the "MILK Symposium: Improving Milk Production, Quality, and Safety in Developing Countries" at the 2019 American Dairy Science Association® Annual Meeting to examine the factors behind low dairy consumption in LMICs and to discuss strategies for addressing them. The Journal of Dairy Science invited speakers to submit articles on topics from the symposium to reach a wider audience.

"Dairy consumption levels are low in LMICs due to low affordability, accessibility, and availability caused by inadequate feeding, management, and genetics; poor transport, cooling, and processing infrastructure; unconducive policy environments; and sociocultural and demographic factors," explained Adegbola Adesogan, PhD, Director of the Feed the Future Innovation Lab for Livestock Systems at the University of Florida, Gainesville, FL, USA. "These papers collectively show how strategic interventions can lead to marked improvements in dairy production in developing countries."

The symposium started by reviewing the importance of dairy foods in the diets of infants, adolescents, pregnant women, adults, and the elderly. It presented current research evidence that consumption of dairy foods does not increase the risk of cardiovascular disease or type 2 diabetes; rather, dairy products offer an important source of nutrients and functional benefits that are particularly valuable at certain life stages.

Animal-source foods provide a high-quality, bioavailable source of protein and micronutrients that can help alleviate child undernutrition. In Nepal, children older than 60 months who consumed milk were taller and had higher weight for their age, and children aged 24 to 60 months had larger head circumferences, a measure used as a proxy for cognitive development.

The symposium highlighted the importance of resources and education to improve the quality and safety of milk in developing countries. It reviewed the causes of foodborne diseases from milk and the health and economic implications, followed by a discussion of educational and technological solutions to improve the quality and safety of milk production.

A technology training package to control mastitis was implemented successfully on dairy farms in Nepal with outcomes that suggested scaling the training across smallholder farms beyond Nepal. Training-of-trainers workshops based on needs assessments were developed in Rwanda and Nepal to help improve productivity, quality, and safety of milk. In southern Ethiopia, an intervention was designed to improve the hygiene and handling of milk that resulted in an overall increase in knowledge of best practices of the participants.

The final presenter emphasized the sustainability and environmental impact of dairy production in low-income countries. Sustainable intensification is an important strategy to address food security and climate change simultaneously. Improving genetic potential, balanced animal nutrition, and quality of feed are all promising strategies.

"The growing demand for dairy products in LMICs presents a tremendous opportunity," Adesogan said. "These papers will ultimately contribute to meeting the growing global demand for milk and to achievement of the United Nations Sustainable Development Goals related to alleviation of hunger and poverty, improvement of education and employment, and environmental stewardship."

Credit: 
Elsevier

Does science have a plastic problem? Microbiologists take steps to reduce plastic waste

Led by Dr Amy Pickering and Dr Joana Alves, the lab replaced single-use plastics with reusable equipment. Where alternatives were not available, the group decontaminated and re-used plastic equipment that would usually have been thrown away after one use. "We knew that we were using plastic daily in our research, but it wasn't until we took the time to quantify the waste that the volumes being used really hit home. That really emphasized the need for us to introduce plastic-reducing measures," said Dr Pickering.

The lab developed a new scheme which focused on sustainability, moving away from the use of single-use plastics wherever possible. In some cases, the research group would use reusable wooden or metal items instead of plastic. If there were no alternatives, the group focused on reusing plastic equipment by chemically decontaminating the plastic tubes before a second level of decontamination under heat and pressure - known as autoclaving.

To determine the success of the scheme, the lab of seven researchers spent four weeks documenting the plastic waste produced under regular conditions. They then measured the amount of waste produced over the next seven weeks under the new processes to reduce the consumption of single-use plastic. "Once the measures were in place it was quickly clear that large impacts were being seen. The most surprising thing for us was how resilient some plastics are to being autoclaved and therefore how many times they can be re-used. This means that we were able to save more plastic than we originally anticipated," said Dr Pickering. By implementing these replace-and-reuse practices, the lab saved 1670 tubes and 1300 loops during a four-week period, a 43-kilogram reduction in waste.

The typical microbiology laboratory uses mostly disposable plastic, which is often not recycled due to biological contamination. In 2014, 5.5 million tonnes of plastic waste were generated in research laboratories worldwide. Because the Edinburgh lab works with dangerous disease-causing bacteria, the risk of contamination means its waste must be autoclaved and incinerated at a high environmental and monetary cost.

Practices to reduce plastic waste in research labs are becoming increasingly popular in the UK, with researchers from the University of York decontaminating and re-using plastic flasks and researchers from a chemistry lab in Edinburgh recycling 1 million plastic gloves in 2019. "It's important to take some time identifying what plastic items you are using the most. This will allow you to identify both the easy wins, such as replacing plastic inoculation loops with re-useable metal ones, as well as the bigger tasks, such as re-using plastic tubes. That will help you to bring others on board and build momentum," said Dr Pickering.

The new protocols not only prevent plastic waste but also save money, according to Dr Pickering: "Over a 3-month period of implementing the protocols we will have saved over £400 of plastic tubes, inoculation loops, and cuvettes," she said.

The full details of the lab's new waste-reducing protocols are free to read in Access Microbiology.

Credit: 
Microbiology Society

Golden meat: Engineering cow cells to produce beta carotene

A group of researchers at Tufts University have genetically engineered cow muscle cells to produce plant nutrients not natively found in beef cells. Using the same carotenoid pathway exploited in golden rice, they coaxed bovine cells into producing beta carotene—a provitamin usually found in carrots and tomatoes.

In doing so, they demonstrated that cell-cultured meat might be able to surpass the nutritional profile of conventionally farmed meat.

"Cows don't have any of the genes for producing beta carotene," said Andrew Stout, lead author of the study and biomedical engineering PhD student at Tufts University. "We engineered cow muscle cells to produce this and other phytonutrients, which in turn allows us to impart those nutritional benefits directly onto a cultured meat product in a way that is likely infeasible through animal transgenics and conventional meat production."

These findings, published in the journal Metabolic Engineering, are proof of principle for using genetic engineering and cellular agriculture to create novel foods. Rather than simply mimicking meat currently found in the grocery store, cell-cultured meat products are capable of assuming different shapes, textures, nutritional profiles, and bioactivities.

One such feature is carcinogenicity, or rather, the lack thereof.

"We saw a reduction in lipid oxidation levels when we cooked a small pellet of these cells when they were expressing and producing this beta carotene," said Stout. "Because that lipid oxidation is one of the key mechanistic proposals for red and processed meats' link to diseases such as colorectal cancer, I think that there is a pretty compelling argument to be made that this could potentially reduce that risk."

Nutritionally enhancing cultured meat products might give the burgeoning cellular agriculture industry the leg up it needs to compete with conventional meat. Although cultured meat producers have exponentially lowered the cost of production over the last few years, the technology faces an uphill battle in competing with a heavily subsidized status quo.

"It will likely be challenging for cultured meat to be competitively competitively priced with factory farmed meat right out of the gate," said David Kaplan, Stern Family Professor of Engineering at the Tufts University School of Engineering and corresponding author of the study. "A value-added product which provides consumers with added health benefits may make them more willing to pay for a cultured meat product."

Credit: 
New Harvest

Plant genetic engineering to fight 'hidden hunger'

image: Poor people's diets are often dominated by staple foods. Rice cultivation in Indonesia.

Image: 
M Qaim

More than two billion people worldwide suffer from micronutrient malnutrition due to deficiencies in minerals and vitamins. Poor people in developing countries are most affected, as their diets are typically dominated by starchy staple foods, which are inexpensive sources of calories but contain low amounts of micronutrients. In a new Perspective article, an international team of scientists, involving the University of Göttingen, explains how plant genetic engineering can help to sustainably address micronutrient malnutrition. The article was published in Nature Communications.

Micronutrient malnutrition leads to severe health problems. For instance, vitamin A and zinc deficiency are leading risk factors for child mortality. Iron and folate deficiency contribute to anemia and physical and cognitive development problems. Often, the people affected are not aware of their nutritional deficiencies, which is why the term 'hidden hunger' is also used. The long-term goal is that all people are aware of healthy nutrition and have sufficient income to afford a balanced diet all year round. However, more targeted interventions are required in the short and medium term.

One intervention is to breed staple food crops for higher micronutrient contents, also known as 'biofortification'. Over the last 20 years, international agricultural research centres have developed biofortified crops using conventional breeding methods, including sweet potato and maize with vitamin A, as well as wheat and rice with higher zinc content. These crops were successfully released in various developing countries with proven nutrition and health benefits. However, conventional breeding approaches have certain limitations.

In the Perspective article, the scientists report how genetic engineering can help to further enhance the benefits of biofortified crops. "Transgenic approaches allow us to achieve much higher micronutrient levels in crops than conventional methods alone, thus increasing the nutritional efficacy. We demonstrated this for folates in rice and potatoes," says Professor Dominique Van Der Straeten from Ghent University, the article's lead author. "We also managed to reduce post-harvest vitamin losses significantly," she adds.

Another advantage of genetic engineering is that high amounts of several micronutrients can be combined in the same crop. "This is very important, as poor people often suffer from multiple micronutrient deficiencies," says co-lead author and 2016 World Food Prize Laureate Dr Howarth Bouis from the International Food Policy Research Institute.

Genetic engineering can also help to combine micronutrient traits with productivity-enhancing agronomic traits, such as drought tolerance and pest resistance, which are becoming ever more relevant with climate change. "Farmers should not have to make difficult choices between crops that either improve nutrition or allow productive and stable harvests. They need both aspects combined, which will also support widespread adoption," says co-author Professor Matin Qaim from the University of Göttingen.

The authors acknowledge that genetic engineering is viewed skeptically by many, despite the fact that the resulting crops have been shown to be safe for human consumption and the environment. One of the reasons for the public's reservations is that genetic engineering is often associated with large multinational companies. "Biofortified crops may possibly reduce some of the concerns, as these crops are developed for humanitarian purposes," state the authors. "Public funding is key to broader acceptance."

Credit: 
University of Göttingen

Stressed out volcanoes more likely to collapse and erupt, study finds

An international study led by Monash scientists has discovered how volcanoes experience stress. The study, published today in Scientific Reports, has implications for how the world might be better protected against future volcano collapses.

Volcanic collapse is the worst-case scenario during volcanic crises. It can trigger dangerous tsunamis or devastating pyroclastic flows (for example Mount Saint Helens).

"But, these events are very difficult to predict because we often don't know what is happening inside active volcanoes, and what forces might make them unstable," said lead study author Dr Sam Thiele, a recent PhD graduate from the Monash University School of Earth, Atmosphere and Environment.

"Research on volcano growth helps us to understand these internal processes and the associated forces that could trigger a deadly collapse or eruption," he said.

The research team used drones to create a centimetre-resolution map of the internal structure of a now dormant volcano on La Palma in the Canary Islands and measured the widths of hundreds of thousands of cracks through which magma flowed during past eruptions.

This allowed them to estimate the forces acting within the volcano, and show that these slowly build up over time, causing the volcano to become 'stressed' and potentially unstable.

By measuring the widths of the cracks through which magma was transported, they were able to estimate the forces involved, which helps in forecasting future volcanic eruptions.

The geological features that the research team mapped are formed when molten intrusions, called dykes, solidify to form a framework inside what is otherwise a comparatively weak structure comprising mostly layers of lava and ash.

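To give a sense of how measured crack (dyke) widths constrain internal forces, the relation below is a standard plane-strain result for a fluid-pressurized crack; it is a textbook illustration, not necessarily the exact formulation used in the study, and the numbers in the comment are purely illustrative.

```latex
% Plane-strain opening of a fluid-pressurized crack (textbook relation, for illustration):
% w_max : maximum dyke thickness, 2a : dyke length, E : host-rock Young's modulus,
% \nu : Poisson's ratio, \Delta P : magma overpressure driving the opening.
w_{\max} = \frac{4\,(1-\nu^{2})\,\Delta P\, a}{E}
\quad\Longrightarrow\quad
\Delta P = \frac{E\, w_{\max}}{4\,(1-\nu^{2})\, a}
% Illustrative numbers: w_max = 1 m, a = 500 m, E = 3 GPa, \nu = 0.25
% give \Delta P \approx 1.6 MPa.
```
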
"This is one of the first studies to look at the long-term effects of magma movement within a volcano," said study co-author Professor Sandy Cruden, from the Monash University School of Earth, Atmosphere and Environment.

"We found that volcanoes gradually become 'stressed' by repeated movement of this magma, potentially destabilising the whole volcano, influencing future collapses and eruptions," he said.

Credit: 
Monash University

Reelin-Nrp1 interaction regulates neocortical dendrite development

A research group led by Takao Kohno and Mitsuharu Hattori (Nagoya City University), together with Takahiko Kawasaki (National Institute of Genetics) and Kazunori Nakajima (Keio University School of Medicine), found a new mechanism by which superficial-layer neurons correctly develop their dendrites. The results of this research were published in The Journal of Neuroscience, a journal of the Society for Neuroscience.

The mammalian neocortex has an orderly and beautiful six-layer structure. Neurons in each layer develop dendrites and form normal networks. Dendritic abnormalities have recently been reported in patients with psychiatric disorders such as schizophrenia and autism, so understanding how dendrites normally form is important for understanding those disorders. A large secretory glycoprotein called "Reelin" is essential for brain formation. Reelin plays various roles depending on the developmental stage, but its specific mechanisms have not been fully understood. The research group previously reported that the C-terminal region of Reelin, which is highly conserved among species, is required for dendrite development of superficial-layer neurons in the neocortex (Kohno et al., 2015). Here, Dr. Kohno and his colleagues identified neuropilin-1 (Nrp1) as a novel Reelin receptor. Reelin was previously found to undergo cleavage six amino acids from the C-terminus (Kohno et al., 2015); interestingly, the cleaved Reelin did not bind to Nrp1, indicating that these six amino acid residues (0.17% of the total) regulate the binding between Nrp1 and Reelin.

In the mouse neocortex, Nrp1 and VLDLR (very-low-density lipoprotein receptor, a known Reelin receptor) were co-expressed in superficial-layer neurons and formed a protein complex. Furthermore, Nrp1 enhanced the binding between Reelin and VLDLR, and the Reelin-Nrp1 interaction was necessary for apical dendrite development in superficial-layer neurons. These results suggest that a new mechanism mediated by the Reelin-Nrp1 interaction regulates superficial-layer formation.

Recently, it has become clear that dysfunction of Reelin is associated with the risk and onset of neuropsychiatric disorders. The research group has also reported that subtle structural abnormalities in the brain are a risk factor for neuropsychiatric disorder-like behavior (Sakai et al., 2016). These findings may contribute to the understanding of those disorders and to the development of therapies for them.

Credit: 
Nagoya City University

A new ultrafast control scheme of ferromagnet for energy-efficient data storage

image: A schematic illustration of the demonstrated ultrafast and energy efficient switching of ferromagnet driven by a single femtosecond laser pulse. The laser pulse demagnetizes the ferrimagnetic layer and generates a spin current, which travels through the nonmagnet and finally induces the switching of the ferromagnet. The lower image shows an observed magneto-optical Kerr effect micrograph showing the switching of the ferromagnetic layer.

Image: 
Shunsuke Fukami and Stéphane Mangin

The digital data generated around the world every year is now counted in zettabytes, or trillions of billions of bytes - equivalent to delivering data for hundreds of millions of books every second. The amount of data generated continues to grow. If existing technologies remained unchanged, data storage alone would consume all of today's global electricity by 2040.

Researchers at the Université de Lorraine in France and Tohoku University reported on an innovative technology that leads to a drastic reduction in energy for data storage.

The demonstrated technology utilizes an ultrafast laser pulse with a duration as short as 30 femtoseconds - equal to 0.00000000000003 seconds. The laser pulse is applied to a heterostructure consisting of ferrimagnetic GdFeCo, nonmagnetic Cu and ferromagnetic Co/Pt layers.

"Previous research, conducted by a subset of the current research group, observed magnetic switching of the ferromagnetic layer after the ferrimagnetic layer had been switched." This time, the researchers uncovered the mechanism accounting for this peculiar phenomena and found that a flow of electron spin, referred to as a spin current, accompanying the switching of ferrimagnetic GeFeCo plays a crucial role in inducing the switching of ferromagnetic Co/Pt (Fig. 1).

Based on this insight, they demonstrated much faster and less energy-consuming switching of the ferromagnet, driven by a single laser pulse without switching the ferrimagnetic layer. "This is very good news for future data-storage applications, as this technology can provide an efficient scheme to write digital information to a magnetic medium, which is currently based on magnetic-field-induced switching," says Shunsuke Fukami, co-author of the study.

Credit: 
Tohoku University

Concerns about violence increase in California amid COVID-19 pandemic

The COVID-19 pandemic has been linked to an estimated 110,000 firearm purchases in California and increases in individuals' worries about violence, according to a new study by the UC Davis Violence Prevention Program (VPRP). The study looked at the intersection of the coronavirus pandemic and violence-related harms in the state.

"We believe this is the first study using a representative sample of state residents to assess the near-term effects of the pandemic on individual perceptions, motivations and behaviors related to violence and firearm ownership," said Nicole Kravitz-Wirtz, an assistant professor with VPRP who led the study. "We wanted to capture individuals' lived experiences of violence in the context of the pandemic, along with information on pandemic-induced firearm acquisition and changes in firearm storage practices."

The coronavirus pandemic worsened many of the underlying conditions contributing to violence and its consequences, including poverty, unemployment, lack of resources, isolation, hopelessness and loss. These risks are compounded by a recently documented surge in firearm purchasing in the U.S., a risk factor for firearm-related injury and death.

Violence during the COVID-19 pandemic

The researchers used data from the 2020 California Safety and Wellbeing Survey (CSaWS). CSaWS is an ongoing statewide survey on firearm ownership and exposure to violence and its consequences in California. In July 2020, the researchers collected data from 2,870 adult California residents. The responses were weighted to establish estimates that are statistically representative of the adult population of the state.

They found that respondents' worry about violence happening to them significantly increased during the pandemic compared with before. This worry included multiple types of violence (such as robbery, assault, homicide, police violence, suicide and unintentional firearm injury) but did not extend to mass shootings.

The study also showed that more than one in 10 respondents -- representing an estimated four million California adults -- were concerned that someone they know might physically harm themselves on purpose. For some, this concern was because the other person had suffered a major loss due to the pandemic, such as losing a loved one, job or housing.

The researchers also estimated that 110,000 California adults had acquired a firearm in response to the pandemic, including 47,000 new firearm owners. Previous spikes in firearm purchasing have been associated with increases in firearm violence, and recent evidence suggests a similar relationship exists during the pandemic. The respondents who bought firearms mainly did so for self-protection, citing worries about lawlessness (76%), prisoner releases (56%) and the government going too far (49%).

The researchers indicated that approximately 55,000 California firearm owners who currently store at least one firearm loaded and not locked up had adopted this unsecured storage practice in response to the pandemic. Of those, approximately half lived in households with children or teens.

"Our findings add support to public health-oriented strategies designed to address the enduring psychological trauma associated with direct and indirect exposure to violence, as well as the underlying social and structural factors that contribute to violence-related harms," Kravitz-Wirtz said.

The researchers pointed to the potential value of short-term crisis interventions for reducing violence-related harm during the pandemic and following other acute societal shocks. These include temporary firearm storage outside the home, extreme risk protection orders and efforts involving community-based violence intervention workers.

Credit: 
University of California - Davis Health

Pandemic lockdowns caused steep and lasting carbon dioxide decline

Irvine, Calif., Oct. 14, 2020 -- An international team of climate experts, including Earth system scientists at the University of California, Irvine, today released an assessment of carbon dioxide emissions by industry, transportation and other sectors from January through June, showing that this year's pandemic lockdowns resulted in a 9 percent decline from 2019 levels.

The measurements, included in a study published today in Nature Communications and made available on the recently launched Carbon Monitor website, are an amendment to the 17 percent drop reported in another paper more than three months ago. By including estimates of day-by-day, sector-specific and country-level differences in CO2 emissions derived from frequently updated data sources - many on a near-real-time basis - the new study provides a clearer picture of the atmospheric impact of 2020's pandemic lockdowns.

"We were able to track the cascading effects of COVID-19-related disruptions of human activities from China in February to the United States and Europe in March through May," said co-author Steve Davis, UCI professor or Earth system science. "We've also been able to see the resumption of emissions in many regions, such as in China, where they're now back up to pre-pandemic levels."

He said that emissions in the Americas and Europe have been slower to recover, especially in the U.S., where COVID-19 hot spots continued to emerge throughout the summer and into the fall.

The drop in carbon emissions has been due mainly to transportation, the researchers found, as fewer people are driving to work and traveling by air. Ground transportation emissions fell by 19.2 percent from 2019 to 2020, and pollution from aviation plummeted by 37.4 percent over the same period.

The study authors said that even though lockdowns were easing around the world in late spring, transportation emissions in June were still 16 percent less than a year earlier, "indicating that human mobility and habits remain durably affected."

People still continued to use electricity despite the coronavirus lockdowns, so carbon emissions from the power sector declined only about 6 percent, while industry-connected emissions slipped by 6.5 percent.

Regionally, the largest decline took place in the U.S., which spewed 530 million fewer metric tons of CO2 during the evaluation period, a 13 percent drop. China's emissions fell by 227 million metric tons, or 5.4 percent, and India's fell by 177 million metric tons, a nearly 16 percent reduction.

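To illustrate the kind of bookkeeping behind figures like these, here is a minimal, hypothetical sketch (not the Carbon Monitor code) that aggregates daily, sector- and country-level emission estimates over January-June and computes year-over-year changes; the file name and column names are assumptions.

```python
import pandas as pd

# Hypothetical daily emissions table (assumed columns): one row per country,
# sector and date, with emissions in million metric tons of CO2 ('mtco2').
df = pd.read_csv("daily_emissions.csv", parse_dates=["date"])

jan_jun = df[df["date"].dt.month <= 6]
year = jan_jun["date"].dt.year.rename("year")

# January-June totals per country and sector, one column per year
totals = (jan_jun.groupby(["country", "sector", year])["mtco2"]
          .sum()
          .unstack("year"))

totals["abs_change"] = totals[2020] - totals[2019]                 # e.g. -530 Mt
totals["pct_change"] = 100 * totals["abs_change"] / totals[2019]   # e.g. -13 %
print(totals.sort_values("abs_change").head())
```
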
Davis said that he and his colleagues have been able to get a clearer picture of the release of carbon dioxide around the world by accessing more up-to-date data from an expanded list of sources.

"It's a big deal that our method allows us to monitor emissions on a practically daily basis. We can even see the effects of weekends and holidays for the first time," he said. "And going forward, this will be a powerful tool for tracking the greenness of the COVID-19 economic recovery initiatives and other efforts to reduce emissions."

Credit: 
University of California - Irvine

Results of an individual patient data pooled analysis reported at TCT Connect

NEW YORK - October 14, 2020 - An individual patient data pooled analysis comparing the use of bivalirudin versus heparin in heart attack patients undergoing percutaneous coronary intervention (PCI) found that bivalirudin use was associated with similar overall rates of 30-day mortality across all heart attack patients, but lower rates of serious bleeding events. Moreover, mortality was reduced in patients with ST-segment elevation myocardial infarction (STEMI) who were treated with a post-PCI bivalirudin infusion.

Findings were reported today at TCT Connect, the 32nd annual scientific symposium of the Cardiovascular Research Foundation (CRF). TCT is the world's premier educational meeting specializing in interventional cardiovascular medicine.

Numerous randomized trials have examined the outcomes of anticoagulation with bivalirudin vs. heparin in patients undergoing PCI. These studies have reported conflicting results given varying patient populations, study designs (including access site, use of GPIIb/IIIa inhibitors with heparin and varying regimens of post-PCI bivalirudin infusions), sample size, endpoints, and follow-up durations. Study-level meta-analyses have been unable to address these limitations, nor can they evaluate events over time, perform multivariable adjustment, or examine outcomes in important subgroups.

In this analysis, researchers pooled the individual patient data from all eight randomized clinical trials of bivalirudin vs. heparin in patients with myocardial infarction (MI) (STEMI, or non-STEMI [NSTEMI]) undergoing PCI that enrolled 1,000 or more patients: MATRIX, VALIDATE-SWEDEHEART, EUROMAX, BRIGHT, HEAT-PPCI, ISAR-REACT 4, ACUITY, and HORIZONS-AMI. The final study cohort included 27,409 patients (13,346 randomized to bivalirudin and 14,063 randomized to heparin); 15,254 had STEMI and 12,155 had NSTEMI.

The pre-specified primary effectiveness endpoint was the 30-day risk of all-cause mortality and the primary safety endpoint was the 30-day risk of serious bleeding (TIMI major or minor if available; alternatively, BARC type 3 or 5).

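As a rough illustration (not the investigators' code) of how a one-stage pooled individual patient data analysis of the 30-day mortality endpoint described above might be set up, the sketch below fits a Cox model stratified by trial; the file name and column names are assumptions.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical pooled patient-level data (assumed columns): 'trial',
# 'bivalirudin' (1 = bivalirudin, 0 = heparin), 'time_days' (follow-up),
# 'death' (1 = all-cause death observed).
df = pd.read_csv("pooled_ipd.csv")

# Restrict the endpoint to 30 days: censor follow-up and events beyond day 30.
df["time_30"] = df["time_days"].clip(upper=30)
df["death_30"] = ((df["death"] == 1) & (df["time_days"] <= 30)).astype(int)

# One-stage IPD analysis: stratifying by trial lets each trial keep its own
# baseline hazard while the treatment effect (hazard ratio) is pooled.
cph = CoxPHFitter()
cph.fit(df[["time_30", "death_30", "bivalirudin", "trial"]],
        duration_col="time_30", event_col="death_30", strata=["trial"])
cph.print_summary()   # hazard ratio and 95% CI for bivalirudin vs. heparin
```
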
Overall, bivalirudin was associated with similar rates of 30-day mortality (1.9% vs. 2.1%; HR 0.91, 95% CI 0.75-1.10) and lower rates of serious bleeding (3.4% vs. 5.7%; HR 0.60, 95% CI 0.52-0.68). Further analyses were performed stratified by presentation (STEMI or NSTEMI). In STEMI patients, all-cause mortality was lower with bivalirudin use compared to heparin use (2.5% vs. 2.9%; HR 0.80, 95% CI 0.64-1.01). The mortality rates were substantially reduced with bivalirudin when a post-PCI bivalirudin infusion was used (HR 0.67, 95% CI 0.50-0.89). Serious bleeding was also lower with bivalirudin use (3.5% vs. 6.0%; HR 0.57, 95% CI 0.47-0.68). In NSTEMI patients, bivalirudin use and heparin use had similar rates of mortality (1.2% vs. 1.1%; HR 1.21, 95% CI 0.84-1.73), although bivalirudin use was also associated with lower rates of serious bleeding (3.3% vs. 5.3%; HR 0.63, 95% CI 0.52-0.76).

"This individual patient data pooled analysis aimed to determine the optimal anticoagulant to be used during PCI in patients with AMI," said Gregg W. Stone, MD. Dr. Stone is Director of Academic Affairs, Mount Sinai Heart Health System and Professor of Medicine at The Zena and Michael A. Wiener Cardiovascular Institute of the Icahn School of Medicine at Mount Sinai. "In patients with STEMI undergoing primary PCI, bivalirudin use was associated with reductions in the 30-day rates of mortality, serious bleeding and NACE, despite increased rates of MI and stent thrombosis compared with heparin. The mortality benefit of bivalirudin in STEMI was pronounced in patients treated with a post-PCI bivalirudin infusion (low-dose or high-dose); a high-dose infusion mitigated the MI and stent thrombosis risk."

"In patients with NSTEMI undergoing PCI, bivalirudin use was associated with a reduction in the 30-day rate of serious bleeding but similar rates of mortality, MI, and stent thrombosis compared with heparin," Dr. Stone added.

Credit: 
Cardiovascular Research Foundation

Turning excess noise into signal

image: a, The excess noise measured by a spectrometer is the superposition integral of the input excess noise and the spectrometer impulse response, which describes the spectral resolution. Excess noise correlations thus provide a simple way to assess spectral resolution of spectrometers, with appropriate assumptions. b, Visible light OCT spectrometers, aligned with the aid of excess noise correlations, show nearly uniform axial resolution versus imaging depth. The human imaging system shows only 3.4 decibel (dB) sensitivity rolloff over the first millimeter of imaging depth in air. Resulting mouse (c) and human (d) retinal images show fine outer retinal details, while revealing a very subtle outer retinal band (red arrow) in the mouse (c).

Image: 
by Aaron M. Kho, Tingwei Zhang, Jun Zhu, Conrad W. Merkle, Vivek J. Srinivasan

During the early 1800s, the first spectrometer was developed by spreading different colors of sunlight onto a screen. It was later noticed that certain dark bands in the solar spectrum, known as Fraunhofer lines, are associated with chemical elements in the solar atmosphere. Since then, scientists have applied spectroscopy to many different fields including astronomy, medicine, agriculture, chemistry, and security.

Typically composed of a diffractive element (such as a prism), a focusing element (such as a lens), and a detector, spectrometers measure emitted light intensity on a wavelength-by-wavelength basis to retrieve information about an object of interest. In a common design, the detector is a camera with pixels that measure different parts of the spectrum at the same time. Ideally, the ability to distinguish fine wavelength features, known as the spectral resolution, needs to be optimized. However, current approaches to characterize spectral resolution thoroughly require expensive additional components or intensive manual labor.

In a new paper published in Light: Science & Applications, a team of scientists led by Aaron Kho from the Department of Biomedical Engineering, University of California Davis, United States, has developed a simple method to comprehensively assess spectrometer performance within seconds using only incoherent excess noise, i.e., fluctuations of light in excess of fundamental shot noise.

"This application of excess noise was originally inspired by speckle. Also initially viewed as a nuisance after the invention of the laser, speckle was later discovered to be useful for measuring blood flow, particle size, and the diffraction limit. Similarly, we found that excess noise can be useful too," Mr. Kho said.

Here, Kho and colleagues propose that incoherent excess noise endows broadband light with a high-resolution spectral encoding. With this insight, they devise and demonstrate an excess noise approach for rapidly characterizing the spectral resolution of broadband spectrometers. They also demonstrate a related excess noise approach, applied to two different spectrometers, that creates a precise one-to-one mapping between pixels that detect the same wavelength; the scientists call this procedure cross-calibration. Remarkably, both approaches are actually strengthened by incoherent excess noise, which degrades performance in most other photonics applications.

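As a toy illustration of the correlation idea described above (a simulation sketch, not the authors' method or code), the following generates spectrally uncorrelated excess noise, blurs it with an assumed Gaussian spectrometer impulse response, and recovers the resolution from inter-pixel noise correlations; all parameter values are assumptions.

```python
import numpy as np

# Toy simulation: delta-correlated excess noise blurred by a Gaussian impulse
# response produces inter-pixel noise correlations whose width tracks the
# spectral resolution.  All numbers below are illustrative assumptions.
rng = np.random.default_rng(0)
n_pixels, n_spectra = 512, 2000
true_fwhm = 3.0                                   # assumed resolution, in pixels
sigma = true_fwhm / 2.355
kernel = np.exp(-0.5 * (np.arange(-15, 16) / sigma) ** 2)
kernel /= kernel.sum()

# Repeated acquisitions of spectrally uncorrelated excess noise, blurred per spectrum
raw = rng.normal(size=(n_spectra, n_pixels))
measured = np.apply_along_axis(lambda s: np.convolve(s, kernel, mode="same"), 1, raw)

# Correlate one pixel's fluctuations with its neighbours
fluct = measured - measured.mean(axis=0)
ref, lags = n_pixels // 2, np.arange(-8, 9)
corr = np.array([np.corrcoef(fluct[:, ref], fluct[:, ref + k])[0, 1] for k in lags])

# For a Gaussian impulse response the correlation peak is sqrt(2) wider than the
# impulse response itself, so its second moment roughly recovers the resolution.
w = np.clip(corr, 0, None)
est_fwhm = 2.355 * np.sqrt((w * lags**2).sum() / w.sum()) / np.sqrt(2)
print(f"assumed FWHM: {true_fwhm:.1f} px, estimated from noise: {est_fwhm:.1f} px")
```
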
A major biomedical application of spectrometers is spectral domain Optical Coherence Tomography (OCT), an imaging modality in which the spectrometer measures an interference pattern to produce fine depth-resolved retinal images of living subjects. To date, spectrometer performance has limited the development of OCT at visible wavelengths. The authors demonstrate the utility of their excess noise characterization approach by employing it to provide feedback during visible light OCT spectrometer alignment. Achieving improved spectral resolution uniformity across a broad wavelength range, they demonstrate high-quality visible light OCT imaging of the human and mouse retinas. The improved uniformity of depth resolution and sensitivity helps to reveal a new band in the mouse photoreceptor layer.

"The proposed characterization and cross-calibration approaches will improve the rigor and reproducibility of data in the many fields that use spectrometers. The approaches seem to work with both superluminescent diodes and some supercontinuum light sources, which are widespread," said Vivek J. Srinivasan, Associate Professor of Biomedical Engineering at University of California Davis and senior author on the study.

He continued to forecast: "The idea that the spectral encoding provided by excess noise can serve a useful purpose is also important. This insight could aid in the discovery of other applications where excess noise is similarly useful."

Credit: 
Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS

When Fock meets Landau: Topology in atom-photon interactions

image: a. The Fock-state lattice (FSL) with site-varying coupling strengths. b. Energy spectrum of the FSL.

Image: 
©Science China Press

Since the discovery of the quantum Hall effect, topological phases of electrons have become a major research area in condensed matter physics. Many topological phases are predicted in lattices with specific engineering of electronic hopping between lattice sites. Unfortunately, the distance between neighboring sites in natural lattices (crystals) is on the order of a billionth of a meter, which makes such engineering extremely difficult. Photonic crystals, on the other hand, have a much larger scale: their unit cells for visible light are several thousand times larger than those of electronic lattices. It is therefore not surprising that researchers have pursued photonic analogues of topological phases by exploiting the similarity between the Maxwell and Schrödinger equations, and a research area named topological photonics has flourished.

However, photons and electrons are as different as dogs and cats. Photons are social by nature; they love to stay together (this is why we have lasers). Electrons hate each other; they keep to their own territories according to the Pauli exclusion principle. Topological photonics based on the analogy between the Maxwell and Schrödinger equations belongs to classical optics, i.e., it is a classical-wave simulation of electronic band topology. It is natural to ask whether quantized light embeds new topological phases beyond the interpretation of classical optics. Recently, Han Cai and Da-Wei Wang from Zhejiang University revealed topological phases in lattices of quantized states of light.

The energy of light can only exist in discrete packets of (n + 1/2)hν, where h is the Planck constant, ν is the frequency of the light, and n is a non-negative integer. The integer n is the number of photons in that state, which is called a Fock state, and the one half is contributed by vacuum fluctuations. This discreteness of light energy is the key to explaining the spectrum of black-body radiation (e.g., in a furnace, a higher temperature shifts the spectrum toward the blue side of a rainbow strip). Light quantization also has profound consequences for atom-photon interactions. When there are n photons in the light field, the probability for an excited atom to emit another photon is proportional to n+1 (remember that photons are social and love new members to join in). When light is confined in a cavity, the energy emitted by the atom can be reabsorbed, which results in an oscillation of the atom between the excited and ground states, with an oscillation frequency proportional to the square root of n+1. A spectrum of these discrete oscillation frequencies can be observed when the atom is coupled with light in a superposition of Fock states, i.e., in the Jaynes-Cummings (JC) model, which has become a standard method for characterizing the quantum states of light.

It is not obvious that the JC model is related to topological phases, but this square-root-of-integer scaling of the energy spectrum is reminiscent of the Landau levels of electrons in graphene, which is a cradle of topological phases. The energy bands of electrons in graphene touch at two points on the edge of the Brillouin zone, named the Dirac points, where the electrons obey the two-dimensional Dirac equation and have a linear relation between energy and momentum. When a magnetic field is applied, the electrons undergo cyclotron motion with discrete frequencies that scale with the square root of integers, corresponding to discrete Landau levels. Cai and Wang established the connection between the three-mode JC model and Dirac electrons in a magnetic field.

In a three-mode JC model, where an atom is coupled to three cavity modes, the quantum states can be fully described by four integers (x, y, z, q), where x, y and z are the photon numbers in the three cavity modes, and q = 0 or 1 for the ground and excited states of the atom. In the JC model, all the (N+1)^2 states satisfying x+y+z+q=N form a honeycomb lattice (see Fig. a), similar to graphene, which the authors call the Fock-state lattice. Since the excited atom can emit a photon into one of the cavity modes, the state (x, y, z, 1) is coupled to three neighboring states, (x+1, y, z, 0), (x, y+1, z, 0) and (x, y, z+1, 0). However, the coupling strengths to the three cavity modes are proportional to the square root of their photon numbers. For each state (x, y, z, 1) there is a competition between the three cavities to obtain the photon emitted by the atom, and the cavities that contain more photons have an advantage, which can be understood as the majority principle of photons. This is equivalent to graphene subjected to a strain that modifies the hopping coefficients of electrons from one site to its three neighbors. It turns out that when the coupling strength between the most populous cavity mode and the atom is larger than the sum of those of the other two modes, the two Dirac points merge and a band gap opens, which is a Lifshitz topological transition between a semimetal and a band insulator. In the semimetallic phase, the variation of the coupling strength is equivalent to a strain field that induces an effective magnetic field and leads to quantized Landau levels (see Fig. b), based on which the authors investigated the valley Hall effect and built a Haldane model in the three-mode JC model.

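To make the construction above concrete, here is a minimal Python sketch (not from the paper) that builds the Fock-state-lattice Hamiltonian of the three-mode JC model for a fixed N and diagonalizes it; the function name, the choice N = 30 and the equal coupling ratios g are illustrative assumptions.

```python
import numpy as np

# Basis states (x, y, z, q) with x + y + z + q = N; the excited-atom state
# (x, y, z, 1) couples to (x+1, y, z, 0), (x, y+1, z, 0) and (x, y, z+1, 0)
# with strengths g_i * sqrt(photon number in the receiving mode).
def jc3_spectrum(N, g=(1.0, 1.0, 1.0)):
    states = [(x, y, z, q)
              for q in (0, 1)
              for x in range(N - q + 1)
              for y in range(N - q - x + 1)
              for z in [N - q - x - y]]
    index = {s: i for i, s in enumerate(states)}        # (N+1)^2 states in total
    H = np.zeros((len(states), len(states)))
    for (x, y, z, q), i in index.items():
        if q == 1:                                       # excited atom can emit
            for k, t in enumerate([(x + 1, y, z, 0),
                                   (x, y + 1, z, 0),
                                   (x, y, z + 1, 0)]):
                j = index[t]
                H[i, j] = H[j, i] = g[k] * np.sqrt(t[k])  # sqrt(photon number)
    return np.linalg.eigvalsh(H)

# With equal couplings (semimetallic phase), the eigenvalues nearest zero should
# cluster into Landau-level-like groups with square-root-of-integer spacing.
evals = jc3_spectrum(30)
print(np.sort(np.abs(evals))[:8])
```
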
The authors also investigated one-dimensional Fock-state lattices with only two cavity modes. These are intrinsic Su-Schrieffer-Heeger models and host topological edge states. The model can be further extended to more than three dimensions to realize topological phases unavailable in real lattices. The proposed topological phases are ready to be realized in superconducting circuits and are promising for applications in quantum information processing.

Credit: 
Science China Press

NIRS-IVUS imaging can help identify high-risk plaques that can lead to adverse outcomes

NEW YORK - October 14, 2020 - New data from the PROSPECT II study shows that NIRS-IVUS intracoronary imaging can help identify angiographically non-obstructive lesions with high-risk characteristics for future adverse cardiac outcomes.

Findings were reported today at TCT Connect, the 32nd annual scientific symposium of the Cardiovascular Research Foundation (CRF). TCT is the world's premier educational meeting specializing in interventional cardiovascular medicine.

Acute coronary syndromes most often arise from rupture and thrombosis of lipid-rich atherosclerotic plaques that develop in the setting of systemic inflammation. In the PROSPECT study, a large plaque burden or small lumen area assessed by intravascular ultrasound (IVUS) identified angiographically mild lesions at increased risk to cause future coronary events. While lipid-rich fibroatheromas identified by IVUS can be predictive, plaque interpretation with this technique is not as precise.

Lipid content may be more accurately measured by near-infrared spectroscopy (NIRS). Recent studies have suggested that lipid-rich plaques detected by intracoronary NIRS imaging are associated with adverse outcomes. Identifying these "vulnerable" plaques before they progress may help to inform pharmacologic or other strategies to stabilize the plaque.

PROSPECT II was an investigator-sponsored multicenter, prospective, natural history study conducted at 16 centers in Sweden, Denmark and Norway to validate the use of NIRS and IVUS imaging in identifying non-obstructive lipid-rich plaques that are prone to cause future cardiac events.

After successful treatment of all flow-limiting lesions in patients with recent myocardial infarction, imaging of all three coronary arteries was performed with a combination NIRS-IVUS catheter. The lipid content of "non-culprit" lesions was assessed by NIRS, and IVUS assessment was also performed. The primary outcome was the covariate-adjusted rate of major adverse cardiac events (the composite of cardiac death, myocardial infarction, unstable angina, or progressive angina) adjudicated to have arisen from non-culprit lesions. The relationships between plaques with high lipid content, large plaque burden, and small lumen areas and patient-level and lesion-level events were tested hierarchically.

A total of 3,629 untreated non-culprit lesions were prospectively characterized in 898 patients with myocardial infarction. Adverse events within four years occurred in 13.2% of patients, with events arising from untreated non-culprit lesions (mean baseline diameter stenosis, 46.9%) occurring in 8.0% of patients. The presence of highly lipidic lesions was an independent predictor of patient-level non-culprit events (adjusted OR 2.27; 95% CI 1.25-4.13) and lesion-specific events (adjusted OR 7.83; 95% CI 4.12-14.89). Large plaque burden and small lumen areas were also independent predictors of adverse events.

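For illustration only (not the study's analysis code), a covariate-adjusted odds ratio of the kind quoted above could be estimated from a hypothetical patient-level table roughly as follows; every column name and covariate choice here is an assumption.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical patient-level table (assumed columns, not the study's dataset):
# 'nc_mace' = non-culprit-lesion MACE within 4 years (0/1),
# 'lipid_rich' = at least one NIRS lipid-rich plaque (0/1), plus covariates.
df = pd.read_csv("prospect2_patients.csv")

model = smf.logit(
    "nc_mace ~ lipid_rich + plaque_burden + min_lumen_area + age + diabetes",
    data=df).fit()

# The exponentiated coefficient approximates a covariate-adjusted odds ratio,
# analogous in spirit to the adjusted ORs quoted above.
print(np.exp(model.params["lipid_rich"]))
print(np.exp(model.conf_int().loc["lipid_rich"]))   # 95% CI
```
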
"In PROSPECT II, lipid-rich plaques, as detected by NIRS, identified angiographically mild non-flow limiting lesions responsible for future coronary events," said David Erlinge, MD, PhD. Dr. Erlinge is Professor, Department of Cardiology, Clinical Sciences, Lund University at Skane University Hospital in Sweden.

"High lipid content, along with large plaque burden and small lumen area, may be added as prognostic indices of vulnerable plaques that put patients at risk for adverse outcomes. Further studies prospectively utilizing this information are warranted to improve outcomes for high-risk patients with coronary artery disease."

Credit: 
Cardiovascular Research Foundation

New scientific study shows limits of brain injury recovery can be broken by innovative neuro-technologies

video: Media B-roll: Canadian veteran Trevor Greene continues his recovery from brain injury using innovative brain technologies. https://healthtechconnex.com/news/iron-soldier-new-studies/

Image: 
Mark Yuen

Surrey, British Columbia, Canada (October 14, 2020) - A recently published scientific study led by the Centre for Neurology Studies at HealthTech Connex and a research team from Simon Fraser University (SFU), reports the latest breakthroughs from Project Iron Soldier. Captain (retired) Trevor Greene, who was attacked with an axe to the head while serving in Afghanistan, continues to push conventional limits in brain health recovery.

The research study, published in Frontiers in Human Neuroscience, is led by neuroscientist Dr. Ryan D'Arcy and involves tracking Capt. Greene's neuroplasticity and his physical, cognitive and PTSD improvements as he rewires his brain using the latest and most advanced brain technologies.

Capt. Greene and Dr. D'Arcy recounted their remarkable progress and showcased their mission to lead scientific breakthroughs in neuroplasticity through a recent TEDx talk (Link to TEDx video: https://bit.ly/3hHSeqW).

In 2006, retired Canadian soldier Capt. Greene survived a severe brain injury when he was attacked with an axe to the head during his combat tour in Afghanistan. He spent years in various therapies and rehabilitation, and in 2009 he started working with Dr. D'Arcy. In 2015, the B.C. and Yukon Command of the Royal Canadian Legion helped outfit Trevor with a robotic exoskeleton, which helped him continue re-learning to walk. Called Project Iron Soldier, this initiative was the inspiration for the Legion Veterans Village, a $312M Centre of Excellence for PTSD, mental health and rehabilitation dedicated to veterans and first responders (currently under construction in Surrey).

Capt. Greene and the Project Iron Soldier research team have continued with intensive daily rehabilitation, but the team experienced an extended plateau in progress using conventional therapy alone.

To break through the plateau, the Centre for Neurology Studies launched an intensive 14-week trial using the Portable Neuromodulation Stimulator (PoNS™). The PoNS is a neurostimulation technology that sends a series of small electrical impulses to the brain through the tongue (known as translingual neurostimulation) to safely facilitate neuroplasticity. The team tracked brain vital sign improvements using the NeuroCatch® Platform (NeuroCatch®), a rapid, objective measure of cognitive brain function.

"When Trevor experienced a plateau in his rehabilitation, we tried intensive conventional treatment approaches, but to no avail," says Dr. Ryan D'Arcy, co-founder of HealthTech Connex, which operates the Centre for Neurology Studies, and an SFU professor. "It was only after combining in the PoNS with this rehabilitation therapy that we could break through these latest barriers and demonstrate significant improvements in his brain vital sign measurements."

Results of the study: The newly published results in Frontiers in Human Neuroscience demonstrate that PoNS neurostimulation, paired with intensive rehabilitation, may stimulate neuroplasticity to overcome an extended recovery plateau as objectively measured by NeuroCatch and other brain scanning technologies. The main findings were:

* Capt. Greene showed significant gains in clinical outcome measures for physical therapy, even after 14 years since the axe attack. Capt. Greene and his wife Debbie Greene also reported notable and lasting improvements in cognition and PTSD symptoms.

* Capt. Greene showed significant brain vital sign improvements in cognitive function, particularly in auditory sensation (as measured by the N100 response), basic attention (as measured by P300 response), and cognitive processing (as measured by N400 response).

Says Capt. Greene, "I first saw the power of neuroplasticity in the early days when Ryan showed me MRI images of my brain showing healthy brain tissue taking over for the damaged bits. Later on, I saw the full power of the PoNS device when I got demonstrably stronger, steadier and more coordinated after using it regularly for just a few weeks. It's really been a game changer for me and my family."

"Trevor's amazing progress is no doubt pushing the frontiers of medical science by overcoming perceived limits of brain recovery," says Dr. Shaun Fickling, the study's lead author who completed his PhD at Simon Fraser University. "These brain imaging results provide valuable insight into the importance of unleashing the power of neuroplasticity to inspire countless people impacted by brain and mental health conditions."

Dr. D'Arcy concludes, "These neuro-technology breakthroughs have considerable impacts to inspire many of us to push beyond conventional limits in neurological and mental health recovery."

Credit: 
Chiang Public Relations

A new land surface model to monitor global river water environment

Climate change and human activities, including heat emission, nitrogen (N) emission, and water management, are altering hydrothermal conditions and N transport in soil and river systems, thereby affecting the global nitrogen cycle and water environment. "We need to assess the impacts of these human activities on global river temperature and riverine N transport," said Prof. Zhenghui Xie of the Institute of Atmospheric Physics at the Chinese Academy of Sciences, "because quantitative assessment can not only improve our understanding of the material and energy cycles that occur in response to anthropogenic disturbances, but also contribute to protecting river ecosystems."

Xie and his collaborators from the Chinese Academy of Sciences incorporated the schemes of riverine dissolved inorganic nitrogen (DIN) transport, river water temperature, and human activity into a land surface model, and thus developed a land surface model CAS-LSM. They applied the model to explore the impacts of climate change and anthropogenic disturbances on global river temperature and DIN transport.

"We found that the water temperature of rivers in tropical zones increased at about 0.5oC per decade due to climate change from 1981 to 2010, and the heat emission of the once-through cooling system of thermal power plants further warmed the temperature. In Asia, power plants increased local river temperatures by about 60%." Said Dr. Shuang Liu, the lead author of the study published in Global and Planetary Change.

Climate change determined the interannual variability of DIN exports from land to oceans, and water management controlled the retention of DIN by affecting the water cycle and river thermal processes.

"From the perspective of anthropogenic N emission, we found the riverine DIN in the USA was affected primarily by N fertilizer use, the changes in DIN fluxes in European rivers was dominated by point source pollution, and rivers in China were seriously affected by both fertilization and point source emission." said Dr. Yan Wang, the lead author of the team's another study published in Journal of Advances in Modeling Earth Systems.

In general, the results indicate that incorporating schemes related to nitrogen transport and human activities into land surface models can be an effective way to monitor global river water quality and diagnose the performance of land surface modeling.

This series of studies has been published in Global Change Biology, Global and Planetary Change, Journal of Advances in Modeling Earth Systems, and other journals. One of the papers was highlighted by Nature Climate Change.

Credit: 
Institute of Atmospheric Physics, Chinese Academy of Sciences