
Peginterferon-lambda shows strong antiviral action to accelerate clearance of COVID-19

video: Anonymous participant receives antiviral drug

Image: 
Courtesy UHN

TORONTO (February 5, 2021) - A clinical study led by Dr. Jordan Feld, a liver specialist at the Toronto Centre for Liver Disease, University Health Network (UHN), showed that an experimental antiviral drug can significantly speed up recovery for COVID-19 outpatients - patients who do not need to be hospitalized.

This could become an important intervention to treat infected patients and help curb community spread, while COVID-19 vaccines are rolled out this year.

"This treatment has large therapeutic potential, especially at this moment as we see aggressive variants of the virus spreading around the globe which are less sensitive to both vaccines and treatment with antibodies," says Dr. Feld, who is also Co-Director of the Schwartz Reisman Liver Research Centre and the R. Phelan Chair in Translational Liver Research at UHN.

According to the study, published today in Lancet Respiratory Medicine, patients who received a single injection of peginterferon-lambda were over four times more likely to have cleared the infection within seven days, when compared to a group treated with placebo.

"People who were treated cleared the virus quickly, and the effect was most pronounced in those with the highest viral levels. We also saw a trend towards quicker improvement of respiratory symptoms in the treatment group," explains Dr. Feld - who translated his knowledge of peg-interferon lambda usage for viral hepatitis to research on COVID-19 treatment.

Participants with higher viral levels (above 1 million copies per mL) were much more likely to clear infection with treatment than placebo: 79% in the treatment arm compared to 38% in the placebo group; and virus levels decreased quickly in everyone in the treatment group.
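For readers who want to see where such effect sizes come from, the arithmetic can be reproduced from the proportions quoted above. The short Python sketch below computes the odds ratio implied by the 79% versus 38% clearance rates in the high-viral-load subgroup; the subgroup sample sizes are not given in this release, so the numbers illustrate the calculation rather than re-analyze the trial data.

```python
def odds(p):
    """Convert a proportion (0-1) to odds."""
    return p / (1.0 - p)

def odds_ratio(p_treatment, p_placebo):
    """Odds ratio of viral clearance, treatment versus placebo."""
    return odds(p_treatment) / odds(p_placebo)

# Clearance proportions reported for the high-viral-load subgroup:
# 79% with peginterferon-lambda versus 38% with placebo.
print(round(odds_ratio(0.79, 0.38), 1))  # roughly 6.1
```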

Treatment benefits and public health impact

Rapid clearance has many benefits, particularly in those with high viral levels, as those cases are associated with more severe disease and a higher risk of transmission to others. Among the 60 patients followed in the study, five went to emergency rooms with deteriorating respiratory symptoms. Of those, four were in the placebo group, while only one was in the group which received the actual drug.

Bringing down the virus level quickly prevents people from getting worse and likely reduces the risk of spreading the disease to others. This may have important additional public health impact.

"If we can decrease the virus level quickly, people are less likely to spread the infection to others and we may even be able to shorten the time required for self-isolation," says Dr. Feld.

Interferon-lambda

Interferon-lambda is a protein produced by the body in response to viral infections. It has the ability to activate a number of cellular pathways to kill invading viruses.

The coronavirus that causes COVID-19 prevents the body from producing interferons, which is one way it avoids being controlled by the body's immune system. Treatment with interferon-lambda activates those same virus-killing pathways in the cells.

Because interferon activates many virus-killing pathways, resistance due to 'new strains' of the virus, which could be an issue with some therapies, is not a concern with interferon-lambda.

Interferon-lambda is different from other interferons because it uses a receptor that is only present in some tissues in the body. It is very active in the lung, the liver and the intestine, all places where the COVID-19 virus is able to replicate, but it is not active in other places leading to a lot fewer side effects than other interferons. In the trial, those treated with interferon-lambda had similar side effects to those who received placebo.

Peginterferon-lambda (used in this study) is a long-acting version of the drug developed by Eiger BioPharmaceuticals, which can be given as a single injection under the skin with a tiny needle (like insulin).

Next steps

This was an investigator-initiated, phase 2, double-blind, randomized study conducted in Toronto with a total of 60 participants - 30 received the drug and 30 received placebo. The study ran from May to November 2020, with referrals from six outpatient assessment centres.

Additional studies are ongoing at the University of Toronto, Harvard University and Johns Hopkins University with peginterferon-lambda in hospitalized patients, and in settings where it can be used to prevent infection in those who have been exposed.

Credit: 
University Health Network

Vegan diet better for weight loss and cholesterol control than Mediterranean diet

A vegan diet is more effective for weight loss than a Mediterranean diet, according to a groundbreaking new study that compared the diets head to head. The randomized crossover trial, which was published in the Journal of the American College of Nutrition, found that a low-fat vegan diet has better outcomes for weight, body composition, insulin sensitivity, and cholesterol levels, compared with a Mediterranean diet.

The study randomly assigned participants--who were overweight and had no history of diabetes--to a vegan diet or a Mediterranean diet in a 1:1 ratio. For 16 weeks, half of the participants started with a low-fat vegan diet that eliminated animal products and focused on fruits, vegetables, whole grains, and legumes. The other half started with the Mediterranean diet, based on the PREDIMED protocol, which focuses on fruits, vegetables, legumes, fish, low-fat dairy, and extra virgin olive oil, while limiting or avoiding red meat and saturated fats. Neither group had a calorie limit, and participants did not change exercise or medication routines unless directed by their personal doctors. As part of the crossover design, participants then went back to their baseline diets for a four-week washout period before switching to the opposite diet for an additional 16 weeks.

The study found that within 16 weeks on each diet:

Participants lost an average of 6 kilograms (or about 13 pounds) on the vegan diet, compared with no mean change on the Mediterranean diet.

Participants lost 3.4 kg (about 7.5 pounds) more fat mass on the vegan diet.

Participants saw a greater reduction in visceral fat on the vegan diet, by 315 cm³.

The vegan diet decreased total and LDL cholesterol levels by 18.7 mg/dL and 15.3 mg/dL, respectively, while there were no significant cholesterol changes on the Mediterranean diet.

Blood pressure decreased on both diets, but more on the Mediterranean diet (6.0 mm Hg, compared with 3.2 mm Hg on the vegan diet).

"Previous studies have suggested that both Mediterranean and vegan diets improve body weight and cardiometabolic risk factors, but until now, their relative efficacy had not been compared in a randomized trial," says study author Hana Kahleova, MD, PhD, director of clinical research for the Physicians Committee. "We decided to test the diets head to head and found that a vegan diet is more effective for both improving health markers and boosting weight loss."

The authors note that the vegan diet likely led to weight loss because it was associated with reduced calorie intake, increased fiber intake, and decreased consumption of fat and saturated fat.

"While many people think of the Mediterranean diet as one of the best ways to lose weight, the diet actually crashed and burned when we put it to the test," says study author Neal Barnard, MD, president of the Physicians Committee. "In a randomized, controlled trial, the Mediterranean diet caused no weight loss at all. The problem seems to be the inclusion of fatty fish, dairy products, and oils. In contrast, a low-fat vegan diet caused significant and consistent weight loss."

"If your goal is to lose weight or get healthy in 2021, choosing a plant-based diet is a great way to achieve your resolution," adds Dr. Kahleova.

Credit: 
Physicians Committee for Responsible Medicine

Healthy oceans need healthy soundscapes

Rain falls lightly on the ocean's surface. Marine mammals chirp and squeal as they swim along. The pounding of surf along a distant shoreline heaves and thumps with metronomic regularity. These are the sounds that most of us associate with the marine environment. But the soundtrack of the healthy ocean no longer reflects the acoustic environment of today's ocean, plagued with human-created noise.

A global team of researchers set out to understand how human-made noise affects wildlife, from invertebrates to whales, in the oceans, and found overwhelming evidence that marine fauna, and their ecosystems, are negatively impacted by noise. This noise disrupts their behavior, physiology, reproduction and, in extreme cases, causes mortality. The researchers call for human-induced noise to be considered a prevalent stressor at the global scale and for policy to be developed to mitigate its effects.

The research, led by Professor Carlos M. Duarte, distinguished professor at King Abdullah University of Science and Technology (KAUST), and published in the journal Science, is eye-opening in revealing the global prevalence and intensity of the impacts of ocean noise. Since the Industrial Revolution, humans have made the planet, and the oceans in particular, noisier through fishing, shipping, infrastructure development and more, while also silencing the sounds from marine animals that dominated the pristine ocean.

"The landscape of sound - or soundscape - is such a powerful indicator of the health of an environment," noted Ben Halpern(link is external), a coauthor on the study and director of the National Center for Ecological Analysis and Synthesis at UC Santa Barbara. "Like we have done in our cities on land, we have replaced the sounds of nature throughout the ocean with those of humans."

The deterioration of habitats such as coral reefs, seagrass meadows and kelp beds through overfishing, coastal development, climate change and other human pressures has further silenced the characteristic sound that guides the larvae of fish and other animals drifting at sea to find and settle in their habitats. The call home is no longer audible for many ecosystems and regions.

The Anthropocene marine environment, according to the researchers, is polluted by human-made sound and should be restored along its sonic dimension as well as the more traditional chemical and climatic ones. Yet current frameworks to improve ocean health ignore the need to mitigate noise as a prerequisite for a healthy ocean.

Sound travels far, and quickly, underwater. And marine animals are sensitive to sound, which they use as a prominent sensorial signal guiding all aspects of their behavior and ecology. "This makes the ocean soundscape one of the most important, and perhaps under-appreciated, aspects of the marine environment," the study states. The authors' hope is that the evidence presented in the paper will "prompt management actions ... to reduce noise levels in the ocean, thereby allowing marine animals to re-establish their use of ocean sound."

"We all know that no one really wants to live right next to a freeway because of the constant noise," commented Halpern. "For animals in the ocean, it's like having a mega-freeway in your backyard."

The team set out to document the impact of noise on marine animals and on marine ecosystems around the world. They assessed the evidence contained across more than 10,000 papers to consolidate compelling evidence that human-made noise impacts marine life from invertebrates to whales across multiple levels, from behavior to physiology.

"This unprecedented effort, involving a major tour de force, has shown the overwhelming evidence for the prevalence of impacts from human-induced noise on marine animals, to the point that the urgency of taking action can no longer be ignored," KAUST Ph.D. student Michelle Havlik said. The research involved scientists from Saudi Arabia, Denmark, the U.S. and the U.K., Australia, New Zealand, the Netherlands, Germany, Spain, Norway and Canada.

"The deep, dark ocean is conceived as a distant, remote ecosystem, even by marine scientists," Duarte said. "However, as I was listening, years ago, to a hydrophone recording acquired off the U.S. West Coast, I was surprised to hear the clear sound of rain falling on the surface as the dominant sound in the deep-sea ocean environment. I then realized how acoustically connected the ocean surface, where most human noise is generated, is to the deep sea; just 1,000 m, less than 1 second apart!"

The takeaway of the review is that "mitigating the impacts of noise from human activities on marine life is key to achieving a healthier ocean." The KAUST-led study identifies a number of actions that may come at a cost but are relatively easy to implement to improve the ocean soundscape and, in so doing, enable the recovery of marine life and the goal of sustainable use of the ocean. For example, simple technological innovations are already reducing propeller noise from ships, and policy could accelerate their use in the shipping industry and spawn new innovations.

Deploying these mitigation actions is low-hanging fruit: unlike other forms of human pollution, such as emissions of chemical pollutants and greenhouse gases, the effects of noise pollution cease as soon as the noise is reduced, so the benefits are immediate. The study points to the quick response of marine animals to the human lockdown under COVID-19 as evidence for the potential rapid recovery from noise pollution.

Using sounds gathered from around the globe, multimedia artist and study coauthor Jana Winderen created a six-minute audio track that demonstrates both the peaceful calm and the devastatingly jarring acoustic aspects of life for marine animals. The research is truly eye-opening, or rather ear-opening, both in its groundbreaking scale and in its immediacy.

Credit: 
University of California - Santa Barbara

Civil engineers find link between hospitals and schools key to community resilience

Health care and education systems are two main pillars of a community's stability. How well and how quickly a community recovers following a natural disaster depends on the resilience of these essential social services.

New research from the Colorado State University Department of Civil and Environmental Engineering, published in Scientific Reports, has found hospitals and schools are interdependent, suggesting their collective recovery must be considered in order to restore a community in the wake of disaster.

Because hospitals and schools are so integral to a community's well-being, Associate Professor Hussam Mahmoud and Ph.D. graduate student Emad Hassan wanted to determine the correlation between them to understand their overall influence on community recovery following extreme events. They found extensive direct and indirect relationships between health care and education, indicating recovery of one system relies on recovery of the other.

"This quantification has never been done before, so it was exciting to show that they depend on each other quite a bit," Mahmoud said. "Synchronizing the recovery might actually be very important if you want to optimize the overall recovery for the community."

Checking in on Centerville

A community's health care and education systems are complex on their own. Each has its own facilities and functions, requiring infrastructure, staff and supplies. To examine the intricate interactions between these systems, Hassan and Mahmoud comprehensively modeled hospitals, schools, community-built environments and even community members.

They based their study on a virtual community called Centerville, complete with physical, social and economic sectors and 50,000 individuals. The model was so detailed that the imaginary residents had their own roles within the community and were able to interact, learn, adapt and make decisions.

"The study, with this level of resolution, enabled us to capture interdependencies between hospitals and schools at the family and individual levels, which surprisingly showed that the two systems are significantly related and have an enormous role in community recovery after disasters," said Hassan, who was awarded a grant from the American Geophysical Union to present the research at the AGU Fall Meeting 2020.

Working within the NIST Center for Risk-Based Community Resilience Planning at CSU was helpful to understand the nature of these complicated systems from different perspectives, he said.

The study revealed the compounded role of hospitals and schools in communities' social stability and allowed the researchers to apply different strategies to these systems that might accelerate whole-community recovery after disasters.

Now the modeling approach Hassan and Mahmoud developed can be used to investigate other systems subjected to various kinds of disasters.

How stable is your community?

In response to the high level of interdependence they uncovered between health care and education systems, Hassan and Mahmoud created a social services stability index, so policymakers and community leaders can measure the social services stability within their own communities based on the functionality of hospitals and schools combined.
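The release does not give the formula behind the index, but the basic idea of collapsing hospital and school functionality into one number can be sketched as follows. The weighting scheme and the multiplicative form below are assumptions made purely for illustration, not the authors' published method.

```python
def social_services_stability(hospital_functionality, school_functionality,
                              hospital_weight=0.5):
    """Illustrative combined stability score on a 0-1 scale.

    Inputs are the fractions (0-1) of pre-disaster hospital and school
    capacity currently available. The geometric weighting is an assumed
    form chosen so that the score stays low until BOTH systems recover,
    reflecting the interdependence the study describes; the published
    index may be defined differently.
    """
    school_weight = 1.0 - hospital_weight
    return (hospital_functionality ** hospital_weight
            * school_functionality ** school_weight)

# Hospitals back at 80% capacity but schools still mostly closed:
# the combined score remains low, signalling an unstable recovery.
print(round(social_services_stability(0.8, 0.1), 2))  # 0.28
```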

Mahmoud hopes this tool and deeper understanding of how these interdependent systems function will help communities recover faster, rather than wither, following disaster. He points to Butte County, California, where the population has dropped by 11,000 in the aftermath of the Camp Fire, which badly damaged the only local hospital.

"Without schools and hospitals, society cannot function properly," Mahmoud said.

Credit: 
Colorado State University

US counties with more social capital have fewer COVID-19 infections and deaths

image: Correlation between COVID-19 Cases & Social Capital

Image: 
Christian Felix (CC-BY 4.0, https://creativecommons.org/licenses/by/4.0/)

US counties with more social capital have fewer COVID-19 infections and deaths - perhaps because these communities have greater concern for the health of others.

Credit: 
PLOS

Women's voices in the media still outnumbered by those of men - study

New research from Simon Fraser University shows that women's voices continue to be underrepresented in the media, despite having prominent female leaders across Canada and internationally. Researchers in SFU's Discourse Processing Lab found that men outnumber women quoted in Canadian news media about three to one. The findings from the team's Gender Gap Tracker study were published this week in the journal PLOS ONE.

The research team collected data from seven major Canadian media outlets from October 2018 to September 2020. Over the two-year period, 29 per cent of people quoted in media stories were women versus 71 per cent men. B.C. Provincial Health Officer Dr. Bonnie Henry, quoted more than 2,200 times, notably topped the list for women most quoted in the news--many others were also public health officials during the COVID-19 pandemic--but still had fewer quotes than the top three male voices, all politicians.

"What this study shows is that we are very far from parity in mainstream news," says SFU linguistics professor and lab director Maite Taboada. "This has profound implications, as we tend to look for role models in the media."

Politicians, both male and female, were most often quoted in the media, followed by sports figures for men, and healthcare professionals for women.

"We found that, although men and women politicians appear regularly, men are quoted far more often. This is the case even despite Canada's gender-balanced cabinet," says Taboada.

Former U.S. President Donald Trump was found to be quoted the most often - 15,746 times to be exact, followed by Canada's Prime Minister Justin Trudeau and Ontario's Premier Doug Ford. Other top women quoted were Ontario Health Minister Christine Elliott and Canada's Minister of Finance Chrystia Freeland.

The team used its Gender Gap Tracker software to analyze daily coverage from CBC, CTV, Global, HuffPost Canada, National Post, The Star and the Globe and Mail. Researchers used the power of large-scale text processing and big data storage to collect news stories daily and perform Natural Language Processing (NLP) to identify who is mentioned and who is quoted by gender.

"We are very proud of this team effort, as it highlights the potential of Natural Language Processing to contribute positively to society, in this case to show the gender gap in media," Taboada adds. "Natural Language Processing is a field at the intersection of computer science and linguistics that aims to analyze and extract information from large amounts of language data."

The researchers found that articles written by women quote more women (34 per cent for articles authored by women compared to 25 per cent for articles authored by men) and suggest part of the solution to addressing the gender gap in media includes hiring more women as reporters.

The study was conducted in partnership with Informed Opinions, which encourages media to diversify their sources and better reflect both genders. While the Gender Gap Tracker can only capture one kind of diversity, since it relies on names to assign gender to sources, the authors suggest considering other forms of diversity, given many other groups are underrepresented in the news.

The Gender Gap Tracker is available online (gendergaptracker.informedopinions.org) and updates every 24 hours.

Credit: 
Simon Fraser University

New microscopy concept enters into force

image: The separation of the islands is around half a millimetre.

Image: 
David Hälg and Shobhna Misra, ETH Zurich

The development of scanning probe microscopes in the early 1980s brought a breakthrough in imaging, throwing open a window into the world at the nanoscale. The key idea is to scan an extremely sharp tip over a substrate and to record at each location the strength of the interaction between tip and surface. In scanning force microscopy, this interaction is -- as the name implies -- the force between tip and structures on the surface. This force is typically determined by measuring how the dynamics of a vibrating tip changes as it scans over objects deposited on a substrate. A common analogy is tapping a finger across a table and sensing objects placed on the surface. A team led by Alexander Eichler, Senior Scientist in the group of Prof. Christian Degen at the Department of Physics of ETH Zurich, turned this paradigm upside down. Writing in Physical Review Applied, they report the first scanning force microscope in which the tip is at rest while the substrate with the samples on it vibrates.

Tail wagging the dog

Doing force microscopy by 'vibrating the table under the finger' may seem to make the entire procedure a whole lot more complicated. In a sense it does. But mastering the complexity of this inverted approach comes with a great payoff. The new method promises to push the sensitivity of force microscopy to its fundamental limit, beyond what can be expected from further improvements of the conventional 'finger tapping' approach.

The key to the superior sensitivity is the choice of substrate. The 'table' in the experiments of Eichler, Degen and their co-workers is a perforated membrane made of silicon nitride, a mere 41 nm in thickness. Collaborators of the ETH physicists, the group of Albert Schliesser at the University of Copenhagen in Denmark, have established these low-mass membranes as outstanding nanomechanical resonators with extreme 'quality factors'. That is, once the membrane is set in motion, it vibrates millions of times, or more, before coming to rest. Given these exquisite mechanical properties, it becomes advantageous to vibrate the 'table' rather than the 'finger'. At least in principle.

New concept put to practice

Translating this theoretical promise into experimental capability is the objective of an ongoing project between the groups of Degen and Schliesser, with theory support from Dr. Ramasubramanian Chitra and Prof. Oded Zilberberg of the Institute for Theoretical Physics at ETH Zurich. As a milestone on that journey, the experimental teams have now demonstrated that the concept of membrane-based scanning force microscopy works in a real device.

In particular, they showed that neither loading the membrane with samples nor bringing the tip to within a distance of a few nanometres compromises the exceptional mechanical properties of the membrane. However, once the tip approaches the sample even closer, the frequency or amplitude of the membrane changes. To be able to measure these changes, the membrane features not only an island where tip and sample interact, but also a second one -- mechanically coupled to the first -- from where a laser beam can be partially reflected, to provide a sensitive optical interferometer.
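Why a tiny tip-sample force translates into a measurable frequency change can be seen from the standard small-gradient relation used in frequency-modulation force microscopy. The sketch below applies that textbook relation with made-up numbers; it is an illustration of the principle, not the membrane mechanics worked out in the paper.

```python
def resonance_shift_hz(f0_hz, k_eff_n_per_m, force_gradient_n_per_m):
    """Small-gradient approximation from frequency-modulation force
    microscopy: delta_f is about -(f0 / 2k) * dF/dz.

    Here f0 is the resonance frequency and k_eff the effective stiffness
    of the vibrating element (in this experiment, the membrane mode).
    Values used below are illustrative, not taken from the publication.
    """
    return -0.5 * f0_hz * force_gradient_n_per_m / k_eff_n_per_m

# A 1 MHz mode with an effective stiffness of 1 N/m shifts by about
# 0.5 Hz for a tip-sample force gradient of 1 micronewton per metre.
print(resonance_shift_hz(1.0e6, 1.0, 1.0e-6))  # -0.5
```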

Quantum is the limit

Putting this setup to work, the team successfully resolved gold nanoparticles and tobacco mosaic viruses. These images serve as a proof of principle for the novel microscopy concept, but they do not yet push the capabilities into new territory. That destination, however, is within reach. The researchers plan to combine their novel approach with a technique known as magnetic resonance force microscopy (MRFM), to enable magnetic resonance imaging (MRI) with a resolution of single atoms, thus providing unique insight, for example, into viruses.

Atomic-scale MRI would be another breakthrough in imaging, combining ultimate spatial resolution with highly specific physical and chemical information about the atoms imaged. For the realization of that vision, a sensitivity close to the fundamental limit given by quantum mechanics is needed. The team is confident that they can realise such a 'quantum-limited' force sensor, through further advances in both membrane engineering and measurement methodology. With the demonstration that membrane-based scanning force microscopy is possible, the ambitious goal has now come one big step closer.

Credit: 
ETH Zurich Department of Physics

Breakthrough in quantum photonics promises a new era in optical circuits

The modern world is powered by electrical circuitry on a "chip"--the semiconductor chip underpinning computers, cell phones, the internet, and other applications. In the year 2025, humans are expected to be creating 175 zettabytes (175 trillion gigabytes) of new data. How can we ensure the security of sensitive data at such a high volume? And how can we address grand-challenge-like problems, from privacy and security to climate change, leveraging this data, especially given the limited capability of current computers?

A promising alternative is emerging quantum communication and computation technology. For this to happen, however, it will require the widespread development of powerful new quantum optical circuits: circuits that are capable of securely processing the massive amounts of information we generate every day. Researchers in USC's Mork Family Department of Chemical Engineering and Materials Science have made a breakthrough to help enable this technology.

While a traditional electrical circuit is a pathway along which electrons from an electric charge flow, a quantum optical circuit uses light sources that generate individual light particles, or photons, on demand, one at a time, acting as information-carrying bits (quantum bits or qubits). These light sources are nano-sized semiconductor "quantum dots": tiny manufactured collections of tens of thousands to a million atoms, packed within a volume whose linear size is less than a thousandth of the thickness of a typical human hair, buried in a matrix of another suitable semiconductor.

They have so far been proven to be the most versatile on-demand single photon generators. The optical circuit requires these single photon sources to be arranged on a semiconductor chip in a regular pattern. Photons with nearly identical wavelength from the sources must then be released in a guided direction. This allows them to be manipulated to form interactions with other photons and particles to transmit and process information.

Until now, there has been a significant barrier to the development of such circuits. For example, in current manufacturing techniques quantum dots have different sizes and shapes and assemble on the chip in random locations. The fact that the dots have different sizes and shapes means that the photons they release do not have uniform wavelengths. This, together with the lack of positional order, makes them unsuitable for use in the development of optical circuits.

In recently published work, researchers at USC have shown that single photons can indeed be emitted in a uniform way from quantum dots arranged in a precise pattern. It should be noted that the method of aligning quantum dots was first developed at USC by the lead PI, Professor Anupam Madhukar, and his team nearly thirty years ago, well before the current explosive research activity in quantum information and interest in on-chip single-photon sources. In this latest work, the USC team has used such methods to create single quantum dots with their remarkable single-photon emission characteristics. It is expected that the ability to precisely align uniformly-emitting quantum dots will enable the production of optical circuits, potentially leading to novel advancements in quantum computing and communications technologies.

The work, published in APL Photonics, was led by Jiefei Zhang, currently a research assistant professor in the Mork Family Department of Chemical Engineering and Materials Science, with corresponding author Anupam Madhukar, Kenneth T. Norris Professor in Engineering and Professor of Chemical Engineering, Electrical Engineering, Materials Science, and Physics.

"The breakthrough paves the way to the next steps required to move from lab demonstration of single photon physics to chip-scale fabrication of quantum photonic circuits," Zhang said. "This has potential applications in quantum (secure) communication, imaging, sensing and quantum simulations and computation."

Madhukar said that it is essential that quantum dots be ordered in a precise way so that photons released from any two or more dots can be manipulated to connect with each other on the chip. This will form the basic building unit for quantum optical circuits.

"If the source where the photons come from is randomly located, this can't be made to happen." Madhukar said.

"The current technology that is allowing us to communicate online, for instance using a technological platform such as Zoom, is based on the silicon integrated electronic chip. If the transistors on that chip are not placed in exact designed locations, there would be no integrated electrical circuit," Madhukar said. "It is the same requirement for photon sources such as quantum dots to create quantum optical circuits."

The research is supported by the Air Force Office of Scientific Research (AFOSR) and the U.S. Army Research Office (ARO).

"This advance is an important example of how solving fundamental materials science challenges, like how to create quantum dots with precise position and composition, can have big downstream implications for technologies like quantum computing," said Evan Runnerstrom, program manager, Army Research Office, an element of the U.S. Army Combat Capabilities Development Command's Army Research Laboratory. "This shows how ARO's targeted investments in basic research support the Army's enduring modernization efforts in areas like networking."

To create the precise layout of quantum dots for the circuits, the team used a method called SESRE (substrate-encoded size-reducing epitaxy) developed in the Madhukar group in the early 1990s. In the current work, the team fabricated regular arrays of nanometer-sized mesas with a defined edge orientation, shape (sidewalls) and depth on a flat semiconductor substrate, composed of gallium arsenide (GaAs). Quantum dots are then created on top of the mesas by adding appropriate atoms using the following technique.

First, incoming gallium (Ga) atoms, attracted by surface energy forces, gather on top of the nanoscale mesas, where they deposit GaAs. The incoming flux is then switched to indium (In) atoms to deposit indium arsenide (InAs), followed again by Ga atoms to form GaAs, creating the desired individual quantum dots that end up releasing single photons. To be useful for creating optical circuits, the space between the pyramid-shaped nano-mesas then needs to be filled with material that flattens the surface; in the finished chip, the quantum dots lie buried beneath this GaAs overlayer.

"This work also sets a new world-record of ordered and scalable quantum dots in terms of the simultaneous purity of single-photon emission greater than 99.5%, and in terms of the uniformity of the wavelength of the emitted photons, which can be as narrow as 1.8nm, which is a factor of 20 to 40 better than typical quantum dots," Zhang said.

Zhang said that with this uniformity, it becomes feasible to apply established methods such as local heating or electric fields to fine-tune the photon wavelengths of the quantum dots to exactly match each other, which is necessary for creating the required interconnections between different quantum dots for circuits.

This means that for the first time researchers can create scalable quantum photonic chips using well-established semiconductor processing techniques. In addition, the team's efforts are now focused on establishing how identical the emitted photons are from the same and/or from different quantum dots. The degree of indistinguishability is central to the quantum effects of interference and entanglement that underpin quantum information processing - communication, sensing, imaging, and computing.

Zhang concluded: "We now have an approach and a material platform to provide scalable and ordered sources generating potentially indistinguishable single photons for quantum information applications. The approach is general and can be used for other suitable material combinations to create quantum dots emitting over a wide range of wavelengths preferred for different applications, for example fiber-based optical communication or the mid-infrared regime, suited for environmental monitoring and medical diagnostics."

Gernot S. Pomrenke, AFOSR program officer for Optoelectronics and Photonics, said that reliable arrays of on-demand single-photon sources on-chip were a major step forward.

"This impressive growth and material science work stretches over three decades of dedicated effort before research activities in quantum information were in the mainstream," Pomrenke said. "Initial AFOSR funding and resources from other DoD agencies have been critical in realizing the challenging work and vision by Madhukar, his students, and collaborators. There is a great likelihood that the work will revolutionize the capabilities of data centers, medical diagnostics, defense and related technologies."

Credit: 
University of Southern California

Arctic stew: Understanding how high-latitude lakes respond to and affect climate change

image: Nunavut, a vast region in northern Canada, plays a crucial role in understanding global climate change. New research from Soren Brothers details how lakes in the region could have a big impact on carbon dioxide levels in the atmosphere.

Image: 
Paul Sibley

To arrive at Nunavut, turn left at the Dakotas and head north. You can't miss it--the vast tundra territory covers almost a million square miles of northern Canada. Relatively few people call this lake-scattered landscape home, but the region plays a crucial role in understanding global climate change. New research from Soren Brothers, assistant professor in the Department of Watershed Sciences and the Ecology Center at Utah State University, details how lakes in Nunavut could have a big impact on carbon dioxide levels in the atmosphere, and it's not all bad news--at least for now. Brothers examined 23 years of data from lakes near Rankin Inlet. He noted a peculiarity--as the lakes warmed, their carbon dioxide concentrations fell. Most lakes are natural sources of carbon dioxide, but these lakes were now mostly near equilibrium with the atmosphere.

This was odd. The expected pattern is that warmer temperatures should trigger larger releases of greenhouse gases from lakes. In places like Alaska, centuries' worth of accumulated plant material in the permafrost releases a hoard of carbon as it thaws and is consumed by microbes. Experiments have also shown that as waters warm, carbon dioxide production by microbes increases more quickly than carbon dioxide uptake by plants, throwing the system out of balance. Together, these processes should increase atmospheric greenhouse gas emissions from waterways, in theory anyway. So why not in Nunavut? There is no question that the first step in this Rube Goldberg machine is engaged ... the climate is warming. Why, then, are the lakes near Rankin Inlet not belching out carbon?

Pulling on good, thick parkas, Brothers and his team visited the lakes and came up with a few ideas as to why this is happening. First, they note that much of Nunavut is on the Canadian Shield--an ancient granitic bedrock where thin soils are unlikely to contain--and thus release--the massive stores of organic matter entering waterways elsewhere in the Arctic. Second, longer ice-free seasons might be changing the water chemistry and biology in ways that actually lower carbon dioxide concentrations, including longer growing seasons for plants (which take up carbon dioxide), and potentially better growing conditions for algae on the bottom of these shallow, clear lakes.

Does this mean that nature has come to the climate rescue? Likely not--other lakes around the world may still increase carbon dioxide emissions with warming, and the lakes in Nunavut might eventually catch up with them too. More likely, Brothers suggests that the link between ice cover duration and carbon dioxide concentrations might be buying us some time, before stronger positive feedbacks are unleashed between the planet's warming and its ecosystems. It may be a complicated process, but understanding this complexity helps scientists predict variations in how lakes are responding to--and influencing--climate change. It's a view under the hood, making planetary feedbacks and tipping points a little more predictable. While the long-term trajectory of greenhouse gas emissions from lakes is not settled, these results are an important piece of the puzzle in climate change science.

Credit: 
S.J. & Jessie E. Quinney College of Natural Resources, Utah State University

RUDN University mathematicians reduced neural network size six times without post-training

image: A team of mathematicians from RUDN University found a way to reduce the size of a trained neural network six times without spending additional resources on re-training it. The approach is based on finding a correlation between the weights of neural connections in the initial system and its simplified version.

Image: 
RUDN University

A team of mathematicians from RUDN University found a way to reduce the size of a trained neural network six times without spending additional resources on re-training it. The approach is based on finding a correlation between the weights of neural connections in the initial system and its simplified version. The results of the work were published in the Optical Memory and Neural Networks journal.

The structures of artificial neural networks and neurons in a living organism are based on the same principles. Nodes in a network are interconnected; some of them receive a signal, and some transmit it by activating or suppressing the next element in the chain. The processing of any signal (for example, an image or a sound) requires a lot of network elements and connections coming from them. However, computer models have limited capacity and storage volume. To work with large data volumes, specialists have to invent different ways to lower capacity requirements, including the so-called quantization. It helps reduce the consumption of resources but requires system re-training. A team of mathematicians from RUDN University found out that the latter step could be avoided.

"Several years ago we carried out efficient and cost-effective quantization of weights in a Hopfield network. It is an associative memory network with symmetrical connections between elements that are formed following Hebb's rule. In the course of its operation, the activity of the network is reduced to a certain equilibrium state, and when it is reached, a task is considered solved. The insights obtained in that study were later applied to feedforward deep learning networks that are very popular in image recognition today. As a rule, these networks require re-training after quantization, but we found a way to avoid it," said Iakov Karandashev, PhD, an Assistant Professor at the Nikolskii Mathematical Institute, RUDN University.

The main idea behind the simplification of artificial neural networks is the so-called quantization of weights, i.e. reducing the number of bits per weight. Quantization effectively averages the signal: for example, if it is applied to an image, all pixels representing different shades of the same color will become identical. Mathematically, it means that neural connections that are similar in certain parameters end up sharing the same weight (or importance), expressed as a number.
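The basic operation, replacing full-precision weights with a small set of levels and then checking how closely the quantized weights track the originals, can be sketched in a few lines. The uniform quantizer below is a generic illustration, not the specific correlation-based scheme derived by the RUDN team.

```python
import numpy as np

def quantize_weights(w, n_bits=4):
    """Uniform post-training quantization of a weight array.

    Maps each weight to the nearest of 2**n_bits evenly spaced levels
    spanning the observed weight range. Generic illustration only; the
    RUDN approach derives its own mapping between original and quantized
    weights so that re-training can be skipped.
    """
    levels = 2 ** n_bits
    w_min, w_max = w.min(), w.max()
    step = (w_max - w_min) / (levels - 1)
    return w_min + np.round((w - w_min) / step) * step

rng = np.random.default_rng(0)
weights = rng.normal(size=10_000)        # stand-in for one layer's weights
quantized = quantize_weights(weights, n_bits=4)

# A strong correlation between original and quantized weights is the
# property the authors exploit to avoid re-training the network.
print(round(float(np.corrcoef(weights, quantized)[0, 1]), 4))
```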

A team of mathematicians from RUDN University carried out calculations and created formulae that effectively establish correlations between the weights in a neural network before and after quantization. Based on them, the scientists developed algorithms with which a trained neural network could classify images. In their experiment, the mathematicians used a test set of 50,000 photos divided into 1,000 classes. After training, the network was quantized using the new method and not re-trained. Then, the results were compared to other quantization algorithms.

"After quantization, the classification accuracy decreased by only 1%, but the required storage volume was reduced six times. Experiments show that our network doesn't need re-training due to a strong correlation between initial and quantized weights. This approach could help save resources when completing time-sensitive tasks or working on mobile devices," added Iakov Karandashev from RUDN University.

Credit: 
RUDN University

Critical flaw found in lab models of the human blood-brain barrier

NEW YORK, NY (Feb. 5, 2021)--Cells used to study the human blood-brain barrier in the lab aren't what they seem, throwing nearly a decade's worth of research into question, a new study from scientists at Columbia University Vagelos College of Physicians and Surgeons and Weill Cornell Medicine suggests.

The team also discovered a possible way to correct the error, raising hopes of creating a more accurate model of the human blood-brain barrier for studying certain neurological diseases and developing drugs that can cross it.

The study was published online Feb. 4 in the Proceedings of the National Academy of Sciences (PNAS).

"The blood-brain barrier is difficult to study in humans and there are many differences between the human and animal blood-brain barrier. So it's very helpful to have a model of the human blood-brain barrier in a dish," says co-study leader Dritan Agalliu, PhD, associate professor of pathology and cell biology (in neurology) at Columbia University Vagelos College of Physicians and Surgeons.

The in vitro human blood-brain barrier model, developed in 2012, is made by coaxing differentiated adult cells, such as skin cells, into stem cells that behave like embryonic stem cells. These induced pluripotent stem cells can then be transformed into mature cells of almost any type--including a type of endothelial cell that lines the blood vessels of the brain and spinal cord and forms a unique barrier that normally restricts the entry of potentially dangerous substances, antibodies, and immune cells from the bloodstream into the brain.

Agalliu previously noticed that these induced human "brain microvascular endothelial cells," produced using the approach published in 2012, did not behave like normal endothelial cells in the human brain. "This raised my suspicion that the protocol for making the barrier's endothelial cells may have generated cells of the wrong identity," says Agalliu.

"At the same time the Weill Cornell Medicine team had similar suspicions, so we teamed up to reproduce the protocol and perform bulk and single-cell RNA sequencing of these cells."

Their analysis revealed that the supposed human brain endothelial cells were missing several key proteins found in natural endothelial cells and had more in common with a completely different type of cell (epithelial) that is normally not found in the brain.

The team also identified three genes that, when activated within induced pluripotent cells, lead to the creation of cells that behave more like bona fide endothelial cells. More work is still needed, Agalliu says, to create endothelial cells that produce a reliable model of the human blood-brain barrier. His team is working to address this problem.

"The misidentification of human brain endothelial cells may be an issue for other types of cells made from induced pluripotent cells such as astrocytes or pericytes that form the neurovascular unit," Agalliu says. The protocols to generate these cells were created before the advent of single-cell technologies that are better at uncovering a cell's identity. "Cell misidentification remains a major problem that needs to be addressed in the scientific community in order to develop cells that mirror those found in the human brain. This will allow us to use these cells to study the role of genetic risk factors for neurological disorders and develop drug therapies that target the correct cells that contribute to the blood-brain barrier."

Credit: 
Columbia University Irving Medical Center

Physical discipline and cognitive deprivation associated with specific types of developmental delay

Washington, DC, February 5, 2021 - A study in the Journal of the American Academy of Child and Adolescent Psychiatry (JAACAP), published by Elsevier, reports that in a diverse, cross-national sample of youth, physical discipline and cognitive deprivation had distinct associations with specific domains of developmental delay. The findings are based on the Multiple Indicator Cluster Surveys, which is an ongoing, international household survey initiative coordinated and assisted by the United Nations agency, UNICEF.

"Physical discipline and cognitive deprivation are well-established risks to child development. However, it is rare that these experiences are examined in relation to each other," said lead author Carmel Salhi, ScD, an Assistant Professor at the Department of Health Sciences, Northeastern University, Boston, MA, USA. "Our study allowed us to explore how these experiences co-occur in childhood internationally and whether they relate to different aspects of child development.

"This is important as recent research in neuroscience suggests that experiences, which provoke fear, have different effects on a child's neurodevelopment than cognitive deprivation."

A sample of 29,792 children between the ages of 3 and 6 years and their caregivers, across 17 countries, completed measures of physical discipline, cognitive deprivation and risk of developmental delay. Factors used to determine physical discipline included spanking or slapping on the arm, hand, or leg. Cognitive deprivation included not counting or reading with a child over the past 3 days and the absence of books in the home.

"To see if this framework has the potential to inform policy and public health interventions, we conducted the first large-scale epidemiological study using this conceptual framework," Dr. Salhi added.

Physical discipline was associated with 50 percent higher odds for risk of socioemotional delay, at least 2.5 times higher than the risk of any of the experiences of cognitive deprivation. Not counting or not reading with the child were associated with 47 percent and 62 percent higher odds, respectively, for risk of cognitive delay. Physical discipline did not confer any risk of cognitive delay.

These findings suggest that the distinction between fear and deprivation in child development, established in clinical neuroscience, is important to public health research and interventions. Furthermore, a large body of evidence links both physical discipline and experiences of cognitive deprivation with poverty and social marginalization. Taken together, this suggests that redistributive policies that alleviate socioeconomic strain can have demonstrably positive effects across a range of child developmental outcomes within a population.

Credit: 
Elsevier

SSRgenotyper: A new tool to digitally genotype simple sequence repeats

IMAGE: A workflow depicting the process of SSR discovery, DNA amplification, and read mapping. Once SSRs have been identified and mapped, that information can be exported as a SAM to SSRgenotyper...

Image: 
Lewis, D. H., D. E. Jarvis, and P. J. Maughan. 2020. SSRgenotyper: A simple sequence repeat genotyping application for whole-genome resequencing and reduced representational sequencing projects. Applications in Plant Sciences...

SSRgenotyper is a newly developed, free bioinformatic tool that allows researchers to digitally genotype sequenced populations using simple sequence repeats (SSRs), a task that previously required time-consuming lab-based methods.

Reporting in a recent issue of Applications in Plant Sciences, the tool's developers designed the program to seamlessly integrate with other applications currently used for the detection and analysis of SSRs.

Simple sequence repeats are short chains of repeating nucleotides that are prone to mutation. The variability of these DNA sequences makes them ideal for genetic analyses that distinguish between individuals, and they are often the marker of choice for paternity and forensic testing.

In research fields, SSRs have the added benefit of being selectively neutral, meaning they don't code for any physical traits and therefore aren't subject to most types of natural selection, making them an excellent tool to study populations without the obscuring effects of convergent evolution.

Recent advances in next-generation sequencing have helped streamline the process of SSR identification, especially in model organisms or groups with an available reference genome assembly. As technology continues to improve and sequencing costs decrease, sequencing large portions of a genome for the purposes of SSR analysis, even in non-model organisms, is becoming more feasible and widespread in the scientific literature.

However, the process of genotyping -- determining which individuals have which alleles -- still relies predominantly on visualizing amplified DNA on an electrophoresis gel, an involved and potentially hazardous process, as DNA fragments are often stained with carcinogenic chemicals.

Gel-based genotyping also has the added issue that alleles are measured based on the size of the resulting bands, which is only an estimate of the number of nucleotides in the amplified DNA fragment. Because there may be slight variations in the flanking regions that surround the SSRs of interest, and because there is no standardized method of determining an allele's size from band measurements, genotyping results from one experiment cannot be easily transferred or compared to those of another experiment.

The development of SSRgenotyper renders such lab-based efforts obsolete. By working in tandem with other bioinformatic programs that detect SSRs in reference DNA and programs that align sequence data from target populations with the corresponding SSR reference file, SSRgenotyper is able to quickly genotype all SSRs for each individually sequenced sample.

"SSRgenotyper goes the next step by genotyping SSRs within sequenced populations -- strictly from sequencing data (no PCR or electrophoresis)," said Jeff Maughan, a professor of Plant and Wildlife Sciences at Brigham Young University and senior author of the study. "The output from SSRgenotyper are files ready for population genetic analysis or linkage map formation."

Not only does the program reduce the amount of time and work required to genotype populations, it also solves the transferability problem inherent in electrophoresis estimates by directly counting the total number of base pairs in a given sequence repeat.

"Since the SSRs are genotyped based on the number of repeated motifs at the SSR locus and not on the PCR product size, the allele calls are standardized and transferable from project to project or from lab to lab," said Maughan.

The program, which is coded in Python 3, requires only three positional arguments to run, provides the option to specify several conditional arguments (such as percentage thresholds for heterozygosity, the size of the flanking regions, and for the removal of spurious alleles), and can be performed on a regular desktop computer.

Once complete, SSRgenotyper generates multiple file types, including basic summary and statistical files, as well as a .pop, a .map, and an alignment file formatted for use in additional programs to facilitate downstream analyses.

As a proof of concept, Maughan and his colleagues tested SSRgenotyper's accuracy at correctly determining an individual's genotype by running the program on publicly available sequences of quinoa (Chenopodium quinoa) and the oat species Avena atlantica. The resulting accuracy rate was 97% or greater, which increased with the inclusion of additional sequence reads.

With the continued development and efficiency of next-generation sequencing methods, tools like SSRgenotyper seem poised to reduce the amount of lab work required in genetic studies.

"Sequencing is already the method of choice in most genetic research projects," said Maughan. "As costs continue to drop and new bioinformatic tools are developed, it is highly likely that future population genetics studies will be based solely on next-generation sequencing -- completely avoiding the cumbersome tasks of PCR and electrophoresis."

Credit: 
Botanical Society of America

Imaging the first moments of a body plan emerging in the embryo

video: Left: Propagation of calcium ion waves in fertilized Ciona egg. (Selective Plane Illumination Microscopy [SPIM] movie showing fluorescence of calcium ion indicator, Rhod-dextran). Right: Autofluorescence image of the same egg to show the cytoplasmic movement. Each frame was taken every 2s. Replay speed, 10 frames/s.

Image: 
Hiro Ishii and Tomomi Tani

WOODS HOLE, Mass. -- Egg cells start out as round blobs. After fertilization, they begin transforming into people, dogs, fish, or other animals by orienting head to tail, back to belly, and left to right. Exactly what sets these body orientation directions has been guessed at but not seen. Now researchers at the Marine Biological Laboratory (MBL) have imaged the very beginning of this cellular rearrangement, and their findings help answer a fundamental question.

“The most interesting and mysterious part of developmental biology is the origin of the body axis in animals,” said researcher Tomomi Tani. An MBL scientist in the Eugene Bell Center at the time of the research, Tani is now with Japan’s National Institute of Advanced Industrial Science and Technology.

The work by Tani and Hirokazu Ishii, reported this week in Molecular Biology of the Cell, shows that both parents contribute to the body orientation of their offspring. For the animal species studied in the research (sea squirts), input from the mother sets the back-belly axis while that of the father does so for the head-tail axis.

“Both the maternal and the paternal cues are required to establish the body plan of the developing animal embryo,” stated Tani.

This research addresses fundamental questions in developmental biology and may also provide clues as to why things sometimes go wrong. Such knowledge could benefit fields as diverse as medicine and agriculture.
 

The prevalent theory of how the body axis is set has been that actin filaments inside the egg, which are involved in cell motion and contraction, power the rearrangement of cytoplasmic material in the egg after it has been fertilized. But seeing this happen has been a challenge because the onset of the process takes place rapidly and over very small distances within living cells.

To overcome these hurdles, Tani and Ishii used a fluorescence polarization microscope, a technology developed a few years ago at MBL by Tani, Shalin Mehta (now at Chan Zuckerberg Biohub) and MBL Senior Scientist Rudolf Oldenbourg, along with scientists at other institutions. This technology makes it possible to image events taking place at distances measured in nanometers, or thousands of times smaller than the diameter of a human hair. The methodology is also a familiar one to Tani and others.

“Using polarized light for looking at dynamics of molecular order is a tradition of MBL imaging,” Tani noted, one that began with pioneering live-cell studies by Shinya Inoué in the 1950s.

When polarized, light waves oscillate either partially or completely in only one direction: up/down, left/right, clockwise/counterclockwise, and so on. That’s why a filter will let polarized light through in one orientation, but block it when rotated.

Tani and Ishii attached fluorescent probe molecules, which glow when illuminated with the right light, to the actin in eggs of sea squirts (Ciona), a marine species often studied by researchers as a model for animal development. The probe-actin link was very rigid, Tani said, allowing the microscope to detect the orientation of the actin molecules by working with polarized light.
 

So, if the actin all pointed in one orientation, the researchers spotted it. If the actin was jumbled, they could see that too. When Tani and Ishii looked at unfertilized eggs, they saw a mostly random arrangement of actin. After fertilization, a calcium ion wave passed through the egg and the actin filaments lined up and contracted along the orientation that was at a right, or 90°, angle to the future back/belly axis. The cytoplasm then moved. This body plan formation process began just after fertilization.
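In principle, the orientation read-out rests on the fact that a fluorophore is excited most efficiently when the light's polarization lines up with its dipole. The numerical sketch below recovers an assumed filament orientation from intensities measured at a few polarizer angles; it is a generic illustration of polarized-fluorescence analysis, not the processing pipeline used in the MBL instrument.

```python
import numpy as np

def dipole_orientation(polarizer_angles_rad, intensities):
    """Estimate the in-plane orientation of fluorophore dipoles from
    intensities recorded at several excitation polarization angles.

    Assumes the usual cos^2 dependence of excitation on the angle between
    polarization and dipole, so I(theta) = A + B*cos(2*(theta - phi));
    extracting the 2-theta Fourier component yields phi.
    """
    theta = np.asarray(polarizer_angles_rad)
    intensity = np.asarray(intensities)
    c = np.sum(intensity * np.cos(2 * theta))
    s = np.sum(intensity * np.sin(2 * theta))
    return 0.5 * np.arctan2(s, c)

# Simulated filaments oriented at 30 degrees, probed at four angles.
angles = np.deg2rad([0, 45, 90, 135])
simulated = 1.0 + 0.8 * np.cos(2 * (angles - np.deg2rad(30)))
print(round(float(np.rad2deg(dipole_orientation(angles, simulated))), 1))  # 30.0
```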

The fertilized egg orientation research is being followed up with other investigations. One of the long-term goals of such imaging is to detect and understand the forces in the developing embryo that shape its morphology, its form and structure.

“We hope that the molecular orders in the cytoskeleton tell us something like ‘field lines’ of mechanical forces that organize the morphology of multicellular organisms,” Tani said in discussing future efforts.

Credit: 
Marine Biological Laboratory

UTA engineers develop programming technology to transform 2D materials into 3D shapes

image: Kyungsuk Yum

Image: 
UT Arlington

University of Texas at Arlington researchers have developed a technique that programs 2D materials to transform into complex 3D shapes.

The goal of the work is to create synthetic materials that can mimic how living organisms expand and contract soft tissues and thus achieve complex 3D movements and functions. Programming thin sheets, or 2D materials, to morph into 3D shapes can enable new technologies for soft robotics, deployable systems, and biomimetic manufacturing, which produces synthetic products that mimic biological processes.

Kyungsuk Yum, an associate professor in the Materials Science and Engineering Department, and his team have developed the 2D material programming technique for 3D shaping. It allows the team to print 2D materials encoded with spatially controlled in-plane growth or contraction that can transform into programmed 3D structures.

Their research, supported by a National Science Foundation Early Career Development Award that Yum received in 2019, was published in January in Nature Communications.

"There are a variety of 3D-shaped 2D materials in biological systems, and they play diverse functions," Yum said. "Biological organisms often achieve complex 3D morphologies and motions of soft slender tissues by spatially controlling their expansion and contraction. Such biological processes have inspired us to develop a method that programs 2D materials with spatially controlled in-plane growth to produce 3D shapes and motions."

With this inspiration, the researchers developed an approach that can uniquely create 3D structures with doubly curved morphologies and motions, commonly seen in living organisms but difficult to replicate with man-made materials.

They were able to form 3D structures shaped like automobiles, stingrays, and human faces. To physically realize the concept of 2D material programming, they used a digital light 4D printing method developed by Yum and shared in Nature Communications in 2018.

"Our 2D-printing process can simultaneously print multiple 2D materials encoded with individually customized designs and transform them on demand and in parallel to programmed 3D structures," said Amirali Nojoomi, Yum's former graduate student and first author of the paper. "From a technological point of view, our approach is scalable, customizable, and deployable, and it can potentially complement existing 3D-printing methods."

The researchers also introduced the concept of cone flattening, where they program 2D materials using a cone surface to increase the accessible space of 3D shapes. To solve a shape selection problem, they devised shape-guiding modules in 2D material programming that steer the direction of shape morphing toward targeted 3D shapes. Their flexible 2D-printing process can also enable multimaterial 3D structures.

"Dr. Yum's innovative research has many potential applications that could change the way we look at soft engineering systems," said Stathis Meletis, chair of the Materials Science and Engineering Department. "His pioneering work is truly groundbreaking."

Credit: 
University of Texas at Arlington