Tech

Increased activity not always the best advice for neck and back pain

The study primarily includes industrial workers and employees in jobs with a lot of physical activity, such as nurses, cleaners and people in service professions who are required to stand a lot.

Moving more and sitting less has been the ongoing mantra, but this study comes to some different conclusions.

"Regular physical activity is still an important key to good health and disease prevention. Our message is that people who have physical work may benefit from taking rest breaks during the work day," says Cecilie K. Øverås. She is a PhD candidate at the Norwegian University of Science and Technology's (NTNU) Department of Public Health and at the University of Southern Denmark's Department of Sports Science and Clinical Biomechanics.

"This could reduce the risk of neck and back pain, which is one of the leading causes of disability and impaired quality of life," she says.

About 90 per cent of us experience one or more neck or back pain episodes in life. Some people experience protracted pain. Lower back pain is the leading cause of sick leave and disability in Europe.

The study at NTNU is part of the EU's Back-UP project, which is aimed at finding better and more individualized methods for treating neck and back pain.

Øverås and study co-authors conducted a systematic review of research in a field that has previously shown some inconsistent results. The literature search yielded ten articles that used objective measurements of physical activity, all of which were included in the review.

"Self-reporting of physical behaviour has proven to be unreliable. As a rule, we think that we sit less than we actually do. In the studies we looked at, objective measurements were taken in people's daily lives and included both work and leisure activity. The equipment used included pedometers and accelerometers that can measure energy consumption for various types of activities - like sitting, standing, or walking," says Øverås.

Other research results have shown that a high physical activity level at work is associated with increased sick leave. Øverås therefore believes it is important to find a good balance between activity and rest.

A nurse who's walked 20,000 steps during the workday may not need advice on taking a walk to help with her back pain in her time off. But perhaps strength training would be beneficial for her back?

Health care professionals who advise patients on activity need to talk with each patient so that recommendations are tailored to the individual and take the overall activity burden into account.

The type of physical activity is key. A lot of activities are good for the back, but others can put a strain on it.

"We're seeing that physical activity at work doesn't necessarily reduce the risk of neck and back pain - on the contrary. On the other hand, physical activity in people's leisure time seems to have a positive effect," says Øverås.

This discrepancy can be explained by the type of physical activity people do at work and in their leisure time.

"In jobs with a lot of physical activity, the pattern of movement is often repetitive and the intensity low - like repeated lifting, or standing and walking for long, continuous periods. Leisure activity often has greater variety, it's fun-filled, and you have control over the duration and intensity," she says.

"In order to safeguard our health, it's important to find the right balance between physical activity at work and in our free time," says Øverås, who is the lead author of the article.

The literature study included one article on workers with sedentary jobs. It showed that walking more in the course of a day to some extent reduced the risk of neck pain.

The HUNT4 study (the fourth health survey in Nord-Trøndelag, now Trøndelag county) includes objective measurements of physical activity, but the results from this round of the study are not yet available.

Credit: 
Norwegian University of Science and Technology

Topology sheds new light on synchronization in higher-order networks

image: Schematic representation of a simplicial complex capturing higher-order interactions and sustaining topological signals (on the left) and evidence for explosive synchronization in the higher-order Kuramoto model (on the right).

Image: 
Professor Ginestra Bianconi, Queen Mary University of London

Like an orchestra playing in time without a conductor, the elements of a complex system can naturally synchronize with each other. This collective phenomenon, known as synchronization, occurs throughout nature, from neurons firing together in the brain to fireflies flashing in unison in the dark.

The Kuramoto model is used to study synchronization observed in complex systems. Complex systems are often mathematically represented by networks, where components in the system are represented as nodes, and the links between nodes show interactions between them.

Most studies of synchronization have focused on networks, where nodes host dynamical oscillators that behave like clocks and couple with their neighbours along the links of the network. However, the vast majority of complex systems have a richer structure than networks and include 'higher-order' interactions that occur between more than two nodes. These higher-order networks are called simplicial complexes and have been studied extensively by mathematicians working in discrete topology.

Now, research led by Professor Ginestra Bianconi, Professor of Applied Mathematics at Queen Mary University of London, proposes a novel 'higher-order' Kuramoto model that combines topology with dynamical systems and characterises synchronization in higher-order networks for the first time.

The study found that higher-order synchronization occurs abruptly, in an "explosive" way, which differs from the standard Kuramoto model where synchronization occurs gradually.

Mathematician Christiaan Huygens first identified synchronization in 1665 when he observed that two pendulum clocks suspended from the same wooden beam swung in time with each other. However, it wasn't until 1974 that a simple mathematical model to describe this collective phenomenon was proposed by Japanese physicist Yoshiki Kuramoto.

Kuramoto's model captures synchronisation in a large network where each node hosts a clock-like oscillator, which is coupled to other oscillators on neighbouring nodes. In the absence of links between the nodes, each oscillator obeys its own dynamics and is unaffected by its neighbours. However, when the coupling between neighbouring nodes exceeds a given value, the oscillators start to beat at the same frequency.
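For readers who want to see this mechanism concretely, the sketch below simulates the standard node-based Kuramoto model on a small random network. The network, frequencies, coupling strength and other parameters are illustrative choices, not values from the study.

```python
import numpy as np

# Standard (node-based) Kuramoto model on a random network:
#   dtheta_i/dt = omega_i + (K/N) * sum_j A_ij * sin(theta_j - theta_i)
# All values below are illustrative, not taken from the study.
rng = np.random.default_rng(0)
N, K, dt, steps = 50, 6.0, 0.01, 5000
A = (rng.random((N, N)) < 0.5).astype(float)   # random adjacency matrix
A = np.triu(A, 1); A = A + A.T                 # symmetric, no self-loops
omega = rng.normal(0.0, 1.0, N)                # natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, N)         # initial phases

for _ in range(steps):
    # pairwise phase differences, masked by the adjacency matrix
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta += dt * (omega + (K / N) * coupling)

# Order parameter r in [0, 1]: near 0 = incoherent, near 1 = synchronized.
# With weak coupling r stays low; above a critical coupling strength the
# oscillators lock and r grows gradually, which is the smooth transition the
# article contrasts with the abrupt, higher-order case.
r = np.abs(np.exp(1j * theta).mean())
print(f"order parameter r = {r:.2f}")
```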

While the Kuramoto model describes synchronization of dynamics associated with the nodes of a network, in simplicial complexes higher-order objects, such as links or triangles, can also exhibit dynamical or 'topological' signals such as fluxes.

In the new study, the researchers propose a higher-order Kuramoto model that can describe synchronization of these topological signals. As topological signals, such as fluxes, can be found in the brain and in biological transport networks, the researchers suggest this new model could reveal higher-order synchronization that has previously gone unnoticed.

Professor Bianconi, lead author of the study, said: "We combined Hodge theory, an important branch of topology, with the theory of dynamical systems to shed light on higher-order synchronization. With our theoretical framework we can treat synchronization of topological dynamical signals associated to links, like fluxes, or to triangles or other higher-order building blocks of higher-order networks. These signals can undergo synchronization, but this synchronization can be unnoticed if the correct topological transformations are not performed. What we propose here is the equivalent of a Fourier transform for topological signals that can reveal this transition in real systems such as the brain".
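As a rough illustration of what a 'topological signal' on links looks like, the toy sketch below follows one published formulation of a higher-order Kuramoto model, in which phases live on the links of a simplicial complex and are coupled through the node-link and link-triangle incidence matrices B1 and B2. The single-triangle complex and all parameter values are invented for illustration and are not taken from the study.

```python
import numpy as np

# Toy higher-order Kuramoto model: phases phi live on the three links of a
# single filled triangle (nodes 1-3; links (1,2), (1,3), (2,3); one triangle).
# B1 is the node-link incidence matrix, B2 the link-triangle incidence matrix.
# Coupling follows the general form
#   dphi/dt = omega - sigma * B1^T sin(B1 phi) - sigma * B2 sin(B2^T phi)
# Everything here is an illustrative sketch, not the authors' code.
B1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]], dtype=float)
B2 = np.array([[1.0], [-1.0], [1.0]])    # boundary of triangle (1,2,3) in the link basis

rng = np.random.default_rng(1)
omega = rng.normal(0.0, 1.0, 3)          # natural frequencies of the link phases
phi = rng.uniform(0.0, 2 * np.pi, 3)     # initial link phases
sigma, dt = 2.0, 0.01

for _ in range(5000):
    phi += dt * (omega
                 - sigma * B1.T @ np.sin(B1 @ phi)
                 - sigma * B2 @ np.sin(B2.T @ phi))

# Hodge-style projections of the link signal: one part "flows into the nodes",
# the other "circulates around the triangle". Inspecting these projected
# components, rather than the raw link phases, is what can reveal the
# otherwise hidden synchronization transition.
print("projection onto nodes   :", B1 @ phi)
print("projection onto triangle:", B2.T @ phi)
```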

The discontinuous transition found by the study also suggests that the synchronization phenomenon is not only spontaneous but emerges abruptly, revealing how topology can induce dramatic changes of the dynamics at the onset of the synchronization transition.

Credit: 
Queen Mary University of London

The concept of creating a "brain-on-chip" system revealed

image: FIG. 2 | Roadmap for memristive neuromorphic and neurohybrid systems

Image: 
Lobachevsky University

Lobachevsky University scientists in collaboration with their colleagues from Russia, Italy, China and the United States have proposed the concept of a memristive neurohybrid chip to be used in compact biosensors and neuroprostheses. The concept is based on the existing and forward-looking solutions at the junction of neural cellular and microfluidic technologies that make it possible to grow a spatially ordered living neural network. In combination with CMOS-compatible technologies for creating microelectrode matrices and arrays of memristive devices, this integrated approach will be used for registering, processing and stimulation of bioelectrical activity in real time.

According to Alexey Mikhaylov, head of the laboratory at the Lobachevsky University's Research Institute for Physics and Technology, the interaction of different subsystems is organized on a single crystal (chip) and is controlled by built-in analog-to-digital circuits. "The implementation of a biocompatible microelectronic system, along with the development of cellular technology, will provide a breakthrough in neuroprosthetics by offering an important competitive advantage: a miniature bioelectrical sensor based on micro- and nanostructures with an option to store and process signals in multiple manners, including feed-forward approach and feedback loops, may serve as an active neural interface for intelligent control and management of neuronal structures.

"This potential (unattainable with the use of traditional neural interface architectures) can be extended to other types of bioelectric signals for registering signals of brain, heart and muscular activity, as well as the state of the skin, using portable signal processing and diagnostics systems," says Alexey Mikhaylov.

To develop and fabricate bidirectional neurointerfaces, scientists currently apply complex electronic circuits realising special mathematical models and neuromorphic principles of information processing. Such electronic systems use traditional components and cannot meet the requirements of energy efficiency and compactness for safe interaction with living cultures or tissues on the same chip.

"Memristors created by scientists from Russia and Italy have the unique property of nonlinear resistive memory and are promising elements for analog information processing systems, including those with a neuron-like structure. They can also serve as electrophysiological activity sensors performing at the same time the function of accumulation and non-volatile storage of information", Alexey Mikhaylov notes.

A schematic representation of the proposed neurohybrid system is shown in Fig.1A. It consists of several functional layers combined in one CMOS-integrated chip. The top layer is a part of the neuronal system represented here by a culture of dissociated hippocampal cells grown on a multielectrode array and functionally ordered by a special layout of microfluidic channels shown in Fig.1B.

The microelectrode layer serves for extracellular registration and stimulation of neurons in vitro. It is implemented on the top metallization layers of the CMOS layer together with an array of memristive devices (Fig.1D).

"The simplest task performed by memristive devices is the direct processing of spiking activity of the biological network (Fig.1C); however, self-learning neural network architectures based on fully connected cross-bar memristive arrays can be designed for adaptive decoding of spatiotemporal characteristics of bioelectric activity. The output of this artificial network (Fig. 1F) can be used to control the cellular network via gradual modulation of extracellular stimulation (Fig.1G) according to the given protocol. Analog and digital circuits for accessing and controlling the multielectrode array and memristive devices, amplifying, generating, and transmitting signals between layers should be implemented in the main CMOS layer (Fig.1E)", Alexey Mikhaylov explains.

To create a neurohybrid chip, collaborative design and optimization of all these elements at the levels of materials, devices, architectures, and systems will be required. Of course, this work must keep pace with the development of bio- and neurotechnologies to address a number of problems related primarily to biocompatibility, mechanical effects, geometry, location and miniaturization of microelectrodes and probes, and also to deal with the reaction of living culture/tissue to the interface with an artificial electronic subsystem.

In the words of Alexey Mikhaylov, the concept reveals the idea of creating a brain-on-chip system belonging to a more general class of memristive neurohybrid systems for next-generation robotics, artificial intelligence and personalized medicine.

To illustrate the proposed approaches and related products on a foreseeable time scale, a roadmap of memristive neuromorphic and neurohybrid systems has been proposed (Fig. 2). The key focus in the roadmap will be on the development and commercialization of specialized hardware using the architecture and principles of biological neural networks to support the development and mass introduction of artificial intelligence, machine learning, neuroprosthetics and neural interface technologies.

"We assume the roadmap had its starting point in 2008, just as the current wave of interest in memristors was getting underway, and this roadmap includes ongoing research and development in broad areas of neurobiology and neurophysiology," comments Alexey Mikhaylov.

The following product niches are envisaged by researchers in the roadmap at different stages of the work in this direction: neuromorphic computing devices; non-invasive neural interfaces; neuroimplants, neuroprostheses and invasive neural interfaces, etc.

"The unique properties of memristive devices determine their critical importance in the development of applied neuromorphic and neurohybrid systems for neurocomputing devices, brain-computer interfaces and neuroprosthetics. These areas will take a significant share of the world high technologies market worth trillions of dollars by 2030, given the speed of development and implementation of artificial intelligence technologies, the Internet of Things, "big data" and "smart city" technologies, robotics, and - in the near future - neuroprosthetics and instrumental correction / support / enhancement of human cognitive abilities", says Alexey Mikhaylov in conclusion.

Credit: 
Lobachevsky University

High tech printing makes checking banknotes possible in the blink of an eye

New '3D micro-optic' security features in banknotes enable the general public to detect counterfeits reliably within a fraction of a second, according to new research at the University of Birmingham.

During a typical cash transaction, people glance at banknotes for about a second, not giving them much time to check banknotes for authenticity.

The team, in the University of Birmingham's School of Psychology, tested the new security feature on bank notes designed by the US-based company, Crane Currency. Incorporating a specially-designed micro-optic lens that focuses on an icon or image underneath, the technology makes an image appear in 3D and animates it as the note is moved around.

Crane Currency has designed a number of banknotes incorporating 3D micro-optic security features, including currency in Uzbekistan and an award-winning note in circulation on the island of Aruba, in the Caribbean Sea. These have been in circulation for about a year, and this study is the first to confirm the reliability of the new security feature from a user point of view.

In the study, 46 participants reviewed a series of 108 banknotes, each incorporating a single security feature and each with a specific denomination. They were asked to report each note's denomination and make a judgement about whether the banknote appeared authentic after seeing the note for just a split second. In a second phase of the study, participants were asked to review the notes under low light levels to see if the security feature was still easily distinguishable.

The results, presented at Optical Document Security 2020, a foremost industry conference, showed that participants were able to reliably pick out authentic notes from counterfeits when they had less than half a second to view the notes. Remarkably, this was also the case when lighting conditions were poor.

Professor Jane Raymond, Professor of Visual Cognition at the University of Birmingham, says: "Most people trust their banknotes, are usually in a hurry, and often handle cash in places where the lighting is bad. The big problem is that the security features on most banknotes from around the world only work well when people slow down and look carefully at them under good light. So, in lots of situations, it is not so hard to miss a fake banknote. Security features need to give people fast, easy-to-see signals that work under all sorts of lighting conditions.

"Human perception can be extraordinarily sensitive - with the 3D features, our participants were able to pick out the fake bank notes from the real ones in a fraction of a second. This research shows the real potential of modern 3D technologies to reduce the circulation of counterfeit bank notes."

Credit: 
University of Birmingham

New marine molecules with therapy potential against Alzheimer's disease

image: These are primary neurons in culture labeled with an antibody.

Image: 
Albert Giralt / UNIVERSITY OF BARCELONA

An interdisciplinary research study at the University of Barcelona has identified two potential candidates to treat Alzheimer's disease. These are two marine molecules, meridianine and lignarenone B, able to alter the activity of GSK3B, a protein associated with several neurodegenerative diseases.

The researchers used several biocomputational techniques to detect these previously unknown compounds, which were later validated experimentally in cultures of mouse neuronal cells. These results will allow researchers to better understand the functioning of the GSK3B protein and provide a promising starting point for the development of new drugs against Alzheimer's disease.

The paper, published in the journal Biomolecules, is the result of the collaboration between two UB research teams with the participation of Laura Llorach Pares and Conxita Àvila, from the Faculty of Biology and the Biodiversity Research Institute (IRBio) of the UB, and Ened Rodríguez, Albert Giralt and Jordi Alberch, from the Faculty of Medicine and Health Sciences and the Institute of Neurosciences of the UB (UBNeuro). Other participants were the technological company Molomics and the former company Mind the Byte.

A promising but delicate therapeutic target

GSK3B is an abundant protein in the brain with an important role in the development of Alzheimer's disease and other neurodegenerative diseases, since changes in its activity negatively affect the basic synaptic signals involved in learning and memory, which can even be interrupted. This is why, over recent years, researchers have made many efforts to design GSK3B inhibitors, although without sufficient results so far. "GSK3B has always been an appreciated molecule in the treatment of Alzheimer's disease. However, clinical trials with all potential inhibitors caused adverse effects, which was a disappointment. We are still far from any clinical application, but the molecules we described have the potential to overcome the limitations of other inhibiting drugs", says Albert Giralt, also a member of IDIBAPS and the Network Center for Biomedical Research in Neurodegenerative Diseases (CIBERNED).

Using biocomputing and molecular dynamics simulation techniques, the researchers analysed the potential of a group of marine molecular families, isolated and characterized by the team of Conxita Àvila, to inhibit GSK3B activity. "These are meridianins, a family of alkaloids from marine benthic organisms from Antarctica, and lignarenones, obtained from a gastropod mollusc from the waters of the Mediterranean Sea", notes Àvila.

Impact on neuronal plasticity

Then, the researchers carried out an in vitro experimental validation of the inhibiting ability of these molecules using cultures of mouse neurons. The results show that both marine compounds do not cause neurotoxic effects and, in addition, promote structural neuronal plasticity. "The new molecules do not inhibit GSK3B excessively, which is interesting, since inhibiting it excessively could be the cause of some of the adverse effects described for other inhibitor drugs. They also induce the growth of the neuronal tree, an aspect of great interest in Alzheimer's disease, where atrophy and dysfunction play a more relevant role in the appearance of symptoms than neuronal death does", notes Albert Giralt.

According to the researchers, this is a relevant discovery, since it is not easy to find new molecules with therapeutic potential for Alzheimer's, especially when many therapeutic targets have proved disappointing. However, Giralt says this is only the beginning: "To confirm the potential of these new molecules, the next step is to evaluate over the coming years whether treatment with these drugs improves symptomatology in mouse models of Alzheimer's, and if so, to try to conduct clinical studies with these molecules", concludes the researcher.

Credit: 
University of Barcelona

Hydropower plants to support solar and wind energy in West Africa

image: Solar street lighting in Niger.

Image: 
Sebastian Sterl

Hydropower plants can support solar and wind power, rather unpredictable by nature, in a climate-friendly manner. A new study in the scientific journal Nature Sustainability has now mapped the potential for such "solar-wind-water" strategies for West Africa: an important region where the power sector is still under development, and where generation capacity and power grids will be greatly expanded in the coming years. "Countries in West Africa therefore now have the opportunity to plan this expansion according to strategies that rely on modern, climate-friendly energy generation," says Sebastian Sterl, energy and climate scientist at Vrije Universiteit Brussel and KU Leuven and lead author of the study. "A completely different situation from Europe, where power supply has been dependent on polluting power plants for many decades - which many countries now want to rid themselves of."

Solar and wind power generation is increasing worldwide and becoming cheaper and cheaper. This helps to keep climate targets in sight, but also poses challenges. For instance, critics often argue that these energy sources are too unpredictable and variable to be part of a reliable electricity mix on a large scale.

"Indeed, our electricity systems will have to become much more flexible if we are to feed large amounts of solar and wind power into the grid. Flexibility is currently mostly provided by gas power plants. Unfortunately, these cause a lot of CO2 emissions," says Sebastian Sterl, energy and climate expert at Vrije Universiteit Brussel (VUB) and KU Leuven. "But in many countries, hydropower plants can be a fossil fuel-free alternative to support solar and wind energy. After all, hydropower plants can be dispatched at times when insufficient solar and wind power is available."

The research team, composed of experts from VUB, KU Leuven, the International Renewable Energy Agency (IRENA), and Climate Analytics, designed a new computer model for their study, running on detailed water, weather and climate data. They used this model to investigate how renewable power sources in West Africa could be exploited as effectively as possible for a reliable power supply, even without large-scale storage. All this without losing sight of the environmental impact of large hydropower plants.

"This is far from trivial to calculate," says Prof. Wim Thiery, climate scientist at the VUB, who was also involved in the study. "Hydroelectric power stations in West Africa depend on the monsoon; in the dry season they run on their reserves. Both sun and wind, as well as power requirements, have their own typical hourly, daily and seasonal patterns. Solar, wind and hydropower all vary from year to year and may be impacted by climate change. In addition, their potential is spatially very unevenly distributed."

West African Power Pool

The study demonstrates that it will be particularly important to create a "West African Power Pool", a regional interconnection of national power grids. Countries with a tropical climate, such as Ghana and the Ivory Coast, typically have a lot of potential for hydropower and quite high solar radiation, but hardly any wind. The drier and more desert-like countries, such as Senegal and Niger, hardly have any opportunities for hydropower, but receive more sunlight and more wind. The potential for reliable, clean power generation based on solar and wind power, supported by flexibly dispatched hydropower, increases by more than 30% when countries can share their potential regionally, the researchers discovered.

All measures taken together would allow roughly 60% of the current electricity demand in West Africa to be met with complementary renewable sources, of which roughly half would be solar and wind power and the other half hydropower - without the need for large-scale battery or other storage plants. According to the study, within a few years, the cost of solar and wind power generation in West Africa is also expected to drop to such an extent that the proposed solar-wind-water strategies will provide cheaper electricity than gas-fired power plants, which currently still account for more than half of all electricity supply in West Africa.

Better ecological footprint

Hydropower plants can have a considerable negative impact on local ecology. In many developing countries, numerous controversial plans for new hydropower plants have been proposed. The study can help to make future investments in hydropower more sustainable. "By using existing and planned hydropower plants as optimally as possible to massively support solar and wind energy, one can at the same time make certain new dams superfluous," says Sterl. "This way two birds can be caught with one stone. Simultaneously, one avoids CO2 emissions from gas-fired power stations and the environmental impact of hydropower overexploitation."

Global relevance

The methods developed for the study are easily transferable to other regions, and the research has worldwide relevance. Sterl: "Nearly all regions with a lot of hydropower, or hydropower potential, could use it to compensate for shortfalls in solar and wind power." Various European countries, with Norway at the forefront, have shown increased interest in recent years in deploying their hydropower to support solar and wind power in other EU countries. By exporting Norwegian hydropower during times when other countries face solar and wind power shortfalls, the European energy transition can be advanced.

Credit: 
KU Leuven

Using electrical stimulus to regulate genes

image: A team of researchers led by ETH professor Martin Fussenegger has succeeded in using an electric current to directly control gene expression for the first time. Their work provides the basis for medical implants that can be switched on and off using electronic devices outside the body.

Image: 
Illustration: Katja Schubert / after Krawczyk K et al., Science 2020

This is how it works: a device containing insulin-producing cells and an electronic control unit is implanted in the body of a diabetic. As soon as the patient eats something and their blood sugar rises, they can use an app on their smartphone to trigger an electrical signal, or they can preconfigure the app to do this automatically if the meal has been entered in advance. A short while afterwards, the cells release the amount of insulin needed to regulate the patient's blood sugar level.

This might sound like science fiction but it could soon become reality. A team of researchers led by Martin Fussenegger, ETH Professor of Biotechnology and Bioengineering at the Department of Biosystems Science and Engineering in Basel, have presented their prototype for such an implant in a new paper in the journal Science. Their study is the first to examine how gene expression can be directly activated and regulated using electrical signals. When testing their approach in mice, the researchers established that it worked perfectly.

The Basel-?based scientists have a wealth of experience in developing genetic networks and implants that respond to specific physiological states of the body, such as blood lipid levels that are too high or blood sugar levels that are too low. Although such networks respond to biochemical stimuli, they can also be controlled by alternative, external influences like light. "We've wanted to directly control gene expression using electricity for a long time; now we've finally succeeded," Fussenegger says.

A circuit board and cell container hold the key

The implant the researchers have designed is made up of several parts. On one side, it has a printed circuit board (PCB) that accommodates the receiver and control electronics; on the other is a capsule containing human cells. Connecting the PCB to the cell container is a tiny cable.

A radio signal from outside the body activates the electronics in the implant, which then transmits electrical signals directly to the cells. The electrical signals stimulate a special combination of calcium and potassium channels; in turn, this triggers a signalling cascade in the cell that controls the insulin gene. Subsequently, the cellular machinery loads the insulin into vesicles that the electrical signals cause to fuse with the cell membrane, releasing the insulin within a matter of minutes.

Coming soon: the Internet of the Body

Fussenegger sees several advantages in this latest development. "Our implant could be connected to the cyber universe," he explains. Doctors or patients could use an app to intervene directly and trigger insulin production, something they could also do remotely over the internet as soon as the implant has transmitted the requisite physiological data. "A device of this kind would enable people to be fully integrated into the digital world and become part of the Internet of Things - or even the Internet of the Body," Fussenegger says.

When it comes to the potential risk of attacks by hackers, he takes a level-headed view: "People already wear pacemakers that are theoretically vulnerable to cyberattacks, but these devices have sufficient protection. That's something we would have to incorporate in our implants, too," he says.

As things stand, the greatest challenge he sees is on the genetic side of things. To ensure that no damage is caused to the cells and genes, he and his group need to conduct further research into the maximum current that can be used. The researchers must also optimise the connection between the electronics and the cells.

And a final hurdle to overcome is finding a new, easier and more convenient way to replace the cells used in the implant, something that must be done approximately every three weeks. For their experiments, Fussenegger and his team of researchers attached two filler necks to their prototype in order to replace the cells; they want to find a more practical solution.

Before their system can be used in humans, however, it must still pass a whole series of clinical tests.

Credit: 
ETH Zurich

New drug combinations help overcome resistance to immunotherapy

FINDINGS

A new study from researchers at the UCLA Jonsson Comprehensive Cancer Center helps explain how disruptions in genes can lead to resistance to one of the leading immunotherapies, PD-1 blockade, and how new drug combinations could help overcome resistance to anti-PD-1 therapy in a mechanistically based way.

The team found that changing the tumor microenvironment with toll-like receptor 9 agonists, which are made up of sequences of nucleic acids that mimic a bacterial infection, or with NKTR-214, another immunotherapy drug that stimulates a natural killer cell response, can help induce a potent immune reaction that enables the immune system to attack resistant tumors more effectively. When these drugs were given in combination with PD-1 blockade, they were able to overcome genetic immunotherapy resistance in preclinical models.

BACKGROUND

The development of immunotherapies, like PD-1 blockade, has changed the landscape of cancer therapy. It is extremely effective for a substantial number of patients, even those with lethal tumors. Despite its success in treating people with deadly forms of cancer, there are still many people who do not benefit from the treatment or who eventually experience a relapse of their cancer. Various combinations of PD-1 blockade with other therapies are being investigated, but there is currently no easy way to identify which therapeutic agents can best improve the immune response given the underlying mechanisms of resistance to PD-1 blockade. UCLA researchers have been seeking ways to better understand the biology of these resistance mechanisms in order to develop rationally designed combination therapies to overcome this resistance.

METHOD

Using CRISPR/Cas9 genome editing, the team created genetic models of resistance by knocking out JAK1, JAK2 and B2M in human and murine cell lines. They studied the functional mechanisms of the interferon-gamma signaling changes, in human melanoma cell lines and in mouse models of cancer, that were found to lead to resistance to anti-PD-1 therapy. Based on the molecular understanding of these pathways, the team then tested strategies to overcome resistance in two mouse models of anti-PD-1 immunotherapy, evaluating rationally designed combination treatments in mice to determine the best choices of combined therapy based on these mechanisms of acquired resistance.

IMPACT

Identifying how to improve the immune response based on the underlying mechanisms of immunotherapy resistance has the potential to improve the antitumor activity of cancer immunotherapy and provide more therapies to more patients with hard-to-treat cancers. The combination therapies of PD-1 blockade with NKTR-214 or toll-like receptor 9 agonists that were identified in the study are now being assessed in human clinical trials for patients whose tumors have not responded to anti-PD-1 therapy.

Credit: 
University of California - Los Angeles Health Sciences

gnomAD Consortium releases its first major studies of human genetic variation

For the last eight years, the Genome Aggregation Database (gnomAD) Consortium (and its predecessor, the Exome Aggregation Consortium, or ExAC) has been working with geneticists around the world to compile and study more than 125,000 exomes and 15,000 whole genomes from populations around the world.

Now, in seven papers published in Nature, Nature Communications, and Nature Medicine, gnomAD Consortium scientists describe their first set of discoveries from the database, showing the power of this vast collection of data. Together the studies:

1. present a more complete catalog and understanding of a class of rare genetic variation called loss-of-function (LoF) variants, which are thought to disrupt genes' encoded proteins;

2. introduce the largest comprehensive reference map of an understudied yet important class of genetic variation called structural variants;

3. show how tools that account for unique forms of variation and variants' biological context can help clinical geneticists when trying to diagnose patients with rare genetic disease; and

4. illustrate how population-scale datasets like gnomAD can help evaluate proposed drug targets.

Researchers at the Broad Institute of MIT and Harvard and Massachusetts General Hospital (MGH) served as co-first or co-senior authors on all of the studies, with scientists from Imperial College London in the United Kingdom, the direct-to-consumer genetics company 23andMe, and other institutions contributing to individual papers. More than 100 scientists and groups internationally have provided data and/or analytical effort to the consortium.

"These studies represent the first significant wave of discovery to come out of the gnomAD Consortium," said Daniel MacArthur, scientific lead of the gnomAD project, a senior author on six of the studies, an institute member in the Program in Medical and Population Genetics at Broad Institute, and now director of Centre for Population Genomics at the Garvan Institute of Medical Research and Murdoch Children's Research Institute in Australia. "The power of this database comes from its sheer size and population diversity, which we were able to reach thanks to the generosity of the investigators who contributed data to it, and of the research participants in those contributing studies."

"In a sense, gnomAD is the product of a consortium of consortia, in that the underlying data represents the work and contributions of many groups who have been collecting exome and genome sequences as a way of understanding human biology," said Konrad Karczewski, first author on the collection's flagship paper in Nature and a computational biologist at Broad and MGH's Analytic and Translational Genetics Unit. "Each of these papers represents someone bringing a new angle to the dataset, saying, 'I have an idea on how we can put all of this to work,' and creating a new resource for the genetics community. It was amazing to see it unfold."

GNOMAD LOOKBACK

MacArthur and his colleagues at Broad and MGH built ExAC and then gnomAD to expand on the work of the 1000 Genomes Project, the first large-scale international effort to catalog human genetic variation, and other projects.

"In 2012, my lab was sequencing the genomes of patients with rare disease, and found that existing catalogs of normal variation weren't large or diverse enough to help us interpret the genetic changes we were seeing," MacArthur recalled. "At the same time, our colleagues around the world had sequenced tens of thousands of people for studies of common, complex disorders. So we set about bringing these datasets together to create a reference dataset for rare disease research."

The ExAC consortium released its first collection of whole exome data in October 2014. It then started gathering whole genome data, evolving into the gnomAD Consortium and releasing gnomAD v1.0 in February 2017.

Subsequent gnomAD releases focused on increasing the numbers of exomes and genomes, the volume of variants highlighted in the data, and the diversity of the dataset.

The new papers are based on the gnomAD v2.1.1 dataset, which includes genomes and exomes from more than 25,000 people of East and South Asian descent, nearly 18,000 of Latino descent, and 12,000 of African or African-American descent.

COMPREHENSIVE CATALOG

Two of the seven papers show how large genomic datasets can help researchers learn more about rare or understudied types of genetic variants.

The flagship study, led by Karczewski and MacArthur and published in Nature, describes gnomAD and maps loss-of-function (LoF) variants: genetic changes that are thought to completely disrupt the function of protein-coding genes. The authors identified more than 443,000 LoF variants in the gnomAD dataset, dramatically exceeding all previous catalogs. By comparing the number of these rare variants in each gene with the predictions of a new model of the human genome's mutation rate, the authors were also able to classify all protein-coding genes according to how tolerant they are to disruptive mutations -- that is, how likely genes are to cause significant disease when disrupted by genetic changes. This new classification scheme pinpoints genes that are more likely to be involved in severe diseases such as intellectual disability.
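The comparison behind that classification can be sketched in a few lines. The illustration below is a toy version only: the actual gnomAD analysis uses a detailed sequence-context mutation-rate model and confidence intervals on the observed/expected ratio, and the gene names, counts and threshold here are invented.

```python
# Toy illustration of loss-of-function (LoF) constraint: compare the number of
# LoF variants observed in a gene with the number expected under a neutral
# mutation-rate model. A low observed/expected (o/e) ratio suggests selection
# removes LoF variants, i.e. the gene is intolerant to disruption.
# Gene names, counts and the threshold are invented for illustration.
genes = {
    # gene: (observed LoF variants, expected LoF variants under the mutational model)
    "GENE_A": (2, 41.3),
    "GENE_B": (37, 39.8),
}
for gene, (observed, expected) in genes.items():
    oe = observed / expected
    verdict = "likely intolerant to LoF variation" if oe < 0.35 else "apparently LoF-tolerant"
    print(f"{gene}: o/e = {oe:.2f} -> {verdict}")
```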

"The gnomAD catalog gives us our best look so far at the spectrum of genes' sensitivity to variation, and provides a resource to support gene discovery in common and rare disease," Karczewski explained.

While Karczewski and MacArthur's study focused on small variants (point mutations, small insertions or deletions, etc.), graduate student Ryan Collins, Broad associated scientist Harrison Brand, institute member Michael Talkowski, and colleagues used gnomAD to explore structural variants. This class of genomic variation includes duplications, deletions, inversions, and other changes involving larger DNA segments (generally greater than 50-100 bases long). Their study, also published in Nature, presents gnomAD-SV, a catalog of more than 433,000 structural variants identified within nearly 15,000 of the gnomAD genomes. The variants in gnomAD-SV represent most of the major known classes of structural variation and collectively form the largest map of structural variation to date.

"Structural variants are notoriously challenging to identify within whole genome data, and have not previously been surveyed at this scale," noted Talkowski, who is also a faculty member in the Center for Genomic Medicine at MGH. "But they alter more individual bases in the genome than any other form of variation, and are well established drivers of human evolution and disease."

Several surprising findings came out of their survey. For instance, the authors found that at least 25 percent of all rare LoF variants in the average individual genome are actually structural variants, and that many people carry what should be deleterious or harmful structural alterations, but without the phenotypes or clinical outcomes that would be expected.

They also noted that many genes were just as sensitive to duplication as to deletion; that is, from an evolutionary perspective, gaining one or more copies of a gene can be just as undesirable as losing one.

"We learned a great deal by building this catalog in gnomAD, but we've clearly only scratched the surface of understanding the influence of genome structure on biology and disease," Talkowski said.

TOOLS FOR BETTER DIAGNOSIS

Three of the papers reveal how gnomAD's deep catalogs of different types of genetic variation and the cellular context in which variants arise can help clinical geneticists more accurately determine whether a given variant might be protective, neutral, or harmful in patients.

In a Nature paper, Beryl Cummings, a former Broad/MGH graduate student now at Maze Therapeutics, MacArthur, and colleagues found that tissue-based differences in how segments of a given gene are expressed can change the downstream effects of variants within those segments on biology and disease risk. The team combined data from gnomAD and the Genotype Tissue Expression (GTEx) project to develop a method that uses these differences to assess the clinical significance of variants.

In Nature Communications, MacArthur, graduate student Qingbo Wang, and collaborators surveyed multinucleotide variants -- ones consisting of two or more nearby base pair changes that are inherited together. Such variants can have complex effects, and this study represents the first attempt to systematically catalog these variants, examine their distribution throughout the genome, and predict their effects on gene structure and function.

And in a separate Nature Communications study, MacArthur, Nicola Whiffin and James Ware of Imperial College London, and colleagues explored the impact of DNA variants arising in the 5-prime untranslated regions of genes, which are located just ahead of where the cell's transcriptional machinery starts reading a gene's protein code. Variants in these regions can trick a cell into starting to read a gene in the wrong place, but such variants haven't previously been well documented.

"Clinical laboratories use gnomAD every day," said Heidi Rehm, a clinical geneticist; an institute member in Broad's MPG and medical director of the Clinical Research Sequencing Platform at Broad; chief genomics officer in the MGH Department of Medicine; and co-chair with Broad institute member Mark Daly of the gnomAD steering committee. "The methods in these studies are already helping us better interpret a patient's genetic test results."

GUIDING DRUG DEVELOPMENT

The remaining two gnomAD studies describe how diverse, population-scale genetic data can help researchers assess and pick the best drug targets.

In 2018, Broad associated scientist Eric Minikel mused on his research blog about whether genes with naturally occurring predicted LoF variants could be used to assess the safety of targeting those genes with drugs. He wrote that if a gene that's naturally inactivated doesn't seem to have harmful effects, perhaps that gene could be safely inhibited with a drug. That blog post became the basis of a Nature paper in which Minikel, MacArthur, and colleagues applied the gnomAD dataset to probe this question. They suggest ways to incorporate insights about LoF variants into the drug development process.

Leveraging the expertise at Broad, The Michael J. Fox Foundation initiated a collaboration between Imperial College's Whiffin, MacArthur, Broad postdoctoral fellow Irina Armean, 23andMe's Aaron Kleinman and Paul Cannon, and others to use LoF variants cataloged in gnomAD, UK Biobank, and 23andMe to study the potential safety liabilities of reducing the expression of a gene called LRRK2, which is associated with risk of Parkinson's disease. In Nature Medicine, they use these data to predict that drugs that reduce LRRK2 protein levels or partially block the gene's activity are unlikely to have severe side effects.

"We've cataloged large amounts of gene-disrupting variation in gnomAD," MacArthur said. "And with these two studies we've shown how you can then leverage those variants to illuminate and assess potential drug targets."

GROWING IMPACT

Public sharing of all data has been a core principle of the gnomAD project from its inception. The data behind these seven papers were publicly released via the gnomAD browser without usage or publication restrictions in 2016.

"The wide-ranging impact this resource has already had on medical research and clinical practice is a testament to the incredible value of genomic data sharing and aggregation," MacArthur said. "More than 350 independent studies have already made use of gnomAD for research on cancer predisposition, cardiovascular disease, rare genetic disorders, and more since we made the data available.

"But we are very far from saturating discoveries or solving variant interpretation," he added. "The next steps for the consortium will be focused on increasing the size and population diversity of these resources, and linking the resulting massive-scale genetic data sets with clinical information."

Credit: 
Broad Institute of MIT and Harvard

NASA looks at inland rainfall from Post-Tropical Cyclone Bertha

image: GPM's core satellite passed over Bertha and analyzed its rainfall rates on May 28 at 1:21 a.m. EDT (0521 UTC). GPM found the heaviest rainfall over south central West Virginia, where rain was falling at a rate of 1 inch (25 mm) per hour. A large area of light rain northwest of the center was falling at around 0.2 inches (about 5 millimeters) per hour.

Image: 
NASA/NRL

NASA's GPM core satellite analyzed rainfall generated from post-tropical cyclone Bertha as it continues to move toward the Great Lakes.

Bertha formed into a tropical storm on May 27, about 30 miles off the South Carolina coast. By 9:30 a.m. EDT, Bertha made landfall along the coast of South Carolina, east of Charleston. Data from NOAA and CORMP buoys showed that maximum sustained winds increased to near 50 mph (80 kph) before landfall. By 2 p.m. EDT, Bertha weakened to a tropical depression and heavy rainfall spread across the Carolinas. By 11 p.m. EDT, heavy rainfall spread across western North Carolina and southwest Virginia into West Virginia. At that time, the center of Bertha was located about 95 miles (150 km) south-southwest of Roanoke, Virginia. By May 28, Bertha had become a post-tropical cyclone.

"Post-tropical cyclone" is a generic term for a former tropical cyclone that no longer possesses sufficient tropical characteristics to be considered a tropical cyclone. Fully extratropical systems, subtropical systems, and remnant lows are the three classes of post-tropical cyclones. Post-tropical cyclones can, however, continue to produce heavy rains and high winds.

On May 28, flash flood watches were in effect for central West Virginia and a small part of coastal North Carolina through early morning. NOAA's National Weather Service Weather Prediction Center in College Park, Maryland, noted, "Bertha is expected to produce total rain accumulations of around one inch from West Virginia through eastern Ohio, southern and western Pennsylvania and far western New York, and 1 to 2 inches from South Carolina across eastern North Carolina into southeast Virginia. Isolated maximum storm total amounts of 4 inches are possible in southern Pennsylvania and parts of the Carolinas and southeast Virginia. This rainfall may produce life threatening flash flooding, aggravate and prolong ongoing river flooding, and produce rapid out of bank rises on smaller rivers."

The Global Precipitation Measurement mission or GPM satellite provided a look at Bertha's rainfall rates on May 28 at 1:21 a.m. EDT (0521 UTC). GPM found the heaviest rainfall over south central West Virginia, where rain was falling at a rate of 1 inch (25 mm) per hour. A large area of light rain northwest of the center was falling at around 0.2 inches (about 5 millimeters) per hour.

At 5 a.m. EDT (0900 UTC), the center of Post-Tropical Cyclone Bertha was located near latitude 38.3 degrees north and longitude 80.8 degrees west, about 80 miles (130 km) north-northwest of Roanoke, Virginia. The post-tropical cyclone is moving toward the north near 28 mph (44 kph) and this motion is expected to continue through midday Thursday, May 28, followed by a turn to the north-northeast. Maximum sustained winds are near 25 mph (35 kph) with higher gusts. The estimated minimum central pressure is 1012 millibars.

Bertha is expected to weaken and dissipate by Thursday evening as it crosses the eastern Great Lakes.

Tropical cyclones/hurricanes are the most powerful weather events on Earth. NASA's expertise in space and scientific exploration contributes to essential services provided to the American people by other federal agencies, such as hurricane weather forecasting.

GPM is a joint mission between NASA and the Japan Aerospace Exploration Agency, JAXA.

By Rob Gutro
NASA's Goddard Space Flight Center

Credit: 
NASA/Goddard Space Flight Center

Configurable circuit technology poised to expand silicon photonic applications

image: The researchers have developed a wafer-scale prober that is being tested at the University of Southampton (left). The prober can autonomously and accurately perform optical and electrical device testing along with laser annealing at an average speed of less than 30 seconds per device. Images on the right show a closer look at the software driven positioning stage for autonomous measurements (top-right) and the input/output fibers positioned on top of the 8-inch wafer (bottom-right).

Image: 
Xia Chen, University of Southampton

WASHINGTON -- Researchers have developed a new way to build power efficient and programmable integrated switching units on a silicon photonics chip. The new technology is poised to reduce production costs by allowing a generic optical circuit to be fabricated in bulk and then later programmed for specific applications such as communications systems, LIDAR circuits or computing applications.

"Silicon photonics is capable of integrating optical devices and advanced microelectronic circuits all on a single chip," said research team member Xia Chen from the University of Southampton. "We expect configurable silicon photonics circuits to greatly expand the scope of applications for silicon photonics while also reducing costs, making this technology more useful for consumer applications."

In The Optical Society (OSA) journal Optics Express, researchers led by Graham Reed demonstrate the new approach in switching units that can be used as building blocks to create larger chip-based, programmable photonic circuits.

"The technology we developed will have a wide range of applications," said Chen. "For example, it could be used to make integrated sensing devices to detect biochemical and medical substances as well as optical transceivers for connections used in high-performance computing systems and data centers."

Erasable components

The new work builds on earlier research in which the investigators developed an erasable version of an optical component known as a grating coupler by implanting germanium ions into silicon. These ions induce damage that changes silicon's refractive index in that area. Heating the local area with a laser annealing process can then reverse the refractive index change and erase the grating coupler.

In the Optics Express paper, the researchers describe how they applied the same germanium ion implantation technique to create erasable waveguides and directional couplers, components that can be used to make reconfigurable circuits and switches. This represents the first time that sub-micron erasable waveguides have been created in silicon.

"We normally think about ion implantation as something that will induce large optical losses in a photonic integrated circuit," said Chen. "However, we found that a carefully designed structure and using the right ion implantation recipe can create a waveguide that carries optical signals with reasonable optical loss."

Building programmable circuits

They demonstrated the new approach by designing and fabricating waveguides, directional couplers and 1 X 4 and 2 X 2 switching circuits, using the University of Southampton's Cornerstone fabrication foundry. Photonic devices from different chips tested both before and after programming with laser annealing showed consistent performance.

Because the technique involves physically changing the routing of the photonic waveguide via a one-time operation, no additional power is needed to retain the configuration when programmed. The researchers have also discovered that electrical annealing, using a local integrated heater, as well as laser annealing can be used to program the circuits.

The researchers are working with a company called ficonTEC to make this technology practical outside the laboratory by developing a way to apply the laser and/or electrical annealing process at wafer scale, using a conventional wafer prober (wafer testing machine), so that hundreds or thousands of chips could be programmed automatically. They are currently working on integrating the laser and electrical annealing processes into such a wafer-scale prober -- an instrument found in most electronic-photonic foundries -- which is being tested at the University of Southampton.

Credit: 
Optica

Environmental groups moving beyond conservation

Although non-governmental organizations (NGOs) have become powerful voices in world environmental politics, little is known of the global picture of this sector. A new study shows that environmental groups are increasingly focused on advocacy in climate change politics and environmental justice. How they do their work is largely determined by regional disparities in human and financial resources.

To understand what these groups are doing and why, researchers from McGill University, the University of Georgia, and the Leibniz Centre of Tropical Marine Research analyzed data from 679 environmental NGOs worldwide in a study for PLOS ONE.

These organizations are usually thought to focus on environmental protection and conservation. However, in examining the mission statements of these groups, the researchers found that the importance of climate politics (engagement on climate change) and environmental justice (respect for nature and human rights) had been grossly underestimated in previous research. They calculated a power index for the NGOs based on their human and financial resources and found that more than 40% of the most powerful organizations focus on these areas in their mission.
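The paper's exact construction of the index is not described here, but one simple way to combine staff numbers and budgets into a single score, shown below purely as an illustration (the study's own index may be built differently), is to normalise each resource and average the two.

```python
# Illustrative "power index" combining staff size and annual budget.
# The study's own index may be constructed differently; all figures are made up.
ngos = {
    # name: (employees, annual budget in USD)
    "NGO_A": (12, 250_000),
    "NGO_B": (450, 30_000_000),
    "NGO_C": (60, 2_000_000),
}

max_staff = max(staff for staff, _ in ngos.values())
max_budget = max(budget for _, budget in ngos.values())

for name, (staff, budget) in ngos.items():
    # normalise each resource to [0, 1] and average the two
    index = 0.5 * (staff / max_staff) + 0.5 * (budget / max_budget)
    print(f"{name}: power index = {index:.2f}")
```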

"There are more powerful organizations working on climate issues than on issues of biodiversity loss or land degradation," says co-author Klara Winkler, a postdoctoral researcher from McGill University. "It is important to be aware that some environmental issues garner more attention than others because it means that these other issues risk being neglected or even forgotten."

The study also shows regional disparities in human resources and financial capacity. Environmental NGOs in Africa and Oceania have the lowest median number of employees, and African NGOs have the lowest median annual budgets. While organizations in North America and Europe have the highest median financial capacity, those in Latin America and the Caribbean have the highest median number of employees.

According to the researchers, these differences likely reflect both labor costs and financial flows, where environmental NGOs in the Global South employ more people with less money while groups in the Global North handle more money with fewer employees. This disparity is also indicative of a global division of labor where Northern environmental NGOs act as donors or coordinators for large projects, while Southern organizations are subcontracted for implementation.

"The findings give us an indication of how feasible it is for NGOs to advocate and implement their agendas in practice. Seeing where the disparities and limitations are in different regions can help us better understand observed differences in environmental policies and politics," says co-author Stefan Partelow from the Leibniz Centre for Tropical Marine Research in Germany.

Credit: 
McGill University

'Distance' from the brightest stars is key to preserving primordial discs

image: This image shows the sparkling centerpiece of Hubble's 25th anniversary tribute. Westerlund 2 is a giant cluster of about 3000 stars located 20 000 light-years away in the constellation Carina.

Hubble's near-infrared imaging camera pierces through the dusty veil enshrouding the stellar nursery, giving astronomers a clear view of the dense concentration of stars in the central cluster.

Image: 
NASA, ESA, the Hubble Heritage Team (STScI/AURA), A. Nota (ESA/STScI), and the Westerlund 2 Science Team

The NASA/ESA Hubble Space Telescope was used to conduct a three-year study of the crowded, massive and young star cluster Westerlund 2. The research found that the material encircling stars near the cluster's centre is mysteriously devoid of the large, dense clouds of dust that would be expected to become planets in a few million years. Their absence is caused by the cluster's most massive and brightest stars, which erode and disperse the discs of gas and dust around neighbouring stars. This is the first time that astronomers have analysed an extremely dense star cluster to study which environments are favourable to planet formation.

This time-domain study from 2016 to 2019 sought to investigate the properties of stars during their early evolutionary phases and to trace the evolution of their circumstellar environments [1]. Such studies had previously been confined to the nearest, low-density, star-forming regions. Astronomers have now used the Hubble Space Telescope to extend this research to the centre of one of the few young massive clusters in the Milky Way, Westerlund 2, for the first time.

Astronomers have now found that planets have a tough time forming in this central region of the cluster. The observations also reveal that stars on the cluster's periphery do have immense planet-forming dust clouds embedded in their discs. The researchers suggest that the difference is largely a matter of location. The most massive and brightest stars in the cluster congregate in the core. Westerlund 2 contains at least 37 extremely massive stars, some weighing up to 100 solar masses. Their blistering ultraviolet radiation and hurricane-like stellar winds act like blowtorches, eroding the discs around neighbouring stars and dispersing the giant dust clouds.

"Basically, if you have monster stars, their energy is going to alter the properties of the discs," explained lead researcher Elena Sabbi, of the Space Telescope Science Institute in Baltimore, USA. "You may still have a disc, but the stars change the composition of the dust in the discs, so it's harder to create stable structures that will eventually lead to planets. We think the dust either evaporates away in 1 million years, or it changes in composition and size so dramatically that planets don't have the building blocks to form."

Westerlund 2 is a unique laboratory in which to study stellar evolutionary processes because it's relatively nearby, is quite young, and contains a rich stellar population. The cluster resides in a stellar breeding ground known as Gum 29, located roughly 14 000 light-years away in the constellation of Carina (The Ship's Keel). The stellar nursery is difficult to observe because it is surrounded by dust, but Hubble's Wide Field Camera 3 can peer through the dusty veil in near-infrared light, giving astronomers a clear view of the cluster. Hubble's sharp vision was used to resolve and study the dense concentration of stars in the central cluster.

"With an age of less than about two million years, Westerlund 2 harbours some of the most massive, and hottest, young stars in the Milky Way," said team member Danny Lennon of the Instituto de Astrofísica de Canarias and the Universidad de La Laguna. "The ambient environment of this cluster is therefore constantly bombarded by strong stellar winds and ultraviolet radiation from these giants that have masses of up to 100 times that of the Sun."

Sabbi and her team found that of the nearly 5000 stars in Westerlund 2 with masses between 0.1 and 5 times the Sun's mass, 1500 show dramatic fluctuations in their luminosity, commonly attributed to the presence of large dusty structures and planetesimals. Orbiting material would temporarily block some of the starlight, causing fluctuations in brightness. However, Hubble detected this signature of dust particles only around stars outside the central region; no such dips in brightness were seen for stars residing within four light-years of the centre.
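As a rough illustration of why such orbiting material produces measurable dips, the fractional dimming from an opaque clump crossing the stellar disc scales with the ratio of the projected areas. The snippet below is a back-of-the-envelope geometric estimate, not the team's photometric analysis.

```python
def dip_depth(r_clump, r_star):
    """Fractional drop in starlight when an opaque clump of radius r_clump
    transits a star of radius r_star (simple projected-area estimate)."""
    return (r_clump / r_star) ** 2

# A clump a tenth of the stellar radius dims the star by about 1 percent;
# one a third of the radius produces a roughly 10 percent dip.
print(dip_depth(0.1, 1.0), dip_depth(1.0 / 3.0, 1.0))
```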

"We think they are planetesimals or structures in formation," Sabbi explained. "These could be the seeds that eventually lead to planets in more evolved systems. These are the systems we don't see close to very massive stars. We see them only in systems outside the centre."

Thanks to Hubble, astronomers can now see how stars are accreting in environments that are like the early Universe, where clusters were dominated by monster stars. So far, the best known nearby stellar environment that contains massive stars is the starbirth region in the Orion Nebula. However, Westerlund 2 is a richer target because of its larger stellar population.

"Westerlund 2 gives us much better statistics on how mass affects the evolution of stars, how rapidly they evolve, and we see the evolution of stellar discs and the importance of stellar feedback in modifying the properties of these systems," said Sabbi. "We can use all of this information to inform models of planet formation and stellar evolution."

This cluster will also be an excellent target for follow-up observations with the upcoming NASA/ESA/CSA James Webb Space Telescope, an infrared observatory. Hubble has helped astronomers identify the stars that have possible planetary structures. With the Webb telescope, researchers will be able to study which discs around stars are not accreting material and which discs still have material that could build up into planets. Webb will also study the chemistry of the discs in different evolutionary phases and watch how they change, to help astronomers determine what role the environment plays in their evolution.

"A major conclusion of this work is that the powerful ultraviolet radiation of massive stars alters the discs around neighbouring stars," said Lennon. "If this is confirmed with measurements by the James Webb Space Telescope, this result may also explain why planetary systems are rare in old massive globular clusters."

Credit: 
ESA/Hubble Information Centre

A simple method to print planar microstructures of polysiloxane

image: Concept of embedded ink writing (EIW). A polysiloxane ink is printed by a direct ink writing (DIW) 3D printer in Newtonian fluids as embedding media. The surrounding liquid media allow the inks to maintain larger contact angles (>100°) on the substrates. Inks for EIW can be functionalized by suspending functional microparticles (e.g., thermochromic leuco dye microparticles).

Image: 
SUTD

Polysiloxane is an elastic polymer that is widely used in fluidics, optics, and biomedical engineering. Because it can be cast and cured, it is well suited to microfabrication.

Soft lithography is the standard technique used in academic research laboratories to produce small-scale polysiloxane structures.

Recent advances in digital fabrication, in particular 3D printing, have enabled direct patterning of polysiloxane, albeit with strict requirements on the properties of the printing inks. Suitable inks are usually highly viscous and fast-curing. For 3D printing, polysiloxane resins must exhibit a yield stress or be photocurable so that they retain the printed shape.

The low viscosity of addition-curing polysiloxanes makes them incompatible with printing by direct ink writing (DIW) 3D printers. While the low viscosity of resins such as Sylgard 184 facilitates easy extrusion through the nozzle, reflow of the patterned resin can compromise print fidelity.

Researchers from Singapore University of Technology and Design's (SUTD) Soft Fluidics Lab developed a simple method to fabricate reproducible planar microstructures consisting of polysiloxane using commercially available liquid polysiloxane resins without changing their properties.

In this newly developed approach, curable liquid polysiloxane with a viscosity in the range of 1-100 Pa·s was dispensed into a liquid immiscible with the resin, such as methanol, ethanol, or isopropanol. The contact angle of the dispensed polysiloxane on the substrate increased from 20° in air to 100° in the alcohols. The increased contact angle allowed the patterned polysiloxane to maintain its structure until curing, after which the embedding liquid was readily removed by evaporation. The method was termed embedded ink writing (EIW) (refer to image).
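The jump in contact angle can be rationalised with Young's relation, which balances the interfacial tensions at the three-phase contact line. The sketch below is purely illustrative: the interfacial tension values are hypothetical, chosen only to show how swapping air for an alcohol as the surrounding medium can shift the equilibrium angle from roughly 20° towards 100°.

```python
import numpy as np

def young_contact_angle(gamma_s_medium, gamma_s_ink, gamma_ink_medium):
    """Equilibrium contact angle (degrees) from Young's relation:
    cos(theta) = (gamma_substrate/medium - gamma_substrate/ink) / gamma_ink/medium."""
    cos_theta = np.clip((gamma_s_medium - gamma_s_ink) / gamma_ink_medium, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

# Hypothetical interfacial tensions in mN/m, not measured values from the paper:
print(young_contact_angle(40.0, 21.0, 20.0))  # ink dispensed in air: ~18 degrees, spreads
print(young_contact_angle(2.0, 3.0, 6.0))     # ink dispensed in alcohol: ~100 degrees, beads up
```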

"With EIW, polysiloxane inks can be patterned on different soft and rigid substrates without compromising the adhesion of the printed polysiloxane with the substrate," explained lead author Dr. Rahul Karyappa from SUTD.

"The presence of embedding media did not hamper the bonding of the polysiloxane filaments in both lateral and vertical arrangements, allowing this technology to be effective especially in fabricating flexible devices and microfluidic devices using commercially available PDMS resin," added principal investigator, Assistant Professor Michinao Hashimoto from SUTD.

Credit: 
Singapore University of Technology and Design

Exploring the use of 'stretchable' words in social media

image: The tree of laughter. This spelling tree for stretched versions of the word 'ha' shows many of the different ways these words get spelled as they get stretched. The patterns of the tree represent the spellings of the words, with the initial 'h' at the root, and the following letters branching right for an 'a' and left for an 'h'. Thicker paths represent more dominant patterns, with many words stopping at an internal node after a few branchings. A few of the longer patterns reaching a terminal node are annotated with stars.
The inset plot shows how frequent different stretched versions of 'ha' are based on how long they are stretched. A few points are annotated with example stretched versions of that length, but the point represents all stretched versions of that length. Points for an even number of characters tend to be higher because of the tendency to perfectly alternate 'h' and 'a' as in 'hahaha...'.

Image: 
Gray et al, 2020

An investigation of Twitter messages reveals new insights and tools for studying how people use stretched words, such as "duuuuude," "heyyyyy," or "noooooooo." Tyler Gray and colleagues at the University of Vermont in Burlington present these findings in the open-access journal PLOS ONE on May 27, 2020.

In spoken and written language, stretched words can modify the meaning of a word. For instance, "suuuuure" can imply sarcasm, while "yeeessss" may indicate excitement. Stretched words are rare in formal writing, but the rise of social media has opened up new opportunities to study them.

Gray and colleagues have now completed the most comprehensive study to date of "stretchable" words in social media. They developed a new, more thorough strategy for identifying stretched words in tweets and used it to analyze a randomly selected dataset of about 10 percent of all tweets generated between September 2008 and December 2016--totaling about 100 billion tweets.

The researchers identified thousands of "stretchable" words in the tweets, including "ha" (e.g., "hahaha" or "haaahaha"), "awesome" (e.g., "awesssssommmmmeeeeee") and "goal" (e.g., "ggggoooooaaaaallllll").

They also identified two key ways of measuring the characteristics of stretchable words: balance and stretch. Balance refers to the degree to which different letters tend to be repeated. For instance, "ha" has a high degree of balance because when it is stretched, the "h" and the "a" tend to be repeated just about equally. "Goal" is less balanced, with "o" repeated more than any other letter in the word.

Stretch refers to how long a word tends to be stretched. For instance, short words or sounds like "ha" have a high degree of stretch because people often repeat them many times (e.g., "hahahahahahahaha"). Meanwhile, regular words like "infinity" have lower stretch, often with just one letter repeated: "infinityyyy."
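A toy version of these two measures can be sketched in a few lines of Python. This is a simplified illustration rather than the authors' published method, and it sidesteps two-letter kernels such as "ha", which need a more careful reduction.

```python
import re
from collections import Counter

def kernel(word):
    # Collapse runs of a repeated character: "nooooooo" -> "no", "duuuuude" -> "dude".
    return re.sub(r"(.)\1+", r"\1", word)

def stretch(word):
    # How many extra characters the stretching adds beyond the kernel.
    return len(word) - len(kernel(word))

def balance(word):
    # Share of the stretching carried by each kernel letter (uniform = well balanced).
    k = kernel(word)
    counts = Counter(word)
    extra = {c: counts[c] - k.count(c) for c in dict.fromkeys(k)}
    total = sum(extra.values()) or 1
    return {c: n / total for c, n in extra.items()}

print(kernel("goooooal"), stretch("goooooal"))   # goal 4
print(balance("goooooal"))                       # stretching falls entirely on 'o'
```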

For this analysis, the researchers developed various tools and methods that could be used in future research on stretchable words, such as investigations of mistypings and misspellings. The tools could also be applied to improve natural language processing, search engines, and spam filters.

The authors add: "We were able to comprehensively collect and count stretched words like 'gooooooaaaalll' and 'hahahaha', and map them across the two dimensions of overall stretchiness and balance of stretch, while developing new tools that will also aid in their continued linguistic study, and in other areas, such as language processing, augmenting dictionaries, improving search engines, analyzing the construction of sequences, and more."

Credit: 
PLOS