Tech

New coronavirus test developed

image: Researchers Stephan Riesenberg (left) and Lukas Bokelmann (right) in the lab at the Max Planck Institute for Evolutionary Anthropology.

Image: 
MPI f. Evolutionary Anthropology

Quantitative real-time polymerase chain reaction (qPCR) is the most widely used diagnostic method to detect RNA viruses such as SARS-CoV-2. However, it requires expensive laboratory equipment, and global shortages of reagents for RNA purification have increased the need to find simple but reliable alternatives. One alternative to the qPCR technology is RT-LAMP (reverse transcription loop-mediated isothermal amplification). This test amplifies the desired target sequences of the virus at a constant temperature, using minimal equipment compared to qPCR. In 2020, it was adapted to the detection of SARS-CoV-2. It was also shown that instead of a swab, which many people find unpleasant, it can be performed on gargle lavage samples.

First author Lukas Bokelmann and colleagues have now developed an improved colorimetric RT-LAMP assay, called Cap-iLAMP (capture and improved loop-mediated isothermal amplification), which extracts and concentrates viral RNA from a pool of gargle lavage samples. After a short incubation, the test result - orange/red for negative, bright yellow for positive - can be interpreted visually or by using a freely available smartphone app.
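The colorimetric readout lends itself to simple automation. As a rough illustration only (not the team's actual app), the following Python sketch classifies a cropped photo of a reaction tube by its average hue; the hue threshold and file name are assumptions chosen purely for demonstration.

```python
# Illustrative sketch: classify a colorimetric RT-LAMP result by average hue.
# The hue cut-off and image file are assumptions, not the published app's logic.
import colorsys
from PIL import Image

def classify_tube(image_path: str, yellow_hue_cutoff: float = 0.11) -> str:
    img = Image.open(image_path).convert("RGB").resize((50, 50))
    hues = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[0] for r, g, b in img.getdata()]
    mean_hue = sum(hues) / len(hues)
    # Red/orange sits near hue 0.00-0.08 on the HSV scale, yellow near 0.12-0.17.
    return "positive (yellow)" if mean_hue >= yellow_hue_cutoff else "negative (orange/red)"

print(classify_tube("tube_photo.jpg"))  # hypothetical cropped photo of one tube
```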

The improved testing method outperforms previous similar methods. "Cap-iLAMP drastically reduces false positives and single infected samples can be detected in a pool among 25 uninfected samples, thus reducing the technical cost per test to only about 1 Euro per individual", says senior author Stephan Riesenberg, a researcher at the Max Planck Institute for Evolutionary Anthropology. "Our method overcomes problems associated with standard RT-LAMP and could also be applied to numerous other pathogens."

Credit: 
Max Planck Institute for Evolutionary Anthropology

Uncovering hidden forever chemicals

Using a new method to quantify and identify PFAS compounds, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) found large quantities of previously undetectable compounds from this family of chemicals in six watersheds on Cape Cod. Exposures to some PFAS, widely used for their ability to repel heat, water, and oil, are linked to a range of health risks including cancer, immune suppression, diabetes, and low infant birth weight.

The new testing method revealed large quantities of previously undetected PFAS from fire-retardant foams and other unknown sources. Total concentrations of PFAS present in these watersheds were above state maximum contaminant levels (MCLs) for drinking water safety.

"We developed a method to fully capture and characterize all PFAS from fire-retardant foams, which are a major source of PFAS to downstream drinking water and ecosystems, but we also found large amounts of unidentified PFAS that couldn't have originated from these foams," said Bridger Ruyle, a graduate student at SEAS and first author of the study. "Traditional testing methods are completely missing these unknown PFAS."

The research will be published in Environmental Science & Technology.

PFAS -- per- and polyfluoroalkyl substances -- are present in products ranging from fire retardant foams to non-stick pans. Nicknamed "forever chemicals" due to their long lifespan, PFAS have been building up in the environment since they were first used in the 1950s.

Despite the associated health risks, there are no legally enforceable federal limits for PFAS chemicals in drinking water. The Environmental Protection Agency's provisional health guidelines for public water supplies only cover PFOS and PFOA, two common types of PFAS. Massachusetts, along with a few other states, has gone further by including six PFAS in its new MCLs for drinking water. But there are thousands of PFAS chemical structures known to exist, several hundred of which have already been detected in the environment.

"We're simply not testing for most PFAS compounds, so we have no idea what our total exposure is to these chemicals and health data associated with such exposures are still lacking," said Elsie Sunderland, the Gordon McKay Professor of Environmental Chemistry at SEAS and senior author of the paper.

The standard testing methods used by the EPA and state regulatory agencies only test for 25 or fewer known compounds. The problem is that the overwhelming majority of PFAS compounds are proprietary, and regulatory agencies can't test for compounds they don't know exist.

The new method developed by Sunderland and her team can overcome that barrier and account for all PFAS in a sample.
CSI: PFAS

PFAS are made by combining carbon and fluorine atoms to form one of the strongest bonds in organic chemistry. Fluorine is one of the most abundant elements on Earth, but naturally occurring organic fluorine is exceedingly rare -- produced only by a few poisonous plants in the Amazon and Australia. Therefore, any amount of organofluorine detected in the environment is sure to be human-made.

PFAS compounds found in the environment come in two forms: a precursor form and a terminal form. Most of the monitored PFAS compounds, including PFOS and PFOA, are terminal compounds, meaning they will not degrade under normal environmental conditions. But precursor compounds, which often make up the majority of PFAS chemicals in a sample, can be transformed through biological or environmental processes into terminal forms. So, while the EPA or state agencies may monitor PFAS concentrations, they still are not detecting much of the huge pool of PFAS precursors.

That's where this new method comes in.

The researchers first measure all the organofluorine in a sample. Then, using another technique, they oxidize the precursors in that sample and transform them into their terminal forms, which they can then measure. From there, the team developed a method of statistical analysis to reconstruct the original precursors, fingerprint their manufacturing origin, and measure their concentration within the sample.
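To make the reconstruction step concrete, here is a deliberately simplified sketch of the kind of inference involved. The oxidation-yield matrix and measured increases are invented numbers, and the study itself uses a more sophisticated statistical treatment; the sketch only illustrates how precursor-class concentrations can be inferred from what the oxidation step produces.

```python
# Simplified illustration of reconstructing precursor contributions from an
# oxidation ("TOP"-style) assay. All numbers are hypothetical.
import numpy as np
from scipy.optimize import nnls

# Rows: terminal PFAS measured after oxidation; columns: hypothetical precursor classes.
# Entry (i, j) = moles of terminal compound i produced per mole of precursor class j.
yields = np.array([
    [0.60, 0.10, 0.05],
    [0.25, 0.55, 0.10],
    [0.05, 0.20, 0.70],
])

# Measured increase in each terminal compound after oxidation (nmol/L), hypothetical.
observed_increase = np.array([12.0, 9.5, 4.0])

# Non-negative least squares recovers the precursor-class concentrations that best
# explain the observed increases.
precursor_conc, residual = nnls(yields, observed_increase)
for name, conc in zip(["precursor class A", "class B", "class C"], precursor_conc):
    print(f"{name}: {conc:.1f} nmol/L")
print(f"residual norm: {residual:.2f}")
```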

"We're essentially doing chemical forensics," said Sunderland.

Using this method, Sunderland and her team tested six watersheds on Cape Cod as part of a collaboration with the United States Geological Survey and a research center funded by the National Institutes of Health and led by the University of Rhode Island that focuses on the sources, transport, exposure and effects of PFAS.

The team focused on identifying PFAS from the use of fire-retardant foams. These foams, which are used extensively at military bases, civilian airports, and local fire departments, are a major source of PFAS and have contaminated hundreds of public water supplies across the US.

The research team applied their forensic methods to samples collected between August 2017 and July 2019 from the Childs, Quashnet, Mill Creek, Marstons Mills, Mashpee and Santuit watersheds on Cape Cod. During the collection process, the team members had to be careful what they wore, since waterproof gear is treated with PFAS. The team ended up in decades-old waders to prevent contamination.

The sampling sites in the Childs, Quashnet and Mill Creek watersheds are downstream from a source of PFAS from fire retardant foams -- the Quashnet and Childs from The Joint Base Cape Cod military facility and Mill Creek from Barnstable County Fire Training Academy.

Current tests can only identify about 50 percent of PFAS from historical foams -- products that were discontinued in 2001 due to high levels of PFOS and PFOA -- and less than 1 percent of PFAS from modern foams.

Using their new method, Sunderland and her team were able to identify 100 percent of all PFAS compounds in the types of fire-retardant foams that were used for decades at Joint Base Cape Cod and Barnstable County Fire Training Academy.

"Our testing method was able to find these missing compounds that have been used by the chemical industry for more than 40 years," said Sunderland.

The tests also revealed huge quantities of PFAS from unknown sources.

"Our accounting of PFAS from firefighting foams could not explain 37 to 77 percent of the organofluorine that we measured," said Ruyle. "This has huge ramifications for not only our understanding of human exposure but also for how much PFAS is discharging into the ocean and accumulating in marine life."

To follow up on these findings, Ruyle is currently working with NIH to identify some of the health impacts of PFAS from contemporary firefighting foams using toxicology studies. Sunderland's team is continuing to study the unknown PFAS to better identify their sources and potential for accumulation in abundant marine food webs on Cape Cod.

Credit: 
Harvard John A. Paulson School of Engineering and Applied Sciences

Taking 2D materials for a spin

image: Schematic diagram of the MoS2 transistor in an ESR sample tube.

Image: 
University of Tsukuba

Tsukuba, Japan and Warsaw, Poland - Scientists from the University of Tsukuba, together with a colleague from the Institute of High Pressure Physics in Warsaw, detected and mapped the electronic spins moving in a working transistor made of molybdenum disulfide. This research may lead to much faster computers that take advantage of the natural magnetism of electrons, as opposed to just their charge.

Spintronics is a new area of condensed matter physics that attempts to use the intrinsic magnetic moment of electrons, called "spins," to perform calculations. This would be a major advance over all existing electronics that rely solely on the electron charge. However, it is difficult to detect these spins, and there are many unknowns regarding materials that can support the transport of spin-polarized electrons.

Now, an international research team led by the Division of Materials Science at the University of Tsukuba has successfully used electron spin resonance (ESR) to monitor the number and location of unpaired spins coursing through a molybdenum disulfide transistor. ESR uses the same physical principle as the MRI machines that create medical images. The spins are subject to a very strong magnetic field, which creates an energy difference between electrons with spins aligned and anti-aligned with the field. The absorbance of photons that match this energy gap can be measured to determine the presence of unpaired electron spins.
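The underlying resonance condition is standard physics: a microwave photon is absorbed when its energy matches the Zeeman splitting between the two spin orientations, h·ν = g·μ_B·B, where g is the electron g-factor (close to 2 for a free electron), μ_B is the Bohr magneton and B is the applied field. For a typical X-band spectrometer operating near 9.5 GHz (a representative value, not necessarily the frequency used in this study), resonance for g ≈ 2 occurs at a field of roughly 0.34 tesla.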

The experiment required the sample to be cooled to just four degrees above absolute zero, and the transistor to be in operation while the spins are being measured. "The ESR signals were measured simultaneously with the drain and gate currents," corresponding author Professor Kazuhiro Marumoto says. "Theoretical calculations further identified the origins of the spins," coauthor Professor Małgorzata Wierzbowska says. Molybdenum disulfide was used because its atoms naturally form a nearly flat two-dimensional structure. The molybdenum atoms form a plane with a layer of sulfide ions above and below.

The team found that charging the system with the additional electrons in a process called n-type doping was important for creating the spins. "In contrast with previous work on other 2D materials, the n-type doping allowed us to achieve better control of the electronic spins," Professors Marumoto and Wierzbowska explain. The scientists believe that molybdenum disulfide will prove to be an important testbed for spintronic devices as the technology advances towards future consumer products.

Credit: 
University of Tsukuba

Key task in computer vision and graphics gets a boost

image: Application to human body data. The leftmost shape is deformed so that it overlaps the target shape. The rightmost shape is the result of applying the previous method reported by the author. The four shapes indicated by "BCPD++" are the results of the proposed method; the approximation for speeding up is improved from left to right. Despite the approximate computation, the third and fourth shapes among the four are roughly the same as that obtained by the previous method. Although not shown in the figure, the runtime is noticeably reduced compared to the previous method.

Image: 
Kanazawa University

Kanazawa, Japan - Non-rigid point set registration is the process of finding a spatial transformation that aligns two shapes represented as a set of data points. It has extensive applications in areas such as autonomous driving, medical imaging, and robotic manipulation. Now, a method has been developed to speed up this procedure.

In a study published in IEEE Transactions on Pattern Analysis and Machine Intelligence, a researcher from Kanazawa University has demonstrated a technique that reduces the computing time for non-rigid point set registration relative to other approaches.

Previous methods to accelerate this process have been computationally efficient only for shapes described by small point sets (containing fewer than 100,000 points). Consequently, the use of such approaches in applications has been limited. This latest research aimed to address this drawback.

The proposed method consists of three steps. First, the number of points in each point set is reduced through a procedure called downsampling. Second, non-rigid point set registration is applied to the downsampled point sets. And third, shape deformation vectors--mathematical objects that define the desired spatial transformation--are estimated for the points removed during downsampling.

"The downsampled point sets are registered by applying an algorithm known as Bayesian coherent point drift," explains author Osamu Hirose. "The deformation vectors corresponding to the removed points are then interpolated using a technique called Gaussian process regression."

The researcher carried out a series of experiments to compare the registration performance of their method with that of other approaches. They considered a wide variety of shapes, some described by small point sets and others by large point sets (containing from 100,000 to more than 10 million points). These shapes included, for example, those of a dragon, a monkey, and a human.

The results demonstrate that the proposed technique is efficient even for point sets with more than 10 million points, shown in Fig. 2. They also show that the computing times of this method are noticeably shorter than those of a state-of-the-art approach for point sets with more than a million points.

"Although the new technique provides accelerated registration, it is relatively sensitive to artificial disturbances in small data sets," says Hirose. "Such sensitivity indicates that the approach is best suited for large point sets, as opposed to small, noisy ones."

Given that non-rigid point set registration has a wide range of applications, the method established in this study could have far-reaching implications. The source code of the proposed method is distributed by the author at https://github.com/ohirose/bcpd.

Credit: 
Kanazawa University

Light in concert with force reveals how materials become harder when illuminated

image: Schematic illustration of how light affects the nucleation (birth) of dislocations (slippages of crystal planes) and dislocation motion, when the sample is also placed under mechanical loading. The Nagoya University/Technical University of Darmstadt research collaboration has found clear evidence that propagation of dislocations in semiconductors is suppressed by light. The likely cause is interaction between dislocations and electrons and holes excited by the light.

Image: 
Atsutomo Nakamura

Semiconductor materials play an indispensable role in our modern information-oriented society. For reliable performance of semiconductor devices, these materials need to have superior mechanical properties: they must be strong as well as resistant to fracture, despite being rich in nanoscale structures.

Recently, it has become increasingly clear that the optical environment affects the structural strength of semiconductor materials, and the effect can be much more significant than expected, especially in light-sensitive semiconductors. This matters because technological constraints and fabrication costs mean that many semiconductors can only be mass-produced in very small, thin sizes, while laboratory testing of their strength has generally been performed on large samples. Given the recent explosion in nanoscale applications, there is an urgent need to reappraise the strength of semiconductor materials under controlled illumination conditions and on thin samples.

To this end, Professor Atsutomo Nakamura's group at Nagoya University, Japan, and Dr. Xufei Fang's group at the Technical University of Darmstadt have developed a technique for quantitatively studying the effect of light on nanoscale mechanical properties of thin wafers of semiconductors or any other crystalline material. They call it a "photoindentation" method. Essentially, a tiny, pointy probe indents the material while it is illuminated by light under controlled conditions, and the depth and rate at which the probe indents the surface can be measured. The probe creates dislocations - slippages of crystal planes - near the surface, and using a transmission electron microscope the researchers observe the effect of light at a range of wavelengths on dislocation nucleation (the birth of new dislocations) and dislocation mobility (the dislocations' gliding or sliding away from the point where they were created). Measuring nucleation and mobility separately, for the first time, is one of the novelties of the photoindentation technique.

The researchers have discovered that while light has a marginal effect on the generation of dislocations under mechanical loading, it has a much stronger effect on the motion of dislocations. When a dislocation occurs, it is energetically favorable for it to expand and join up (nucleate) with others, and the imperfection gets bigger. Illumination by light does not affect this: the electrons and holes excited in the semiconductor by the light (the "photo-excited carriers") do not affect the strain energy of the dislocation, and it is this energy that determines the "line tension" of the dislocation that controls the nucleation process.

On the other hand, dislocations can also move in a so-called "glide motion", during which photo-excited carriers are dragged by dislocations via electrostatic interaction. The effect of photo-excited carriers on this dislocation motion is much more pronounced: if enough carriers are produced, the material becomes much stronger.

This effect is strikingly demonstrated when the same experiment is carried out in complete darkness and then under illumination with light at a wavelength that matches the semiconductor band gap (which produces an increased number of photo-excited carriers). When indented, any solid material initially undergoes "plastic deformation" - changing shape without springing back, somewhat like putty - until the load becomes too great, upon which it cracks. The Nagoya University research group demonstrated that the inorganic semiconductor zinc sulfide (ZnS) in total darkness behaves somewhat like putty, deforming by a huge 45% under shear strain without cracking or falling apart. However, when illuminated at the correct wavelength, it becomes quite hard. At other wavelengths it hardens, but not to the same degree.
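For context, the wavelength that "matches the band gap" follows directly from λ = hc/E_g. Taking roughly 3.6 eV as the band gap of ZnS, a commonly quoted value used here only as an assumption, gives λ ≈ 1240 eV·nm / 3.6 eV ≈ 345 nm, in the near-ultraviolet; light of much longer wavelength carries too little energy per photon to excite carriers across the gap efficiently.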

The new findings demonstrate that purely plastic deformation without crack formation in semiconductor materials occurs at the nanoscale. With regards to mechanical behaviour, these semiconductors therefore resemble metallic materials. This newly established, robust experimental protocol makes it possible to evaluate the effect of light on the strength of even non-semiconducting materials that are very thin. Professor Nakamura notes: "One particularly important aspect is that non-semiconductors can exhibit semiconducting properties near the surface, due to oxidation, for instance, and since the starting point of deformation or fracture is often the surface, it is of great significance to establish a method for accurately measuring the strength of materials under controlled illumination conditions at the very surface, on a nanoscale."

The hardening effect that electron-hole pairs freed by light illumination have on material strength - by suppressing the propagation of dislocations, particularly near the surface - is part of a paradigm shift in the science of material strength. Conventionally, when considering the strength of a material, the atomic arrangement was the smallest unit. In other words, there was a premise that the strength of the material could be understood from the atomic arrangement and elasticity theory. However, recent studies have reported that the strength characteristics of materials change significantly due to external influences such as light and an electric field. Therefore, Professor Nakamura notes, "it is becoming more and more accepted that other viewpoints must be added to the theory of material strength which include the motion of electrons and holes that are smaller than atoms."

"This study reaffirms the quantum-level effect on the strength of such materials. In this respect, it can be said that this research has achieved one milestone in the paradigm shift in the field of material strength that is currently occurring."

Dr. Xufei Fang adds: "Now that the creation of devices on the true nanoscale is becoming a reality, the impact of light on the structural strength of various inorganic semiconductors is an issue to be considered."

Credit: 
Nagoya University

Compression or strain - the material always expands

image: Whether strained or compressed, the new material always expands.

Image: 
Thomas Heine et al.

If you stretch an elastic band, it becomes thinner - a physical behavior that applies to most "common" materials. Since the 20th century, an opposite behavior has been known in materials research: so-called auxetic materials (from the ancient Greek auxetos, meaning 'stretchable') expand in the direction orthogonal to the strain. Likewise, they shrink when they are compressed. Scientifically, they are characterized by a negative Poisson's ratio. Probably the best-known and oldest application of unusual Poisson's ratios is the bottle cork, which has a Poisson's ratio of zero. This allows the cork to be pushed into the narrower neck of the bottle.
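In quantitative terms, the Poisson's ratio is defined as ν = -ε_trans / ε_axial, the negative ratio of transverse to axial strain. Stretching a material with ν > 0 makes it thinner (the rubber band), ν = 0 leaves the cross-section unchanged (the cork), and ν < 0, the auxetic case, makes it wider.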

Due to their special properties, auxetic materials allow for completely new functionalities and design solutions for a variety of innovative products with adjustable functional properties, including applications in medical technology or in the development of protective equipment such as bicycle helmets or safety jackets.

Thomas Heine, Professor of Theoretical Chemistry at TU Dresden, and his team have now discovered a previously unknown phenomenon. Starting from borophene, an atomically thin configuration of the element boron, they found that adding palladium yields a stable form with the chemical composition PdB4. Computational modelling shows that this material behaves like an auxetic material under strain but expands like an ordinary material under compression. In other words, whether it is strained or compressed, the material always expands.

"In addition to thorough characterization in terms of stability, mechanical and electronic properties of the material, we have identified the origin of this half-auxetic character and believe that this mechanism can be used as a design concept for new half-auxetic materials," explains Prof. Heine, "These novel materials could lead to innovative applications in nanotechnology, for example in sensing or magneto-optics. A transfer to macroscopic materials is equally conceivable."

Credit: 
Technische Universität Dresden

Charting our changing cities

image: Left (top-bottom): SMU Associate Professor of Science, Technology and Society Winston Chow; SMU Associate Professor of Humanities Orlando Woods.

Right: SMU President and Lee Kong Chian Chair Professor of Social Sciences Lily Kong.

The global outbreak has resulted in remarkable shifts in city living. Researchers can take the lead in kickstarting conversations on public policies for cities of the future, say SMU professors.

Image: 
Singapore Management University

SMU Office of Research & Tech Transfer - For most of human history, populations across the world lived in low-density, rural settings. Over the past few centuries, however, this changed dramatically with the trend of urbanisation. Today, more than four billion people live in urban settings worldwide; by 2050, about two thirds of the world's population are expected to live in cities.

Despite their rapid growth, cities do not spring up fully formed, but are shaped by evolving human constructs including government policy, legal frameworks and emerging technologies. It is precisely because much of human life is now led in cities that it is essential to examine these constructs and their implications on city living, said SMU President Professor Lily Kong at the New Cities in the New Normal workshop on 22 January 2021.

The workshop was coordinated by the SMU Cities Research Cluster (CRC), which comprises the School of Social Sciences (SOSS) faculty Associate Professor Winston Chow, Assistant Professor Ishani Mukherjee, Associate Professor Orlando Woods and Assistant Professor Fiona Williamson.

Building better cities

Cities today are guilty of contributing to many challenges humanity collectively faces, Professor Kong said in her opening remarks, citing the example of climate change and the corresponding need for sustainable living. This concept of sustainable living goes "far beyond" environmental issues, she qualified. It touches on social issues too - for example, ageing populations and how to sustain an elderly population in an urban environment.

This is where universities come in, providing research and education that can turn interdisciplinary, complex theories into tangible policy actions, Professor Kong explained. Concurring with Professor Kong, Professor Chow pointed out that with the recent pandemic altering countless aspects of city living, it has become even more crucial for university researchers to examine these changes in government policy, legal frameworks and emerging technologies - so as to understand their impact on cities of the future.

Agreeing that the COVID-19 outbreak has reshaped city life, Professor Shenjing He of the University of Hong Kong emphasised the issue of limited physical and social mobility. The latter is an important consideration as it is highly related to quality of life and wellbeing in cities, she said.

Moving through physical and virtual worlds

In her keynote speech, Professor He observed that as the pandemic was curtailing physical mobility - with people's movements largely restricted to their nearby communities and neighbourhoods - it was also reducing social mobility. "It's a public health crisis, but also a huge economic and social crisis... you see unemployment and economic hardship on the rise and it's hitting disadvantaged people the hardest," she said.

Since the concept of mobility also pertains to travel between cities, Professor He brought up the issue of pandemic travel bans and countries working to develop travel bubbles. "This creates a selective mobility only available to certain countries and particular [privileged] groups," she shared. "It actually signifies a more segregated and divided world."

Professor Woods further advanced this dialogue on inequality and equitable access in the next panel discussion on New Cities in the New Normal, which featured Associate Professor Hallam Stevens from NTU Institute of Science and Technology for Humanity (NISTH), and Professor Stephen Cairns, a Programme Director at the Future Cities Laboratory.

Examining the issue from a digital angle, Professor Stevens said that post-pandemic cities have grown to encompass more than the physical spaces they take up. "[They are] intertwined with all of these online spaces that we exist in at the same time. And we spend so much time in these online spaces," he said. As social distancing measures continue, human life and interactions are increasingly playing out in virtual spaces. This underscores a new need to include new kinds of digital rights in discussions of equitable access, Professor Stevens concluded.

Collaboration and communication

When it comes to public discussions on inequality and sustainable living, researchers can play a role in kickstarting much-needed conversations, said Professor Chow, who hosted a second panel session on urban sustainability featuring Dr Olivia Jensen, Lead Scientist for Environment and Climate at the Lloyd's Register Foundation (LRF) Institute for the Public Understanding of Risk; Dr Limin Hee, Director of Research at the Centre for Liveable Cities; and Dr Hui Mien Lee, Vice President for Sustainable Solutions at Mandai Park Development and Wildlife Reserves Singapore.

Adding to the discussion on the role of research in inspiring policy changes, the panellists concurred that much like the sprawling nature of a city, the myriad dimensions of such research would necessitate collaborations with multiple players in the ecosystem. "Academics, agencies, practitioners, developers... in coming up with integrated, systems-level solutions, we all have different roles to play," Dr Hee said.

Apart from establishing partnerships with other organisations, Dr Jensen also emphasised the importance of effective communication in turning academic studies into tangible action. "Who are the people you need to know? What are the barriers to them using it? And how can you, as a researcher, enable that process?" she asked the audience. "If you've done a great piece of work collecting and analysing data, then when it comes to communicating the results be as strategic and analytical as you would be in the research itself," she advised.

As cities continue to evolve, so too should the conversations about sustainability research, and workshops such as these, held virtually in light of the ongoing pandemic, play an important role by bringing more people into the discussion and broadening our understanding.

Credit: 
Singapore Management University

What can stream quality tell us about quality of life?

image: Virginia Tech researchers are using stream quality data to find new insights into the interactions between the health of our natural spaces and human well-being. Photo by Brad Klodowski, Virginia Tech.

Image: 
Virginia Tech

As the source of most of the water we drink and a place where we often go to recreate and enjoy nature, streams represent a crucial point-of-contact between human beings and the environment.

Now researchers in the College of Natural Resources and Environment and the Department of Biological Systems Engineering are using stream quality data to find new insights into the interactions between the health of our natural spaces and human well-being.

Their findings, published in the journal Ecological Indicators, reveal that demographics such as race and population density, as well as health indices such as cancer rates and food insecurity, show strong correlations with water quality across the Commonwealth of Virginia.

"We started off wanting to explore the general, intuitive relationship between human well-being and ecosystem health," explained Paul Angermeier, professor in the Department of Fish and Wildlife Conservation and assistant unit leader of the Virginia Cooperative Fish and Wildlife Research Unit for the U.S. Geological Survey. "Many of us intuit that healthy ecosystems produce benefits that accrue to people, but that outcome isn't well documented in a quantitative way."

To document that relationship, the research team had to break from the environmental quality management processes that too often separate the natural world from human experiences.

"When we consider natural resources, we tend to think about whether we're managing an environment for nonhuman considerations or human ones," said Associate Professor Leigh-Anne Krometis, of the Department of Biological Systems Engineering, which is in both the College of Engineering and the College of Agriculture and Life Sciences. "For instance, at the state level we have a department of environmental quality and a department of health, which both deal with the subject of water quality, but in different ways. What we wanted to see was how those two perspectives converge."

To find correlations across the state, the researchers used two key data sets: water quality measurements provided by the Virginia Department of Environmental Quality and county-level demographics data from the U.S. Census Bureau. They considered 13 indicators of human well-being, four demographic metrics, and two indicators of stream health.
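The core of such an analysis is a county-level correlation. A minimal sketch, assuming the two data sets have already been merged into a single table keyed by county (file and column names here are hypothetical), might look like this:

```python
# Minimal sketch of a county-level correlation analysis between stream-health and
# human well-being indicators. File and column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("county_indicators.csv")  # one row per county
stream_cols = ["benthic_health_index", "fish_contaminant_index"]
wellbeing_cols = ["cancer_rate", "food_insecurity", "overall_mortality", "population_density"]

for s in stream_cols:
    for w in wellbeing_cols:
        rho, p = spearmanr(df[s], df[w], nan_policy="omit")
        print(f"{s} vs {w}: Spearman rho = {rho:.2f} (p = {p:.3f})")
```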

"We had large data sets that we had to organize and process," explained Professor Marc Stern of the Department of Forest Resources and Environmental Conservation. "Our expectations on finding meaningful relationships between stream health and human factors weren't that high. The fact that they showed up so distinctly was a surprise."

The researchers found a strong correlation between ecosystem health and human demographics, particularly along the lines of race. Stream conditions were found to be better in counties with higher percentages of white residents. More polluted streams were correlated with higher overall mortality.

"The term environmental justice is important to bring into our discussion," noted Stern, a senior fellow in the Center for Leadership in Global Sustainability. "These findings relate to the broader issue of systemic prejudices and the reality that our institutions and social systems do not favor marginalized communities. They get caught up in a cycle of being left behind, and while it's not impossible to break that pattern, it's going to take work."

Virginia is a suitable microcosm for revealing such dimensions: its mix of high-density cities, suburban and rural areas, coastal and mountain geographies, and broad socio-economic diversity makes it a useful starting point for broader research into human-environment interactions.

A crucial next step for the researchers is understanding how people are interacting with natural environments.

"We still don't have hard data on how people are interacting with nature," said Angermeier, who, along with Krometis, is an affiliate of Virginia Tech's Global Change Center housed in the Fralin Life Sciences Institute. "For instance, we found that mortality rates for people are correlated with contamination levels in fish. What does that mean? Are people eating contaminated fish, are they merely sharing a polluted water source, or is it something else? A better understanding of the mechanisms by which people are interacting with water will help us draw clearer conclusions about health outcomes."

Credit: 
Virginia Tech

New method facilitates development of antibody-based drugs

In recent years, therapeutic antibodies have transformed the treatment of cancer and autoimmune diseases. Now, researchers at Lund University in Sweden have developed a new, efficient method based on the genetic scissors CRISPR-Cas9 that facilitates antibody development. The discovery is published in Nature Communications.

Antibody drugs are the fastest-growing class of drug, and several therapeutic antibodies are used to treat cancer. They are effective, often have few side effects and harness the body's own immune system, which identifies foreign substances in the body. By binding to a specific target molecule on a cell, the antibody can either activate the immune system or cause the cell to self-destruct.

However, most antibody drugs used today have been developed against an antibody target chosen beforehand. This approach is limited by the knowledge of cancer we have today and restricts the discovery of new medicines to currently known targets.

"Many antibody drugs currently target the same molecule, which is a bit limiting. Antibodies targeting new molecules could give more patients access to effective treatment", says Jenny Mattsson, doctoral student at the Department of Hematology and Transfusion Medicine at Lund University.

Another route - that pharmaceutical companies would like to go down - would be to search for antibodies against cancer cells without being limited to a pre-specified target molecule. In this way, new, unexpected target molecules could be identified. The problem is that this method (so-called "phenotypic antibody development") requires that the target molecule be identified at a later stage, which has so far been technically difficult and time-consuming.

"Using the CRISPR-Cas9 gene scissors, we were able to quickly identify the target molecules for 38 of 39 test antibodies. Although we were certain that the method would be effective, we were surprised that the results would be this precise. With previous methods, it has been difficult to find the target molecule even for a single antibody", says Jenny Mattsson.

The research project is a collaboration between Lund University, BioInvent International and the Foundation for Strategic Research. The researchers' method has already been put into practical use in BioInvent's ongoing research projects.

"We believe the method can help antibody developers and hopefully contribute to the development of new antibody-based drugs in the future", concludes Professor Björn Nilsson, who led the project.

Credit: 
Lund University

Built to last: New copolymer binder to extend the life of lithium ion batteries

image: The BP copolymer offers several advantages that put it miles ahead of the conventional PVDF binder in terms of stability and durability

Image: 
Noriyoshi Matsumi from JAIST

Anyone who has owned a smartphone for over a year is most likely aware that its built-in lithium (Li)-ion battery does not hold as much charge as when the device was new. The degradation of Li-ion batteries is a serious issue that greatly limits the useful life of portable electronic devices, indirectly causing huge amounts of pollution and economic losses. In addition to this, the fact that Li-ion batteries are not very durable is a massive roadblock for the market of electric vehicles and renewable energy harvesting. Considering the severity of these issues, it is no surprise that researchers have been actively seeking ways to improve upon the state-of-the-art designs of Li-ion batteries.

One of the major causes of the drop in capacity over time in Li-ion batteries is the degradation of the widely used graphite anodes--the negative terminals in batteries. The anode, together with the cathode (or the positive terminal) and the electrolyte (or the medium that carries the charge between the two terminals), provides an environment where the electrochemical reactions for the charging and discharging of the battery can take place. However, graphite requires a binder to prevent it from falling apart with use. The most widely adopted binder today, poly(vinylidene fluoride) (PVDF), has a series of drawbacks that render it far from an ideal material.

To tackle these issues, a team of researchers from Japan Advanced Institute of Science and Technology (JAIST) are investigating a new type of binder made from a bis-imino-acenaphthenequinone-paraphenylene (BP) copolymer. Their latest study, published in ACS Applied Energy Materials, was led by Professor Noriyoshi Matsumi and also involved Professor Tatsuo Kaneko, Senior Lecturer Rajashekar Badam, PhD student Agman Gupta, and former postdoctoral fellow Aniruddha Nag.

So, in what ways does the BP copolymer outperform the conventional PVDF binder for graphite anodes? First, the BP binder offers significantly better mechanical stability and adherence to the anode. This comes in part from the so-called π-π interactions between the bis-imino-acenaphthenequinone groups and graphite, and also from the good adherence of the copolymer's ligands to the copper current collector of the battery. Secondly, not only is the BP copolymer much more conductive than PVDF, it also forms a thinner conductive solid electrolyte interface with less resistance. Thirdly, the BP copolymer does not react easily with the electrolyte, which also greatly limits its degradation.

All these advantages combined led to some serious performance improvements, as the researchers demonstrated through experimental measurements. "Whereas a half-cell using PVDF as a binder exhibited only 65% of its original capacity after about 500 charge-discharge cycles, the half-cell using the BP copolymer as a binder showed a capacity retention of 95% after over 1700 such cycles," highlights Prof. Matsumi. The BP copolymer-based half-cells also showed a very high and stable coulombic efficiency, a measure that compares the amount of charge flowing in and out of the cell in a given cycle; this is also indicative of the long-term durability of the battery. Images of the binders taken with a scanning electron microscope before and after cycling revealed that only tiny cracks formed on the BP copolymer, whereas large cracks had already formed on PVDF in less than a third of the total number of cycles.
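The two figures of merit quoted here are simple ratios, sketched below with placeholder numbers rather than data from the paper.

```python
# Capacity retention and coulombic efficiency, the two metrics quoted above.
# The capacities are placeholder values, not measurements from the study.
first_cycle_discharge_mAh_g = 350.0   # discharge capacity on the first cycle
nth_cycle_discharge_mAh_g = 332.5     # discharge capacity on a later cycle
nth_cycle_charge_mAh_g = 333.5        # charge capacity on that same cycle

capacity_retention = nth_cycle_discharge_mAh_g / first_cycle_discharge_mAh_g
coulombic_efficiency = nth_cycle_discharge_mAh_g / nth_cycle_charge_mAh_g

print(f"Capacity retention:   {capacity_retention:.1%}")    # 95.0%
print(f"Coulombic efficiency: {coulombic_efficiency:.1%}")  # 99.7%
```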

The theoretical and experimental findings of this study will pave the way for developing long-lasting Li-ion batteries. In turn, this could have far-reaching economic and environmental consequences, as Prof. Matsumi explains: "The realization of durable batteries will help in the development of more reliable products for long-term use. This will encourage consumers to purchase more expensive battery-based assets like electric vehicles, which will be used for many years." He also remarks that durable batteries would be good news for those relying on artificial organs, such as patients with certain heart diseases. Of course, the general population would also benefit, considering how much smartphones, tablets, and laptops are used and recharged every day.

Further progress in electrode binders will hopefully put us closer to more durable battery-based products and a greener future.

Credit: 
Japan Advanced Institute of Science and Technology

Researchers find AI can predict new atrial fibrillation, stroke risk

DANVILLE, Pa. - A team of scientists from Geisinger and Tempus has found that artificial intelligence can predict the risk of new atrial fibrillation (AF) and AF-related stroke.

Atrial fibrillation is the most common cardiac arrhythmia and is associated with numerous health risks, including stroke and death. The study, published in Circulation, used electrical signals from the heart--measured from a 12-lead electrocardiogram (ECG)--to identify patients who are likely to develop AF, including those at risk for AF-related stroke.

"Each year, over 300 million ECGs are performed in the U.S. to identify cardiac abnormalities within an episode of care. However, these tests cannot generally detect future potential for negative events like atrial fibrillation or stroke," said Joel Dudley, chief scientific officer at Tempus. "This critical work stems from our major investments in cardiology to generate algorithms that make existing cardiology tests, such as ECGs, smarter and capable of predicting future clinical events. Our goal is to enable clinicians to act earlier in the course of disease."

To develop their model, the team of data scientists and medical researchers used 1.6 million ECGs from 430,000 patients over 35 years of patient care at Geisinger. These data were used to train a deep neural network--a specialized class of artificial intelligence--to predict, among patients without a previous history of AF, who would develop AF within 12 months. The neural network performance exceeded that of current clinical models for predicting AF risk. Furthermore, 62% of patients without known AF who experienced an AF-related stroke within three years were identified as high risk by the model before the stroke occurred.
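The release does not describe the network's architecture, but the general approach can be sketched as a small 1-D convolutional network that maps a 12-lead ECG trace to a 12-month AF risk score. Every layer and size below is an assumption for illustration, not the model used in the study.

```python
# Illustrative 1-D CNN for 12-lead ECG risk prediction. Architecture and sizes are
# assumptions for demonstration only; this is not the study's model.
import torch
import torch.nn as nn

class ECGRiskNet(nn.Module):
    def __init__(self, n_leads: int = 12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_leads, 32, kernel_size=15, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=15, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, 1)  # logit for "develops AF within 12 months"

    def forward(self, x):              # x: (batch, leads, samples)
        z = self.features(x).squeeze(-1)
        return torch.sigmoid(self.head(z))

model = ECGRiskNet()
dummy_ecg = torch.randn(4, 12, 5000)   # four ECGs: 12 leads, 10 s sampled at 500 Hz
print(model(dummy_ecg).shape)          # torch.Size([4, 1]) -> risk scores in [0, 1]
```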

"Not only can we now predict who is at risk of developing atrial fibrillation, but this work shows that the high risk prediction precedes many AF-related strokes," said Brandon Fornwalt, M.D., Ph.D., co-senior author and chair of Geisinger's Department of Translational Data Science and Informatics. "With that kind of information, we can change the way these patients are screened and treated, potentially preventing such severe outcomes. This is huge for patients."

Credit: 
Geisinger Health System

Automatic adverse drug reaction extraction from electronic health records

Patients' electronic health records convey crucial information. The application of natural language processing techniques to these records may be an effective means of extracting information that may improve clinical decision making, clinical documentation and billing, disease prediction and the detection of adverse drug reactions. Adverse drug reactions are a major health problem, resulting in hospital re-admissions and even the death of thousands of patients. An automatic detection system can highlight said reactions in a document, summarise them and automatically report them.

In this context, the Basurto University Hospital and the Galdakao Hospital 'were interested in creating a system that would use natural language processing techniques to analyse patient health records in order to automatically identify any adverse effects' explains the engineer Sara Santiso, who also holds a PhD in Computer Science. After the hospitals contacted the IXA group at the UPV/EHU, several researchers started working to build a robust model with which to extract adverse drug reactions from electronic health records written in Spanish, based on clinical text mining.

To this end 'not only have we used techniques based on traditional machine learning algorithms, we have also explored deep learning techniques, reaching the conclusion that these are better able to detect adverse reactions' explains Santiso, one of the authors of the study. Machine learning and deep learning imitate the way the human brain learns, although they use different types of algorithms to do so.
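As a toy illustration of the classical machine-learning end of that spectrum (not the system built in this study, which works on Spanish clinical notes and also uses deep learning), a sentence mentioning a drug and a condition can be classified as describing an adverse reaction or not with a simple bag-of-words model:

```python
# Toy baseline for adverse-drug-reaction detection. Sentences and labels are invented
# English examples, not clinical data, and serve only to show the classification setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "patient developed a severe rash after starting amoxicillin",
    "metformin prescribed for type 2 diabetes",
    "nausea and vomiting attributed to chemotherapy",
    "aspirin continued for secondary prevention of stroke",
]
labels = [1, 0, 1, 0]  # 1 = adverse drug reaction mention, 0 = other drug-disease relation

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(sentences, labels)
print(clf.predict(["developed hives after taking penicillin"]))  # predicted label
```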

Difficulties finding a corpus in Spanish

Santiso underscores the difficulties the team encountered when trying to find a large enough corpus with which to work: 'At first, we started with only a few health records, because they are difficult to obtain due to privacy issues; you have to sign confidentiality agreements in order to work with them' she explains. The research team has found that 'having a larger corpus helps the system learn the examples contained in it more effectively, thereby giving rise to better results'.

Through this study, which was carried out with health records written in Spanish, 'we are contributing to closing the gap between clinical text mining in English and that carried out in other languages, which accounts for less than 5% of all papers published in the field. Indeed, the extraction of clinical information is not yet fully developed; among other things, there is still untapped potential for extracting information from records held by other hospitals and written in other languages', claims the researcher.

Although natural language processing has been of inestimable help in the computer-aided detection of adverse drug reactions, there is still room for improvement: 'To date, systems have tended to focus on detecting drug-disease pairs located in the same sentence.

However, health records contain implicit information that might reveal underlying relations (for example, information about antecedents might be relevant for determining the causes of an adverse event). In other words, future research should strive to detect both explicitly and implicitly-stated inter-sentence relationships'. Moreover, another issue that should be the subject of future research is the lack of electronic health records written in Spanish.

Credit: 
University of the Basque Country

Making sense of commotion under the ocean to locate tremors near deep-sea faults

image: Using a method to better locate the source of weak tremors from regions with complex geological features, researchers from Kyushu University's International Institute for Carbon-Neutral Energy Research found that many tremors originate from the shear zone, an area of high fluid pressure within the pores of rocks, in the Nankai Trough, which is schematically shown here with the structures of the tectonic plates and fault lines. More accurately determined tremor sources and related geophysical properties that can be obtained from such information will aid the monitoring and modelling of large earthquakes along plate interfaces.

Image: 
Takeshi Tsuji, Kyushu University

Researchers from Japan and Indonesia have pioneered a new method for more accurately estimating the source of weak ground vibrations in areas where one tectonic plate is sliding under another in the sea. Applying the approach to Japan's Nankai Trough, the researchers were able to estimate previously unknown properties in the region, demonstrating the method's promise to help probe properties needed for better monitoring and understanding larger earthquakes along this and other plate interfaces.

Episodes of small, often imperceptible seismic events known as tremors occur around the world and are particularly common in areas near volcanoes and subduction zones--regions where one of the massive plates forming Earth's outer layers slides under another. Though they may be weak, studying these vibrations is important for estimating features of the associated tectonic plate boundaries and is necessary for detecting slipping among the plates that can be used to warn against larger earthquake events and tsunamis.

"Tremor episodes occur frequently in subduction zones, but their point of origin can be difficult to determine as they have no clear onset features like the sudden, strong shaking seen with ordinary earthquakes," explains Takeshi Tsuji, leader of the study's research team from Kyushu University's International Institute for Carbon-Neutral Energy Research (I2CNER).

"Current techniques to identify their source rely on waveform readings from nearby seismic stations together with a modelling system, but complex geological structures can greatly influence the results, leading to inaccurate travel times."

The I2CNER team developed the new methodology to take into account some of the complexities of subduction zones such as the Nankai Trough and estimate more accurate travel times from source to station. The novel approach involves a model that does not rely on a constant waveform and also considers the relationships between tremors detected at all possible pairs of monitoring stations.
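The flavour of a station-pair approach can be conveyed with a generic sketch: given differential arrival times between every pair of stations, a grid search finds the source location that best explains them. The example below assumes a uniform velocity and synthetic, noise-free data; it is not the authors' algorithm, which accounts for the Nankai Trough's complex structure.

```python
# Generic station-pair differential-travel-time location by grid search.
# Uniform velocity and synthetic data; illustrative only, not the published method.
import itertools
import numpy as np

stations = np.array([[0.0, 0.0], [40.0, 5.0], [10.0, 35.0], [45.0, 40.0]])  # km
v = 3.5                               # assumed wave speed, km/s
true_src = np.array([25.0, 20.0])

def travel_time(src, sta):
    return np.linalg.norm(src - sta) / v

pairs = list(itertools.combinations(range(len(stations)), 2))
obs_dt = np.array([travel_time(true_src, stations[i]) - travel_time(true_src, stations[j])
                   for i, j in pairs])

# Grid search for the source that best matches all pairwise differential times.
best, best_misfit = None, np.inf
for x in np.linspace(0, 50, 201):
    for y in np.linspace(0, 50, 201):
        cand = np.array([x, y])
        dt = np.array([travel_time(cand, stations[i]) - travel_time(cand, stations[j])
                       for i, j in pairs])
        misfit = np.sum((dt - obs_dt) ** 2)
        if misfit < best_misfit:
            best, best_misfit = cand, misfit

print("estimated source:", best)  # converges near the true source at (25, 20)
```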

"Applying this method to the Nankai Trough, we found that most tremors occurred in areas of high fluid pressure called the shear zone on the plate boundary," says study lead author Andri Hendriyana.

"The thickness of the shear zone was found to be a major controlling factor for the tremor epicentre, with the tremor sequence initiating at regions where fluid pressures within the rocks are the greatest."

Having better determined the locations of several tremors, the researchers could also more accurately estimate the speed of tremor propagation. Using this information, the team was then able to estimate how easily liquids can move through the deep fault. Known as permeability, this property is important for evaluating earthquake rupture processes and had never before been reported for the deep plate interface of the Nankai Trough.

"Accurately determining tremor source and related geophysical properties is crucial in the monitoring and modelling of larger earthquakes along the plate interface," comments Tsuji. "Our method can also be applied in other regions where tremor location estimation is difficult because of a complex geography to better obtain this vital information."

Credit: 
Kyushu University

Tantalizing signs of phase-change 'turbulence' in RHIC collisions

image: The STAR detector at the U.S. Department of Energy's Brookhaven National Laboratory

Image: 
Brookhaven National Laboratory

UPTON, NY—Physicists studying collisions of gold ions at the Relativistic Heavy Ion Collider (RHIC), a U.S. Department of Energy Office of Science user facility for nuclear physics research at DOE’s Brookhaven National Laboratory, are embarking on a journey through the phases of nuclear matter—the stuff that makes up the nuclei of all the visible matter in our universe. A new analysis of collisions conducted at different energies shows tantalizing signs of a critical point—a change in the way that quarks and gluons, the building blocks of protons and neutrons, transform from one phase to another. The findings, just published by RHIC’s STAR Collaboration in the journal Physical Review Letters, will help physicists map out details of these nuclear phase changes to better understand the evolution of the universe and the conditions in the cores of neutron stars.

“If we are able to discover this critical point, then our map of nuclear phases—the nuclear phase diagram—may find a place in the textbooks, alongside that of water,” said Bedanga Mohanty of India’s National Institute of Science and Research, one of hundreds of physicists collaborating on research at RHIC using the sophisticated STAR detector.

As Mohanty noted, studying nuclear phases is somewhat like learning about the solid, liquid, and gaseous forms of water, and mapping out how the transitions take place depending on conditions like temperature and pressure. But with nuclear matter, you can’t just set a pot on the stove and watch it boil. You need powerful particle accelerators like RHIC to turn up the heat.

RHIC’s highest collision energies “melt” ordinary nuclear matter (atomic nuclei made of protons and neutrons) to create an exotic phase called a quark-gluon plasma (QGP). Scientists believe the entire universe existed as QGP a fraction of a second after the Big Bang—before it cooled and the quarks bound together (glued by gluons) to form protons, neutrons, and eventually, atomic nuclei. But the tiny drops of QGP created at RHIC measure a mere 10⁻¹³ centimeters across (that’s 0.0000000000001 cm) and they last for only 10⁻²³ seconds! That makes it incredibly challenging to map out the melting and freezing of the matter that makes up our world.

“Strictly speaking if we don’t identify either the phase boundary or the critical point, we really can’t put this [QGP phase] into the textbooks and say that we have a new state of matter,” said Nu Xu, a STAR physicist at DOE’s Lawrence Berkeley National Laboratory.

Tracking phase transitions

To track the transitions, STAR physicists took advantage of the incredible versatility of RHIC to collide gold ions (the nuclei of gold atoms) across a wide range of energies.

“RHIC is the only facility that can do this, providing beams from 200 billion electron volts (GeV) all the way down to 3 GeV. Nobody can dream of such an excellent machine,” Xu said.

The changes in energy turn the collision temperature up and down and also vary a quantity known as net baryon density that is somewhat analogous to pressure. Looking at data collected during the first phase of RHIC’s “beam energy scan” from 2010 to 2017, STAR physicists tracked particles streaming out at each collision energy. They performed a detailed statistical analysis of the net number of protons produced. A number of theorists had predicted that this quantity would show large event-by-event fluctuations as the critical point is approached.

The reason for the expected fluctuations comes from a theoretical understanding of the force that governs quarks and gluons. That theory, known as quantum chromodynamics, suggests that the transition from normal nuclear matter (“hadronic” protons and neutrons) to QGP can take place in two different ways. At high temperatures, where protons and anti-protons are produced in pairs and the net baryon density is close to zero, physicists have evidence of a smooth crossover between the phases. It’s as if protons gradually melt to form QGP, like butter gradually melting on a counter on a warm day. But at lower energies, they expect what’s called a first-order phase transition—an abrupt change like water boiling at a set temperature as individual molecules escape the pot to become steam. Nuclear theorists predict that in the QGP-to-hadronic-matter phase transition, net proton production should vary dramatically as collisions approach this switchover point.

“At high energy, there is only one phase. The system is more or less invariant, normal,” Xu said. “But when we change from high energy to low energy, you also increase the net baryon density, and the structure of matter may change as you are going through the phase transition area.

“It’s just like when you ride an airplane and you get into turbulence,” he added. “You see the fluctuation—boom, boom, boom. Then, when you pass the turbulence—the phase of structural changes—you are back to normal into the one-phase structure.”

In the RHIC collision data, the signs of this turbulence are not as apparent as food and drinks bouncing off tray tables in an airplane. STAR physicists had to perform what’s known as “higher order correlation function” statistical analysis of the distributions of particles—looking beyond the mean and width of the curve representing the data to things like how asymmetrical and skewed that distribution is.
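In practice this means computing higher moments, or cumulants, of the event-by-event net-proton distribution. A minimal illustration with synthetic numbers standing in for real events:

```python
# Mean, width, skewness and kurtosis of an event-by-event net-proton distribution.
# Synthetic toy data, not STAR measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
net_protons = rng.poisson(20, size=100_000) - rng.poisson(14, size=100_000)  # toy events

mean = net_protons.mean()
sigma = net_protons.std()
skewness = stats.skew(net_protons)
kurt = stats.kurtosis(net_protons)   # excess kurtosis
print(f"mean={mean:.2f}  sigma={sigma:.2f}  skewness={skewness:.3f}  kurtosis={kurt:.3f}")
# Near a critical point, theory predicts that such higher-order quantities fluctuate
# strongly with collision energy instead of following a smooth baseline.
```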

The oscillations they see in these higher orders, particularly the skewness and kurtosis, are reminiscent of another famous phase change, observed when transparent liquid carbon dioxide suddenly becomes cloudy as it is heated toward its critical point, the scientists say. This “critical opalescence” comes from dramatic fluctuations in the density of the CO2—variations in how tightly packed the molecules are.

“In our data, the oscillations signify that something interesting is happening, like the opalescence,” Mohanty said.

Yet despite the tantalizing hints, the STAR scientists acknowledge that the range of uncertainty in their measurements is still large. The team hopes to narrow that uncertainty to nail their critical point discovery by analyzing a second set of measurements made from many more collisions during phase II of RHIC’s beam energy scan, from 2019 through 2021.

The entire STAR collaboration was involved in the analysis, Xu notes, with a particular group of physicists—including Xiaofeng Luo (and his student, Yu Zhang), Ashish Pandav, and Toshihiro Nonaka, from China, India, and Japan, respectively—meeting weekly with the U.S. scientists (over many time zones and virtual networks) to discuss and refine the results. The work is also a true collaboration of the experimentalists with nuclear theorists around the world and the accelerator physicists at RHIC. The latter group, in Brookhaven Lab’s Collider-Accelerator Department, devised ways to run RHIC far below its design energy while also maximizing collision rates to enable the collection of the necessary data at low collision energies.

“We are exploring uncharted territory,” Xu said. “This has never been done before. We made lots of efforts to control the environment and make corrections, and we are eagerly awaiting the next round of higher statistical data,” he said.

Credit: 
DOE/Brookhaven National Laboratory

Huntington's disease driven by slowed protein-building machinery in cells

image: Disease-causing huntingtin, shown in red, interacts with ribosomes, shown in green, in a striatal neuron. The nucleus is blue.

Image: 
Image by Nicolai Urban of Max Planck Institute for Neuroscience in Jupiter, Florida.

JUPITER, FL -- In 1993, scientists discovered that a single mutated gene, HTT, caused Huntington's disease, raising high hopes for a quick cure. Yet today, there's still no approved treatment.

One difficulty has been a limited understanding of how the mutant huntingtin protein sets off brain cell death, says neuroscientist Srinivasa Subramaniam, PhD, of Scripps Research, Florida. In a new study published in Nature Communications on Friday, Subramaniam's group has shown that the mutated huntingtin protein slows brain cells' protein-building machines, called ribosomes.

"The ribosome has to keep moving along to build the proteins, but in Huntington's disease, the ribosome is slowed," Subramaniam says. "The difference may be two, three, four-fold slower. That makes all the difference."

Cells contain millions of ribosomes each, all whirring along and using genetic information to assemble amino acids and make proteins. Impairment of their activity is ultimately devastating for the cell, Subramaniam says.

"It's not possible for the cell to stay alive without protein production," he says.

The team's discoveries were made possible by recent advancements in gene translation tracking technologies, Subramaniam says. The results suggest a new route for development of therapeutics, and have implications for multiple neurodegenerative diseases in which ribosome stalling appears to play a role.

Huntington's disease affects about 10 people per 100,000 in the United States. It is caused by an excessive number of genetic repeats of three DNA building blocks, known by the letters CAG, short for cytosine, adenine and guanine. Forty or more of these repeats in the HTT gene cause the degenerative brain disease, which is ultimately fatal. The more repeats present, the earlier the onset of symptoms, which include behavioral disturbances, movement and balance difficulty, weakness and difficulty speaking and eating. The symptoms are caused by degeneration of brain tissue that begins in a region called the striatum, and then spreads. The striatum is the region deep in the center of the brain that controls voluntary movement and responds to social reward.

For their experiments, the scientists used striatal cells engineered to have three different degrees of CAG repeats in the HTT gene. They assessed the impact of the CAG repeats using a technology called Ribo-Seq, short for high-resolution global ribosome footprint profiling, plus mRNA-seq, a method that allows a snapshot of which genes are active, and which are not in a given cell at a given moment.
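One common way such paired data are summarised, given here only as a generic simplification rather than the study's full analysis, is a per-gene translation efficiency: the ratio of normalised ribosome-footprint counts to normalised mRNA counts.

```python
# Generic translation-efficiency calculation from paired Ribo-Seq and mRNA-seq counts.
# All counts are invented; the study's analysis is considerably more involved.
import pandas as pd

df = pd.DataFrame({
    "gene":        ["GeneX", "GeneY", "GeneZ"],
    "ribo_counts": [1500, 300, 800],    # ribosome footprint reads
    "mrna_counts": [1000, 1000, 400],   # mRNA-seq reads
})

# Normalise each library to counts per million, then take the ratio.
for col in ["ribo_counts", "mrna_counts"]:
    df[col.replace("counts", "cpm")] = df[col] / df[col].sum() * 1e6

df["translation_efficiency"] = df["ribo_cpm"] / df["mrna_cpm"]
print(df[["gene", "translation_efficiency"]])
# The study compared profiles like these between Huntington's-model and control striatal cells.
```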

The scientists found that in the Huntington's cells, translation of many, though not all, proteins was slowed. To verify the finding, they blocked the cells' ability to make mutant huntingtin protein, and found the speed of ribosome movement and protein synthesis increased. They also assessed how mutant huntingtin protein impacted translation of other genes, and ruled out the possibility that another ribosome-binding protein, Fmrp, might be causing the slowing effect.

Further experiments offered some clues as to how the mutant huntingtin protein interfered with the ribosomes' work. They found that it binds directly to ribosomal proteins and the ribosomal assembly, affecting not only the speed of protein synthesis but also the density of ribosomes within the cell.

Many questions remain, Subramaniam says, but the advance offers a new direction for helping people with Huntington's disease.

"The idea that the ribosome can stall before a CAG repeat is something people have predicted. We can show that it's there," Subramaniam says. "There's a lot of additional work that needs to be done to figure out how the CAG repeat stalls the ribosome, and then perhaps we can make medications to counteract it."

Credit: 
Scripps Research Institute