Tech

Computing collaboration reveals global ripple effect of shifting monsoons

image: Members of the international team simulated changes to the start times of monsoon seasons across the globe, with warm colors representing onset delays.

Image: 
Moetasim Ashfaq and Adam Malin/Oak Ridge National Laboratory, U.S. Dept. of Energy

Scientists from the Department of Energy’s Oak Ridge National Laboratory and a dozen other international research institutions have produced the most elaborate set of projections to date that illustrates possible futures for major monsoon regions.

Multiple regions around the world plan energy production, agricultural practices and other essential economic endeavors based on the annual arrival of monsoons, which entails a seasonal shift in the direction of winds that provides periods of steady rainfall. However, unchecked greenhouse gas emissions could disrupt these traditionally predictable events.

Using RegCM4, the latest version of a popular regional climate model developed by the International Centre for Theoretical Physics in Italy, the team ran a series of simulations to project and evaluate changes in nine monsoon regions across five continents. The researchers designed the simulations on a tight grid over each region, with spacing of less than 16 miles, which provided a substantial level of detail.

The team, part of a global effort called the Coordinated Regional Downscaling Experiment, or CORDEX, published its findings in Climate Dynamics.

“This is the first time that a regional climate model has been used to provide a global view of changes in monsoons,” said lead author Moetasim Ashfaq, a climate computational scientist at ORNL. “It took a great deal of time and effort to compile and analyze such high-profile, high-resolution data, and these detailed simulations would not have been possible without a significant international collaboration.”

ORNL researchers simulated the South Asian monsoon region using resources of the laboratory’s Compute and Data Environment for Science and the compute cluster Eos, and the rest of the simulations were conducted at various other computing centers. The team uncovered commonalities in regional monsoon responses to increases in greenhouse gas emissions. These responses included monsoon onset delays, shorter monsoon seasons and more intense seasonal fluctuation.

The simulations predicted and compared changes that would occur under different scenarios provided by the Intergovernmental Panel on Climate Change, or IPCC, known as Representative Concentration Pathways, or RCPs: RCP8.5 and RCP2.6.

RCP8.5 assumes that carbon emissions follow a “business as usual” scenario without policy interventions, whereas RCP2.6 is based on much lower increases in emissions with aggressive mitigation policies. Although the monsoon patterns will likely change for both RCPs, the simulations revealed that the amount of change would likely be minimal under RCP2.6 but could be significant under RCP8.5.

“If emissions are reduced based on RCP2.6 out to the year 2100, the simulations show that the long, damaging shifts in monsoon behaviors can mostly be avoided,” Ashfaq said. “If you look at the best-case scenario, we do still see changes, but they are insignificantly different from the typical year-to-year variation in regional monsoons that communities are already accustomed to.”

Seasons of change

Seven of the nine monsoon regions showed a gradual delay in monsoon onset with a continuous increase in global emissions, which could create wide-ranging consequences that directly affect approximately two-thirds of the world’s population by the end of this century. Unlike the areas that receive relatively even amounts of precipitation in all seasons, heavily populated monsoon regions receive 60% to 70% of their precipitation during the summer monsoon season.

“The RCP8.5 simulations reveal robust delays in the start of rainy seasons that ripple through many aspects of everyday life in these regions,” Ashfaq said. “For example, a monsoon that usually starts in the first week of June in South Asia and West Africa may be delayed by 15 to 20 days, or even an entire month over parts of these regions, by the end of the 21st century.”

Although the simulations also showed a delay in the end of the rainy season, otherwise known as monsoon demise, this shift was not nearly as dramatic as the delay in monsoon onset, shortening the length of the entire monsoon season. The researchers also discovered that affected monsoon regions are likely to see more precipitation during that period, leading to more intense rains. Conversely, the rest of the year would see longer dry periods.

This increased seasonality could exacerbate the prevalence of floods, droughts, wildfires and other extreme climate events that already pose challenges to these regions. Significant changes in monsoon behavior could also contribute to outbreaks of waterborne and vector-borne diseases, such as cholera, dengue and malaria.

Since agricultural activities in monsoon regions are typically timed to coincide with the periodic onset and demise of the rainy season, these shifts could alter the yields of rain-dependent crops.

“More than half of the world’s arabica coffee supply is produced in Brazil, and more than 70% of the cacao used to make chocolate comes from West Africa, whereas more than one-third of rice exports come from India and Pakistan,” Ashfaq said. “If regional agriculture is subjected to monsoon onset delays and shorter rainy seasons, production of these types of commodities will be reduced and have a significant impact on the global economy.”

Many countries located in these regions rely on hydropower to generate electricity, including Brazil, which produces 75% of its electricity this way. Shorter monsoon seasons would not provide enough rainfall at the correct time to supply adequate power without overhauling current operations.

A delicate balance

In addition to identifying potential monsoon changes and their implications, the team also investigated the root causes responsible for these shifts.

In the absence of organized weather systems and a sustained moisture supply, the relatively dry pre-monsoon season receives only intermittent, thermally driven convective rainfall. Land in these regions warms every year during the pre-monsoon period, commonly reaching surface temperatures of 120 degrees Fahrenheit. The combination of convective precipitation warming the upper atmosphere and hot surface conditions warming the lower atmosphere creates a disparity between the warm air over land and the cooler air over the ocean that forces the dry season to give way to monsoon rains.

However, the simulations revealed that a continuous increase in global emissions will make the pre-monsoon environment less conducive to convective precipitation, which will delay the warming of the upper atmosphere and the transition from the dry to the rainy season. One key factor the researchers determined will decrease convective rainfall during the pre-monsoon period is the formation of a deeper and less saturated boundary layer – a part of the lower atmosphere where moisture and energy are exchanged between the land and the atmosphere.

“The upward force needed to lift air parcels to their level of free convection increases with the depth of the boundary layer,” Ashfaq said. “And the warmer the atmosphere, the more moisture needed for convective instability, which is essential for the development of thunderstorms. Fulfilling the requirement during the pre-monsoon period is challenging because of the limited moisture supply as winds blow away from the land.”
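The quantity behind this argument can be made concrete with a standard textbook expression (a generic illustration, not a formula taken from the study). The barrier an air parcel must overcome before reaching its level of free convection (LFC) is the convective inhibition,

\[ \mathrm{CIN} = -\int_{z_{\mathrm{sfc}}}^{z_{\mathrm{LFC}}} g\,\frac{T_{v,\mathrm{parcel}}(z) - T_{v,\mathrm{env}}(z)}{T_{v,\mathrm{env}}(z)}\,dz, \]

where T_v is virtual temperature. A deeper, drier boundary layer raises the LFC and enlarges this barrier, so fewer parcels reach free convection and pre-monsoon thunderstorms become less frequent, consistent with the delay mechanism Ashfaq describes.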

The team will contribute their CORDEX simulations to the regional climate change chapter of the next IPCC assessment.

This research used resources of the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility located at ORNL. The team received support from the National Climate Computing Research Center, a collaboration between DOE and the National Oceanic and Atmospheric Administration.

UT-Battelle LLC manages Oak Ridge National Laboratory for DOE’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science. —Elizabeth Rosenthal

Journal

Climate Dynamics

DOI

10.1007/s00382-020-05306-2

Credit: 
DOE/Oak Ridge National Laboratory

No keys to the kingdom: New single sign-on algorithm provides superior privacy

image: A new and secure single sign-on algorithm that eliminates all chances of gathering and leaking personal information without user consent.

Image: 
Tokyo University of Science

Over the last few decades, as the information era has matured, it has shaped the world of cryptography and made it a varied landscape. Amongst the myriad of encoding methods and cryptosystems currently available for ensuring secure data transfers and user identification, some have become quite popular because of their safety or practicality. For example, if you have ever been given the option to log onto a website using your Facebook or Gmail ID and password, you have encountered a single sign-on (SSO) system at work. The same goes for most smartphones, where signing in with a single username and password combination allows access to many different services and applications.

SSO schemes give users the option to access multiple systems by signing in to just one specific system. This specific system is called the "identity provider" and is regarded as a trusted entity that can verify and store the identity of the user. When the user attempts to access a service via the SSO, the "service provider" asks this identity provider to authenticate the user.
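To make that flow concrete, here is a minimal Python sketch of a conventional token-based sign-on handshake. It illustrates only the generic roles described above; the class names, the HMAC-signed token and the shared key are illustrative assumptions, not the algorithm proposed by the Tokyo University of Science team.

import hashlib
import hmac
import json
import time

class IdentityProvider:
    """Trusted party that verifies the user and issues a signed assertion (hypothetical)."""
    def __init__(self, signing_key):
        self._key = signing_key
        self._users = {"alice": "correct-horse-battery-staple"}   # toy credential store

    def authenticate(self, username, password):
        if self._users.get(username) != password:
            return None
        payload = json.dumps({"sub": username, "iat": int(time.time())})
        signature = hmac.new(self._key, payload.encode(), hashlib.sha256).hexdigest()
        return payload + "." + signature          # token the user presents to services

class ServiceProvider:
    """Relying party: trusts the identity provider's signature, never sees the password."""
    def __init__(self, signing_key):
        self._key = signing_key

    def grant_access(self, token):
        payload, _, signature = token.rpartition(".")
        expected = hmac.new(self._key, payload.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(signature, expected)

shared_key = b"key-shared-by-idp-and-sp"
idp = IdentityProvider(shared_key)
sp = ServiceProvider(shared_key)
token = idp.authenticate("alice", "correct-horse-battery-staple")
print(sp.grant_access(token))    # True: the service provider only checks the signed assertion

Note that in this conventional arrangement the signed assertion still carries the user's identity to the service provider, which is exactly the kind of information flow the new algorithm is designed to avoid.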

The advantages of SSO systems are many. For one, users need not remember several username and password combinations for each website or application. This translates into fewer people forgetting their passwords and, in turn, fewer telephone calls to IT support centers. Moreover, SSO reduces the hassle of logging in, which can, for example, encourage employees to use their company's security-oriented tools for tasks such as secure file transfer.

But with these advantages come some grave concerns. SSO systems are often run by Big Tech companies, who have, in the past, been reported to gather people's personal information from apps and websites (service providers) without their consent, for targeted advertising and other marketing purposes. Some people are also concerned that their ID and password could be stored locally by third parties when they provide them to the SSO mechanism.

In an effort to address these problems, Associate Professor Satoshi Iriyama from Tokyo University of Science and his colleague Dr Maki Kihara have recently developed a new SSO algorithm that on principle prevents such holistic information exchange. In their paper, published in Cryptography, they describe the new algorithm in great detail after going over their motivations for developing it. Dr Iriyama states: "We aimed to develop an SSO algorithm that does not disclose the user's identity and sensitive personal information to the service provider. In this way, our SSO algorithm uses personal information only for authentication of the user, as originally intended when SSO systems were introduced."

Because of the way this SSO algorithm is designed, it is essentially impossible for user information to be disclosed without authorization. This is achieved, as explained by Dr Iriyama, by applying the principle of "handling information while it is still encrypted." In their SSO algorithm, all parties exchange encrypted messages but never exchange decryption keys, and no one is ever in possession of all the pieces of the puzzle because no one has the keys to all the information. While the service provider (not the identity provider) gets to know whether a user was successfully authenticated, it does not get access to the user's identity or any of their sensitive personal information. This in turn breaks the link that allows identity providers to draw specific user information from service providers.

The proposed scheme offers many other advantages. In terms of security, it is impervious by design to all typical forms of attack by which information or passwords are stolen. For instance, as Dr Iriyama explains, "Our algorithm can be used not only with an ID and a password, but also with any other type of identity information, such as biometrics, credit card data, and unique numbers known by the user." This also means that users can only provide identity information that they wish to disclose, reducing the risk of Big Tech companies or other third parties siphoning off personal information. In addition, the algorithm runs remarkably fast, an essential quality to ensure that the computational burden does not hinder its implementation.

This study will hopefully bring about positive changes in current SSO systems, so that more users are encouraged to use them and reap their many benefits.

Credit: 
Tokyo University of Science

Cartwheeling light reveals new optical phenomenon

image: Rice University graduate student Lauren McCarthy led an effort that discovered details about a novel type of polarized light-matter interaction involving light that literally turns end over end as it propagates from a source.

Image: 
Jeff Fitlow/Rice University

HOUSTON - (June 29, 2020) - A scientist might want to do cartwheels upon making a discovery, but this time the discovery itself relies on cartwheels.

Researchers at Rice University have discovered details about a novel type of polarized light-matter interaction involving light that literally turns end over end as it propagates from a source. Their finding could help in the study of molecules, such as those in light-harvesting antennas, that are anticipated to have unique sensitivity to the phenomenon.

The researchers observed the effect they call trochoidal dichroism in the light scattered by two coupled dipole-scatterers, in this case a pair of closely-spaced plasmonic metal nanorods, when they were excited by the cartwheeling light.

The light polarization the researchers used is fundamentally different from the linear polarization that makes sunglasses work and corkscrew-like circularly polarized light used in circular dichroism to study the conformation of proteins and other small molecules.

Instead of taking a helical form, the field of light is flat as it cartwheels -- rotating either clockwise or anticlockwise -- away from the source like a rolling hula hoop. This type of light polarization, called trochoidal polarization, has been observed previously, said Rice graduate student and lead author Lauren McCarthy, but nobody knew that plasmonic nanoparticles could be used to see how it rolled.
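A minimal way to write such a cartwheeling field (an illustrative form, not the expression used in the paper) is a wave propagating along x whose electric field has quadrature components along x and z:

\[ \mathbf{E}(x,t) = E_0\left[\hat{x}\cos(kx-\omega t) - \hat{z}\sin(kx-\omega t)\right]. \]

At a fixed point the field vector rotates in the x-z plane, which contains the propagation direction, rather than in the transverse plane as it does for circularly polarized light; a point riding on the rotating vector traces a trochoid, the curve drawn by a point on a rolling wheel, which is where the name comes from.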

"Now we know how trochoidal polarizations relate to existing light-matter interactions," she said. "There's a difference between understanding the light and its physical properties and understanding light's influence on matter. The differential interaction with matter, based on the material's geometry, is the new piece here."

The discovery by the Rice lab of chemist Stephan Link is detailed in the Proceedings of the National Academy of Sciences.

The researchers weren't looking specifically for trochoidal dichroism. They were generating an evanescent field in a technique they developed to study chiral gold nanoparticles to see how spatially-confined, left- and right-handed circularly polarized light interacted with matter. Freely propagating circularly polarized light interactions are key to several technologies, including 3D glasses made of materials that discriminate between opposite light polarizations, but are not as well-understood when light is confined to small spaces at interfaces.

Instead of the circularly polarized light used before, the authors changed the incident light polarization in order to generate an evanescent field with cartwheeling waves. The researchers found that the clockwise and anticlockwise trochoidal polarizations interacted differently with pairs of plasmonic nanorods oriented at 90 degrees from each other. Specifically, the wavelengths of light the nanorod pairs scattered changed when the trochoidal polarization changed from clockwise to anticlockwise, which is a characteristic of dichroism.

"Trochoidal waves have been discussed, and different groups have probed their properties and applications," McCarthy said. "However, as far as we know, no one's observed that a material's geometry can enable differential interactions with anticlockwise versus clockwise trochoidal waves."

Molecules interact with light through their electric and magnetic dipoles. The researchers noted that molecules with electric and magnetic dipoles that are perpendicular to each other, as with the 90-degree nanoparticles, have charge motion that rotates in-plane when excited. Trochoidal dichroism could be used to determine the direction of this rotation, which would reveal molecular orientation.

Exciting self-assembled gold nanorod dimers also revealed subtle trochoidal dichroism effects, showing the phenomenon isn't limited to strictly fabricated nanoparticles arranged at 90 degrees.

"Having worked with polarized light interacting with plasmonic nanostructures for a long time now, the current discovery is certainly special in several ways," Link said. "Finding a new form of polarized-light matter interaction is exciting by itself. Equally rewarding was the process of the discovery, though, as Lauren and my former student, Kyle Smith, pushed me to keep up with their results. At the end it was real team effort by all co-authors of which I am very proud."

Credit: 
Rice University

Artificial intelligence identifies, locates seizures in real-time

image: This gif was recorded during two seizures, one at 2950 seconds, the other at 9200. The top left animation is of EEG signals from three electrodes. The top right is a map of the inferred network. The third animation plots the Fiedler eigenvalue, the single value used to detect seizures using the network inference technique.

Image: 
Li Lab

Researchers from Washington University in St. Louis' McKelvey School of Engineering have combined artificial intelligence with systems theory to develop a more efficient way to detect and accurately identify an epileptic seizure in real-time.

Their results were published May 26 in the journal Scientific Reports.

The research comes from the lab of Jr-Shin Li, professor in the Preston M. Green Department of Electrical & Systems Engineering, and was headed by Walter Bomela, a postdoctoral fellow in Li's lab.

Also on the research team were Shuo Wang, a former student of Li's and now assistant professor at the University of Texas at Arlington, and Chu-An Chou of Northeastern University.

"Our technique allows us to get raw data, process it and extract a feature that's more informative for the machine learning model to use," Bomela said. "The major advantage of our approach is to fuse signals from 23 electrodes to one parameter that can be efficiently processed with much less computing resources."

In brain science, the current understanding of most seizures is that they occur when normal brain activity is interrupted by a strong, sudden hyper-synchronized firing of a cluster of neurons. During a seizure, if a person is hooked up to an electroencephalograph -- a device known as an EEG that measures electrical output -- the abnormal brain activity is presented as amplified spike-and-wave discharges.

"But the seizure detection accuracy is not that good when temporal EEG signals are used," Bomela said. The team developed a network inference technique to facilitate detection of a seizure and pinpoint  its location with improved accuracy.

During an EEG session, a person has electrodes attached to different spots on his/her head, each recording electrical activity around that spot.

"We treated EEG electrodes as nodes of a network. Using the recordings (time-series data) from each node, we developed a data-driven approach to infer time-varying connections in the network or relationships between nodes," Bomela said. Instead of looking solely at the EEG data -- the peaks and strengths of individual signals -- the network technique considers relationships. "We want to infer how a brain region is interacting with others," he said.

It is the sum of these relationships that form the network.

Once you have a network, you can measure its parameters holistically. For instance, instead of measuring the strength of a single signal, the overall network can be evaluated for strength. There is one parameter, called the Fiedler eigenvalue, which is of particular use. "When a seizure happens, you will see this parameter start to increase," Bomela said.

And in network theory, the Fiedler eigenvalue is also related to a network's synchronicity -- the bigger the value the more the network is synchronous. "This agrees with the theory that during seizure, the brain activity is synchronized," Bomela said.

A bias toward synchronization also helps eliminate artifact and background noise. If a person, for instance, scratches their arm, the associated brain activity will be captured on some EEG electrodes or channels. It will not, however, be synchronized with seizure activity. In that way, this network structure inherently reduces the importance of unrelated signals; only brain activities that are in sync will cause a significant increase of the Fiedler eigenvalue.
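The single-parameter idea can be sketched in a few lines of Python. This is a generic illustration only: it builds the network from simple windowed channel correlations and uses placeholder data, whereas the team estimated time-varying connections with its own data-driven inference method.

import numpy as np

def fiedler_eigenvalue(eeg_window):
    """eeg_window: array of shape (n_channels, n_samples) for one time window."""
    adjacency = np.abs(np.corrcoef(eeg_window))      # simple edge weights between electrodes
    np.fill_diagonal(adjacency, 0.0)
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
    eigenvalues = np.sort(np.linalg.eigvalsh(laplacian))
    return eigenvalues[1]                            # second-smallest eigenvalue of the graph Laplacian

# Slide a window across a 23-channel recording and watch for a sharp rise in the value.
rng = np.random.default_rng(0)
recording = rng.standard_normal((23, 10_000))        # placeholder for real EEG data
window, step = 512, 256
trace = [fiedler_eigenvalue(recording[:, start:start + window])
         for start in range(0, recording.shape[1] - window, step)]

A larger Fiedler eigenvalue indicates a more synchronized network, so a sustained jump in the trace flags the kind of hyper-synchronized activity described above.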

Currently this technique works for an individual patient. The next step is to integrate machine learning to generalize the technique for identifying different types of seizures across patients.

The idea is to take advantage of various parameters characterizing the network and use them as features to train the machine learning algorithm.

Bomela likens the way this will work to facial recognition software, which measures different features -- eyes, lips and so on -- generalizing from those examples to recognize any face.

"The network is like a face," he said. "You can extract different parameters from an individual's network -- such as the clustering coefficient or closeness centrality -- to help machine learning differentiate between different seizures."

That's because in network theory, similarities in specific parameters are associated with specific networks. In this case, those networks will correspond to different types of seizures.

One day, a person with a seizure disorder could wear a device analogous to an insulin pump. As the neurons begin to synchronize, the device would deliver medication or electrical interference to stop the seizure in its tracks.

Before this can happen, researchers need a better understanding of the neural network.

"While the ultimate goal is to refine the technique for clinical use, right now we are focused on developing methods to identify seizures as drastic changes in brain activity," Li said. "These changes are captured by treating the brain as a network in our current method."

Credit: 
Washington University in St. Louis

Faster processing makes cutting-edge fluorescence microscopy more accessible

image: Two views of a 32-hour old zebrafish embryo. The blue fluorescence shows the edges of the cells.

Image: 
National Institute of Biomedical Imaging and Bioengineering

Scientists have developed new image processing techniques for microscopes that can reduce post-processing time by up to several thousand-fold. The researchers are from the National Institutes of Health, with collaborators at the University of Chicago and Zhejiang University in China.

In a paper published in Nature Biotechnology, Hari Shroff, Ph.D., chief of the laboratory on High Resolution Optical Imaging at the National Institute of Biomedical Imaging and Bioengineering (NIBIB), describes new techniques that can significantly reduce the time needed to process the highly complex images created by the most cutting-edge microscopes. Such microscopes are often used to capture blood and brain cells moving through fish, visualize the neural development of worm embryos, and pinpoint individual organelles within entire organs.

As microscopes continue to get better, creating higher resolution images faster, researchers are finding they have more data than time to process it. While the videos themselves can be captured in minutes, the images could be terabytes in size and require weeks or, in some cases, months of processing time to be useable.

One reason it takes so long is that the videos often capture tiny objects that are blurred by the microscope. Such blurring can be reduced by a procedure called deconvolution, but this procedure requires a lot of computing power and time.

A second issue is that many of the microscopes in use today take multiple views of the same organism or cell. Those images need to be positioned correctly and then combined to make 3D images and video. Creating high resolution images from the raw data takes a significant amount of computer processing. And so, while the microscopes have developed to provide researchers with increasingly complex, high resolution images, computing power has limited what techniques are practical for researchers to use--since they know that the majority of the data they collect will go unused.

The first thing Shroff's lab and his collaborators attempted to do was modify the deconvolution algorithm that is used by many researchers, so it would run faster. This approach was originally proposed for other areas of medical imaging such as computed tomography (CT); however, this is the first time it was successfully adapted for use with fluorescence microscopy. Fluorescence microscopy uses dyes to improve contrast in the specimen, allowing researchers to focus on specific parts of a sample and see how different elements interact with each other.

Second, they reduced the time needed to position and stitch together multiple views of a sample. A key part of this advance relied on a process called parallelization. It is an approach that is sometimes used in supercomputing where instead of processing each individual function one after another, the job is broken up into smaller tasks that can be analyzed concurrently. It is like asking thousands of people to each solve one math problem simultaneously instead of asking one person to solve thousands of problems.
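As a toy illustration of that idea (not the authors' code, and with hypothetical function and variable names), a large image stack can be split into chunks, processed concurrently, and reassembled:

from concurrent.futures import ProcessPoolExecutor

import numpy as np

def process_chunk(chunk):
    # Stand-in for the real per-chunk work, such as registration or deconvolution.
    return chunk - chunk.mean()

if __name__ == "__main__":
    volume = np.random.rand(64, 512, 512)            # placeholder image stack
    chunks = np.array_split(volume, 8, axis=0)       # break one big job into smaller tasks
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(process_chunk, chunks))
    stitched = np.concatenate(results, axis=0)       # reassemble the processed volume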

Finally, the researchers showed that they could further reduce the time it takes to process the data by using a neural network, a kind of artificial intelligence (AI). AI is increasingly being used to assist imaging processing and diagnoses. In this case, Shroff and his team trained the neural network to produce cleaner and higher resolution images much more quickly than would be possible otherwise.

"Acquiring modern imaging data is a bit like drinking from a firehose," said Shroff. "These methods help us obtain valuable biological information faster, which is essential, given the massive amount of data that can be produced by these microscopes."

These advances expand the use of existing technology, including allowing for imaging of thick samples that produce huge amounts of image data when examined with fluorescence microscopes. The advances are also essential for the use of a growing number of 'computational microscopes' in which the post-processing of unintelligible raw data is an essential step in producing the final high-resolution image. Shroff and his collaborators hope these advances will let researchers pursue approaches they would not otherwise have thought to try, given how labor intensive it would be to create meaningful images.

Credit: 
NIH/National Institute of Biomedical Imaging & Bioengineering

Team dramatically reduces image analysis times using deep learning, other approaches

image: Lateral and axial images of 32-hour zebrafish embryo, marking cell boundaries within and outside the lateral line primordium.

Image: 
Harshad Vishwasrao and Damian Dalle Nogare, NIH

WOODS HOLE, Mass. - A picture is worth a thousand words -- but only when it's clear what it depicts. And therein lies the rub in making images or videos of microscopic life. While modern microscopes can generate huge amounts of image data from living tissues or cells within a few seconds, extracting meaningful biological information from that data can take hours or even weeks of laborious analysis.

To loosen this major bottleneck, a team led by MBL Fellow Hari Shroff has devised deep-learning and other computational approaches that dramatically reduce image-analysis time by orders of magnitude -- in some cases, matching the speed of data acquisition itself. They report their results this week in Nature Biotechnology.

"It's like drinking from a firehose without being able to digest what you're drinking," says Shroff of the common problem of having too much imaging data and not enough post-processing power. The team's improvements, which stem from an ongoing collaboration at the Marine Biological Laboratory (MBL), speed up image analysis in three major ways.

First, imaging data off the microscope is typically corrupted by blurring. To lessen the blur, an iterative "deconvolution" process is used. The computer goes back and forth between the blurred image and an estimate of the actual object, until it reaches convergence on a best estimate of the real thing.

By tinkering with the classic algorithm for deconvolution, Shroff and co-authors accelerated deconvolution by more than 10-fold. Their improved algorithm is widely applicable "to almost any fluorescence microscope," Shroff says. "It's a strict win, we think. We've released the code and other groups are already using it."
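In many fluorescence-microscopy pipelines the classic iterative scheme being described is Richardson-Lucy deconvolution. The sketch below is a bare-bones textbook 2D version, offered only to illustrate the back-and-forth estimate-and-compare loop; the accelerated algorithm released by the authors differs in its details.

import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iterations=30):
    """blurred: 2D float image; psf: 2D point spread function, normalized to sum to 1."""
    estimate = np.full_like(blurred, blurred.mean())             # start from a flat guess
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")      # blur the current guess
        ratio = blurred / np.maximum(reblurred, 1e-12)           # compare the guess with the data
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")  # use the mismatch to update
    return estimate

Each pass blurs the current estimate, compares it with the measured image, and uses the mismatch to refine the estimate, which is the convergence loop described above.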

Next, they addressed the problem of 3D registration: aligning and fusing multiple images of an object taken from different angles. "It turns out that it takes much longer to register large datasets, like for light-sheet microscopy, than it does to deconvolve them," Shroff says. They found several ways to accelerate 3D registration, including moving it to the computer's graphics processing unit (GPU). This gave them a 10- to more than 100-fold improvement in processing speed over using the computer's central processing unit (CPU).

"Our improvements in registration and deconvolution mean that for datasets that fit onto a graphics card, image analysis can in principle keep up with the speed of acquisition," Shroff says. "For bigger datasets, we found a way to efficiently carve them up into chunks, pass each chunk to the GPU, do the registration and deconvolution, and then stitch those pieces back together. That's very important if you want to image large pieces of tissue, for example, from a marine animal, or if you are clearing an organ to make it transparent to put on the microscope. Some forms of large microscopy are really enabled and sped up by these two advances."

Lastly, the team used deep learning to accelerate "complex deconvolution" - intractable datasets in which the blur varies significantly in different parts of the image. They trained the computer to recognize the relationship between badly blurred data (the input) and a cleaned, deconvolved image (the output). Then they gave it blurred data it hadn't seen before. "It worked really well; the trained neural network could produce deconvolved results really fast," Shroff says. "That's where we got thousands-fold improvements in deconvolution speed."

While the deep learning algorithms worked surprisingly well, "it's with the caveat that they are brittle," Shroff says. "Meaning, once you've trained the neural network to recognize a type of image, say a cell with mitochondria, it will deconvolve those images very well. But if you give it an image that is a bit different, say the cell's plasma membrane, it produces artifacts. It's easy to fool the neural network." An active area of research is creating neural networks that work in a more generalized way.

"Deep learning augments what is possible," Shroff says. "It's a good tool for analyzing datasets that would be difficult any other way."

Credit: 
Marine Biological Laboratory

Understanding of relaxor ferroelectric properties could lead to many advances

image: Chiral (mirror) molecules give relaxor ferroelectrics their amazing properties.

Image: 
MRI, Penn State

A new fundamental understanding of polymeric relaxor ferroelectric behavior could lead to advances in flexible electronics, actuators and transducers, energy storage, piezoelectric sensors and electrocaloric cooling, according to a team of researchers at Penn State and North Carolina State.

Researchers have debated the theory behind the mechanism of relaxor ferroelectrics for more than 50 years, said Qing Wang, professor of materials science and engineering at Penn State. While relaxor ferroelectrics are well-recognized, fundamentally fascinating and technologically useful materials, a Nature article commented in 2006 that they were heterogeneous, hopeless messes.

Without a fundamental understanding of the mechanism, little progress has been made in designing new relaxor ferroelectric materials. The new understanding, which relies on both experiment and theoretical modeling, shows that relaxor ferroelectricity in polymers comes from chain conformation disorders induced by chirality. Chirality is a feature of many organic molecules whose mirror images cannot be superimposed on the original, much like left and right hands. The relaxor mechanism in polymers is vastly different from the mechanism proposed for ceramics, whose relaxor behavior originates from chemical disorders.

"Different from ferroelectrics, relaxors exhibit no long-range large ferroelectric domains but disordered local polar domains," Wang explained. "The research in relaxor polymeric materials has been challenging owing to the presence of multiple phases such as crystalline, amorphous and crystalline-amorphous interfacial area in polymers."

In energy storage capacitors, relaxors can deliver a much higher energy density than normal ferroelectrics, which have high ferroelectric loss that turns into waste heat. In addition, relaxors can generate larger strain under the applied electric fields and have a much better efficiency of energy conversion than normal ferroelectrics, which makes them preferred materials for actuators and sensors.

Penn State has a long history of discovery in ferroelectric materials. Qiming Zhang, professor of electrical engineering at Penn State, discovered the first relaxor ferroelectric polymer in 1998, when he used an electron beam to irradiate a ferroelectric polymer and found it had become a relaxor. Zhang along with Qing Wang also made seminal discoveries in the electrocaloric effect using relaxor polymers, which allows for solid state cooling without the use of noxious gases and uses much less energy than conventional refrigeration.

"The new understanding of relaxor behavior would open up unprecedented opportunities for us to design relaxor ferroelectric polymers for a range of energy storage and conversion applications," said Wang.

Credit: 
Penn State

Osmotic stress identified as stimulator of cellular waste disposal

image: Image of mouse astrocytes showing the actin cytoskeleton (red) and lysosomes (green)

Image: 
Tania Lopez-Hernandez

Cellular waste disposal, where autophagy and lysosomes interact, performs elementary functions, such as degrading damaged protein molecules, which impair cellular function, and reintroducing the resulting building blocks such as amino acids into the metabolic system. This recycling process is known to keep cells young and, for instance, protects against protein aggregation, which occurs in neurodegenerative diseases. But what, apart from starvation, actually gets this important system going? Researchers from the Leibniz-Forschungsinstitut für Molekulare Pharmakologie (FMP) in Berlin have now discovered a previously unknown mechanism: osmotic stress, i.e. a change in water and ionic balance, triggers a response within hours, resulting in the increased formation and activity of autophagosomes and lysosomes. The work, now published in "Nature Cell Biology", describes the new signaling pathway in detail, and provides a crucial basis for improving our understanding of the impact environmental influences have on our cellular recycling and degradation system, and how this knowledge can be used for therapeutic purposes.

Our cells are occasionally in need of a "spring clean" so that incorrectly folded protein molecules or damaged cell organelles can be removed, preventing the aggregation of protein molecules. The mechanisms responsible for this removal are so-called "autophagy" and the closely related lysosomal system, the discovery of which earned the Nobel Prize for Medicine in 2016.

Quite a number of studies suggest that autophagy and lysosomes play a central role in aging and in neurodegenerative diseases. It is also generally agreed that fasting or food deprivation can kickstart this cellular degradation and recycling process. Other than that, little is known about how cells and organs control the quality of their protein molecules, and which environmental influences give the decisive signal to start cleaning up.

Water loss induces the formation of lysosomes and autophagy

A new trigger has now been identified by scientists from the Leibniz-Forschungsinstitut für Molekulare Pharmakologie (FMP) in Berlin: it is osmotic stress, i.e. the state in which cells lose water, that starts the system of autophagy and of lysosomal degradation. The study has just been published in the prestigious journal "Nature Cell Biology".

"When dehydration occurs, we suddenly see more lysosomes in the cells, i.e. more organelles where aggregated protein molecules are degraded," explained co-last author PD Dr. Tanja Maritzen. "It's a clever adaptation because cellular water loss simultaneously fosters the aggregation of proteins. These aggregates must be removed quickly to ensure the continued function of cells, and this works better when cells have more lysosomes."

Ion transporter NHE7 switches on newly discovered pathway

The researchers were able to observe what happens at the molecular level in dehydrated cells using astrocytes, star-shaped cells in the brain that assist the work of our nerve cells: in the event of dehydration, the ion transporter NHE7 translocates from the cell's interior, where it is normally positioned, to the cell's limiting plasma membrane that shields the cell from the outside. This leads to an influx of sodium ions into the cell, indirectly increasing the level of calcium - a key messenger - in the cytosol. The elevated level of calcium in turn activates a transcription factor called TFEB, which finally switches on autophagy and lysosomal genes. In other words, the system is initiated by the ion transporter NHE7, triggered by osmotic stress.

"This pathway was completely unknown," stated group leader and last author of the study, Professor Dr. Volker Haucke. "It is a new mechanism that responds to a completely different type of physiological challenge to those previously known."

Discovery of aggregated proteins in brain cells

Counter experiments revealed the importance of this pathway for human health: when the researchers removed a component of the signaling pathway, such as the transporter NHE7 or the transcription factor TFEB, aggregated protein molecules accumulated in astrocytes under osmotic stress conditions; they could not be broken down. In the study, this phenomenon was demonstrated for components such as synuclein - a protein that plays a role in Parkinson's disease.

"Neurodegenerative diseases in particular are a possible consequence of this pathway being switched on incorrectly," stated Tania López-Hernández, post-doc in Professor Haucke's and Dr. Maritzen's respective groups, and lead author of the study. "In addition, NHE7 is a so-called Alzheimer's risk gene. We now have new insights into why this gene could play such a critical role."

Another interesting point is that an intellectual disability in boys, passed on via the X chromosome, is due to a mutation in the NHE7 gene. The researchers suspect that the disease mechanism is linked to the degradation mechanism that has now been described. If only the switch, i.e. the NHE7 protein, were defective, an attempt could be made to turn on the pathway in another way. "It is very difficult in practice, and extremely expensive, to repair a genetic defect, but it would be conceivable to pharmacologically influence the NHE7 protein or to use other stimuli such as spermidine as a food supplement to switch on the autophagy system in these patients," explained cell biologist and neurocure researcher Volker Haucke.

Medical relevance of basic research

In order to carry out such interventions, however, the foundations need to be researched more thoroughly. For example, it is not yet clear how osmotic stress affects the translocation of NHE7 to the cell surface. It is also not known whether the entire degradation system is initiated or whether just individual genes are switched on, or which specific responses to osmotic stress are needed to activate the lysosomal system. Nor is it known which other stimuli may be triggered by this physiological process. The researchers now seek to answer all these questions in subsequent projects.

"Our work has shown us the fundamental impact that our water and ionic balance has on the capability of our cells and tissue to break down defective protein molecules," remarked Volker Haucke. "Now we want to gain a better understanding of this mechanism - also because it plays a major role in aging, neurodegeneration and the prevention of several other diseases."

Credit: 
Forschungsverbund Berlin

Researchers catch a wave to determine how forces control granular material properties

image: The image is a combination of two sets of data from X-ray scans of single crystal sapphire spheres. The reconstructed X-ray Computed Tomography (XRCT) data defines the surface of all 621 grains in the load frame. The far-field X-ray Diffraction (ff-XRD or 3DXRD) data provides a strain tensor that is mapped to each grain center. The combination and colorization of this data shows the distribution of stresses for each grain under load. This information was used as initial conditions for ultrasonic transmission measurements, where structure-property relationships were measured in-situ.

Image: 
Johns Hopkins University

Stress wave propagation through grainy, or granular, materials is important for detecting the magnitude of earthquakes, locating oil and gas reservoirs, designing acoustic insulation and designing materials for compacting powders.

A team of researchers led by a Johns Hopkins mechanical engineering professor used X-ray measurements and analyses to show that velocity scaling and dispersion in wave transmission arise from particle arrangements and the chains of force between them, while the reduction of wave intensity is caused mainly by particle arrangements alone. The research appears in the June 29 edition of the journal Proceedings of the National Academy of Sciences.

"Our study provides a better understanding of how the fine-scale structure of a granular material is related to the behavior of waves propagating through them," said Ryan Hurley, assistant professor of mechanical engineering at Johns Hopkins Whiting School of Engineering. "This knowledge is of fundamental importance in the study of seismic signals from landslides and earthquakes, in the nondestructive evaluation of soils in civil engineering, and in the fabrication of materials with desired wave properties in materials science."

Hurley conceived of this research while a postdoc at Lawrence Livermore National Laboratory, collaborating with a team that included LLNL physicist Eric Herbold. The experiments and analysis were later performed by Hurley and Whiting School postdoc Chongpu Zhai after Hurley moved to JHU, with experimental assistance and continued discussions with Herbold.

Structure-property relations of granular materials are governed by the arrangement of particles and the chains of forces between them. These relations enable design of wave damping materials and non-destructive testing technologies. Wave transmission in granular materials has been extensively studied and demonstrates unique features: power-law velocity scaling, dispersion and attenuation (the reduction of the amplitude of a signal, electric current, or other oscillation).
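For context, the power-law velocity scaling mentioned here has a standard textbook form (a general effective-medium result, not a finding of this paper): for a packing of spheres with Hertzian contacts, the effective elastic modulus grows with confining pressure, and with it the wave speed,

\[ K_{\mathrm{eff}} \propto P^{1/3} \quad\Rightarrow\quad v = \sqrt{K_{\mathrm{eff}}/\rho} \propto P^{1/6}. \]

Measured exponents in real granular packings are often closer to P^{1/4}, and explaining that discrepancy in terms of particle arrangements and force chains is one of the long-standing questions that experiments like this one address.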

Earlier research, dating back to the late 1950s, described "what" may be happening to the material underlying wave propagation, but the new research provides evidence for "why."

"The novel experimental aspect of this work is the use of in-situ X-ray measurements to obtain packing structure, particle stress and inter-particle forces throughout a granular material during the simultaneous measurement of ultrasound transmission," said Hurley. "These measurements are the highest fidelity dataset to-date investigating ultrasound, forces and structure in granular materials."

"These experiments, along with the supporting simulations, allow us to reveal why wave speeds in granular materials change as a function of pressure and to quantify the effects of particular particle-scale phenomena on macroscopic wave behavior," said Zhai, who led the data analysis efforts and was that paper's first author.

The research provides new insight into time- and frequency-domain features of wave propagation in randomly packed grainy materials, shedding light on the fundamental mechanisms controlling wave velocities, dispersion and attenuation in these systems.

Credit: 
Johns Hopkins University

USC scientists examine the impact of a very specific defect in DNA replication

image: A cell undergoes mitosis.

Image: 
iStock

USC researchers peering deep inside a living cell have discovered something surprising: Its system for preventing genetic damage linked to diseases can fail so badly that the cell would be better off without it.

It's a paradoxical finding because it challenges the idea that the tiny protein guardians of cell division always offer protection: the study shows that they can at times allow bad things to happen simply by doing their job too well.

The findings have important implications for treating cancer. In addition, glitches in DNA replication lead to other genetic diseases, including birth defects, autism and neurological impairments. A cell's ability to make new cells is also important to sustain tissues and organs.

"Generally, cells respond to errors during DNA replication by deploying monitoring proteins, called checkpoints, that serve to recognize the problem and stop cell division so that chromosome damage is prevented," said Susan Forsburg, senior author of the study and a Distinguished Professor of Biology at the USC Dornsife College of Letters, Arts and Sciences. "This study makes the unexpected finding that in certain forms of replication stress, an active checkpoint actually allows cells to divide, causing worse damage than if it were missing entirely."

The findings appear in a scientific paper published today in the journal Molecular and Cellular Biology.

Investigating the aftermath of DNA replication problems

This is fundamental research into the principles of how cells operate, how they divide to form new cells and how built-in molecular checks and balances ensure that cell division occurs correctly. It's the sort of foundation upon which clinicians and translational scientists can find better ways to treat diseases.

"We are interested in how problems in DNA replication lead to bad things for cells and people, including cancer," Forsburg said.

For the study, the scientists utilized a type of yeast -- Schizosaccharomyces pombe -- with chromosomes similar to those in humans and that uses the same genes to maintain those chromosomes. It's been proven as an important model for cell division.

"The analogy I use is comparing a Mercedes and a lawnmower," Forsburg said. "If you're trying to understand the basic principles of an internal combustion engine, the lawnmower is a simplified version of the Mercedes engine. The yeast uses the same genes we do, and every gene we study has a human equivalent, with nearly all of them linked to cancer."

In the study, the scientists examined how cells respond to a specific replication defect under the supervision of an important gene called CDS1. The gene functions like a guardian for the DNA replication process, and it has an analog in humans called CHEK1. As a checkpoint, the gene ensures the DNA is smoothly copied before cell division. Usually, when something goes wrong that hinders DNA replication, the gene stops cells from dividing until they can fix the problem. Otherwise, cells would divide without properly replicated DNA, which has deadly consequences.

Cancer treatments often combine drugs that hinder DNA replication with compounds that block the checkpoint, like a poison pill to drive the tumor cells into a lethal division. This study finds a condition where that poison pill backfires.

"We found that the active checkpoint actually allowed the cells to divide abnormally," Forsburg said. "Unexpectedly, when we deleted the replication checkpoint, the mutant cells didn't divide because another damage control mechanism kicked in to stop the unwanted cell divisions."

Study will lead to better understanding of cells, improved cancer treatments

How can a gene that seeks to help keep the cell healthy mess up so badly that it perpetuates harm to the tissue or organ? In certain instances, it seems the checkpoint gets blindsided and continues doing its job when it would be better if it took the day off.

Forsburg explained: "Our experiments examined a very specific defect in DNA replication, and it appears that this created a perfect storm. The checkpoint didn't know what to do with it. Its best effort to protect the cells actually allowed them to slip into lethal divisions."

Credit: 
University of Southern California

Pernicious effects of stigma

The recent killings of unarmed individuals such as George Floyd, Breonna Taylor, Ahmaud Arbery and Tony McDade have sparked a national conversation about the treatment of Black people -- and other minorities -- in the United States.

"What we're seeing today is a close examination of the hardships and indignities that people have faced for a very long time because of their race and ethnicity," said Kyle Ratner, an assistant professor of psychological and brain sciences at UC Santa Barbara. As a social psychologist, he is interested in how social and biological processes give rise to intergroup bias and feelings of stigmatization.

According to Ratner, "It is clear that people who belong to historically marginalized groups in the United States contend with burdensome stressors on top of the everyday stressors that members of non-disadvantaged groups experience. For instance, there is the trauma of overt racism, stigmatizing portrayals in the media and popular culture, and systemic discrimination that leads to disadvantages in many domains of life, from employment and education to healthcare and housing to the legal system."

Concerned by negative rhetoric directed at Latinx individuals, Ratner and his lab have investigated how negative stereotype exposure experienced by Mexican-American students can influence the way their brains process information.

In a recent paper published in the journal Social Cognitive and Affective Neuroscience, the research team focuses on how negative stereotype exposure affects responses to monetary incentives. Their finding: The brains of Mexican-American students exposed to negative stereotypes anticipate rewards and punishments differently than the brains of students who were not exposed. The discovery, he said, is the first step in a series of studies that could help researchers understand the neural pathways through which stigma can have detrimental effects on psychological and physical health.

'I'm so tired of this'

Much existing research has focused on how experiencing stigma and discrimination triggers anger, racing thoughts and a state of high arousal. Although Ratner believes this is a reaction that people experience in some contexts, his recent work focuses on the psychological fatigue of hearing your group disparaged. "It's this feeling of 'oh, not again,' or 'I'm so tired of this,'" he said, describing a couple of reactions to the stress of managing self-definition in the face of negative stereotypes.

When he noticed several years ago that experiencing stigma can produce this sense of withdrawal and resignation, Ratner was reminded of work he conducted earlier in his career relating stress to depressive symptoms.

"In work I was involved in over a decade ago, we showed that life stress can be associated with anhedonia, which is a blunted sensitivity to positive and rewarding information, such as winning money," he said. "If you're not sensitive to the rewarding things in life, you're basically left being sensitive to all the frustrating things in life, without that positive buffer. And that's one route to depression."

Given that experiencing stigma can be conceptualized as a social stressor, Ratner wanted to investigate whether negative stereotype exposure might also relate to sensitivity to reward.

Reward Processing in the Brain

Ratner and his colleagues focused on the nucleus accumbens, a sub-cortical brain region that plays a central role in anticipating pleasure -- the "wanting" stage of reward processing that motivates behaviors.

Using functional MRI to measure brain activity, the researchers asked Mexican-American UCSB students to view sets of video clips in rapid succession and then gave these students the opportunity to win money or avoid losing money.

In the control group, the viewers were shown news and documentary clips of social problems in the United States that were relevant to the country in general -- childhood obesity, teen pregnancy, gang violence and low high school graduation numbers.

In the stigmatized group, subjects were shown news and documentary clips covering the same four domains, but that singled out the Latinx community as the group specifically at risk for these problems.

"These videos were not overtly racist," Ratner said of the stigmatizing clips. Rather, he explained, the videos tended to spend a disproportionate amount of attention on the association between specific social issues and their effects in the Latinx community, rather than presenting them as problems of American society as a whole. The clips were mostly from mainstream news agencies -- the newscasters and narrators, he said, appeared to be "presenting facts as they understood them," but the content of these clips reinforced negative stereotypes.

After repeated exposure to negative stereotypes, the research participants were asked to perform a Monetary Incentive Delay (MID) task, which required them to push a button whenever they saw a star on the screen. Pressing the button fast enough resulted in either winning money or avoiding losing money.

In those individuals shown the stigmatizing clips, the nucleus accumbens responded differently to waiting for the star to appear, as compared to those who viewed the control clips, a pattern that suggests that negative stereotype exposure was "spilling-over" to affect how participants were anticipating winning and losing money.

"We saw that something about watching these stigmatizing videos was later influencing the pattern of response within this brain region," Ratner said. This suggests that the nucleus accumbens is representing the potential of winning and losing money differently in the brains of those who previously saw the stigmatizing videos than those who didn't, he explained. The researchers also found that the group that saw the stigmatizing videos reported lower levels of arousal right before starting the MID task, consistent with stigmatizing experiences having a demotivating effect.

"The nucleus accumbens is very important for motivated behavior, and sparks of motivation are important for many aspects for everyday life," Ratner said. A loss of motivation, he continued, is often experienced by those who perceive their situation as out of their control.

One reason negative stereotypes in the media and popular culture are so problematic is they make people feel stigmatized even when they are not personally targeted in their daily life by bigoted people, he explained. "It becomes something you can't escape -- similar to other stressors that are out of people's control and have been shown to cause anhedonia."

Ratner is careful to point out that this study merely scratches the surface of brain processes involved in intergroup reactions such as stigma -- how the brain processes social motivations is far more complex and necessitates further study.

"People shouldn't generalize too much from this specific finding," he said, pointing out that his sample of 40 Mexican-American college students, while not small for a brain imaging study, represents only a small segment of a far more diverse community. When his lab is back up and running following the COVID-19 related cessation, he said, he and his collaborators hope to study a larger, non-student sample.

Credit: 
University of California - Santa Barbara

Human-Artificial intelligence collaborations best for skin cancer diagnosis

Artificial intelligence (AI) improved skin cancer diagnostic accuracy when used in collaboration with human clinical checks, an international study including University of Queensland researchers has found.

The global team tested for the first time whether a 'real world', collaborative approach involving clinicians assisted by AI improved the accuracy of skin cancer clinical decision making.

UQ's Professor Monika Janda said the highest diagnostic accuracy was achieved when crowd wisdom and AI predictions were combined, suggesting human-AI and crowd-AI collaborations were preferable to individual experts or AI alone.

"This is important because AI decision support has slowly started to infiltrate healthcare settings, and yet few studies have tested its performance in real world settings or how clinicians interact with it," Professor Janda said.

"Inexperienced evaluators gained the highest benefit from AI decision support and expert evaluators confident in skin cancer diagnosis achieved modest or no benefit.

"These findings indicated a combined AI-human approach to skin cancer diagnosis may be the most relevant for clinicians in the future."

Although AI diagnostic software has demonstrated expert-level accuracy in several image-based medical studies, it has remained unclear whether its use improves clinical practice.

"Our study found that good quality AI support was useful to clinicians but needed to be simple, concrete, and in accordance with a given task," Professor Janda said.

"For clinicians of the future this means that AI-based screening and diagnosis might soon be available to support them on a daily basis.

"Implementation of any AI software needs extensive testing to understand the impact it has on clinical decision making."

Researchers trained and tested a convolutional neural network to analyse pigmented skin lesions, and compared its findings with human evaluations under three types of AI-based decision support.
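As a rough illustration of what combining AI predictions with crowd wisdom can look like, the sketch below averages a CNN's malignancy probability with the fraction of human raters who judged a lesion malignant. This is a generic ensemble written for illustration; the equal weighting and vote format are assumptions, not the decision-support schemes actually tested in the study.

# Illustrative sketch of blending an AI prediction with crowd wisdom.
# The equal weighting and vote format are assumptions, not the study's method.

def combined_malignancy_score(cnn_probability, human_votes, ai_weight=0.5):
    """Blend a CNN probability with the fraction of raters judging the lesion malignant."""
    crowd_probability = sum(human_votes) / len(human_votes)
    return ai_weight * cnn_probability + (1 - ai_weight) * crowd_probability

# Example: the CNN outputs 0.70 and three of five raters say malignant.
votes = [True, True, True, False, False]
print(round(combined_malignancy_score(0.70, votes), 2))  # 0.65 with equal weighting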

Credit: 
University of Queensland

Bleaching affects aquarium corals, too

image: The Australian saucer coral (Homophyllia australis) lives just off the coast of Mackay. With the worldwide demand for Australian aquarium corals increasing, a single aquarium specimen of Homophyllia australis fetched more than $8,000 AUD in Japan in 2017. However, bleaching events kill this particular species.

Image: 
Ciemon Caballes.

A new study illustrates the potential impact of recurrent heatwaves on coral species collected by the Australian aquarium coral industry.

The study's lead author, Professor Morgan Pratchett from the ARC Centre of Excellence for Coral Reef Studies at James Cook University (Coral CoE at JCU), says there are active and expanding aquarium coral fisheries operating across the country in Western Australia, the Northern Territory and Queensland.

"With widespread coral bleaching again affecting the Great Barrier Reef, and also occurring on coral reefs in Western Australia, there is inevitable concern regarding the sustainability and defensibility of ongoing coral harvesting," Prof Pratchett said.

Prior to the study, scientists didn't know much about the temperature sensitivity and bleaching susceptibility of Australian aquarium corals.

The researchers tested these parameters on six of the most important exported coral species from Australia.

"We found two of the most striking species were particularly susceptible and died at the temperatures you would expect when bleaching occurs," Prof Pratchett said.

"These corals are most abundant within the nearshore habitats of the southern Great Barrier Reef--an area that bleached earlier this year."

One of these species is the Australian saucer coral (Homophyllia australis), found just off the coast of Mackay.

With the worldwide demand for Australian aquarium corals increasing, a single aquarium specimen of Homophyllia australis fetched more than $8,000 AUD in Japan in 2017.

The study found the other, more widespread, aquarium corals were able to cope with higher temperatures. They bleached but didn't die--the corals are already regularly exposed to extreme temperatures in a wide variety of different environments, including shallow tidal pools in north Western Australia.

"Understanding the differential susceptibilities of different coral species to environmental change is a very important aspect of managing coral fisheries," Prof Pratchett said.

Australian coral fisheries are often the first to provide reports of coral bleaching across diverse reef environments, as they need to respond to changes in coral health.

"Those in the industry don't collect bleached corals and actively avoid areas where there has been recent and severe mass bleaching," Prof Pratchett said.

He said the study, which was supported by the Fisheries Research and Development Corporation, highlights the need for more specific and targeted in-situ monitoring for these popular aquarium corals.

This is especially crucial with the increasing threat posed by ongoing environmental change.

Credit: 
ARC Centre of Excellence for Coral Reef Studies

Casting a wider net: New system measures brain activity of several zebrafish concurrently

image: Example of EEG signals detected during baseline recording and during the application of a convulsant. (a) shows the EEG signal from a fish that had not been treated with an anti-epileptic drug (AED), while (b) shows the signal from a fish that had been treated with an AED. AED-treated and untreated fish show clear differences in their EEG signals.

Image: 
DGIST

Before a drug can be used to treat patients, it goes through several rounds of testing for efficacy and toxicity, which begin in animal models. Zebrafish, a tiny species of fish native to South Asia, are cheaper to maintain and easier to breed than laboratory mice or other animal models. They also share many disease-related genes with us, particularly those involved in neurological disorders. This makes them a great model for drug development.

While different experimental methodologies have been developed to study the effects of drugs in zebrafish, they have several major shortcomings. Most notably, behavior monitoring, a popular approach that involves observing the behavior of zebrafish in a tank after administering a drug to them, cannot be used to accurately quantify their neurological responses. Techniques that can be used for more accurate quantification, such as electroencephalography (EEG)--measuring the electrical brain activity non-invasively using surface electrodes or invasively using needle electrodes--have been developed, but these have only been applicable to either larval zebrafish or single adult individuals at a time.

Professor Sohee Kim, from the Department of Robotics Engineering at Daegu Gyeongbuk Institute of Science and Technology (DGIST), Korea, has been pioneering developments in EEGs for zebrafish for the past several years. In a recent study published in Biosensors and Bioelectronics, she led a team in developing a novel system that--for the first time ever--makes it possible to take EEG measurements from multiple adult zebrafish simultaneously.

Their setup makes one key change to existing designs: it separates the drug-delivery and fish-holding units. Fish are held in several small plastic compartments called "fish fixers," each of which can hold an anesthetized zebrafish. Drugs and water are delivered directly into each fish's mouth through small tubes at controlled rates, and the electrical signals from the fish brains are recorded using flexible electrodes attached to the surface of each fish's head. With the drug-delivery and fish-fixing units separated, physical and environmental artefacts no longer interfere with the EEG recording process, yielding clear, simultaneous recordings of the drugs' biological effects on all the fish in the fixing unit. Yuhyun Lee, Prof Kim's student and first author of this study, remarks: "Our system is expected to reduce the time and cost at the early stages of drug development and increase success rates when moving on to experiments in mice."

The team administered an epilepsy-inducing drug to fish that had and had not been given a known anti-epileptic drug, valproic acid, beforehand. The resulting EEG recordings successfully mapped the changes in the fish's brain signals during the ensuing epileptic attacks of varying severity, demonstrating that the system can accurately measure the efficacy of this anti-epileptic drug.
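To give a sense of how such multi-fish EEG recordings might be quantified, the sketch below compares band power before and after a convulsant for each recording channel. The sampling rate, frequency band and synthetic data are assumptions for illustration only; this is not the analysis pipeline reported in the study.

import numpy as np
from scipy.signal import welch

FS = 1000  # assumed sampling rate in Hz (illustrative)

def band_power(signal, fs=FS, band=(1.0, 30.0)):
    """Average spectral power of one EEG channel within a frequency band."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def power_change(baseline, post_drug):
    """Per-fish ratio of post-drug to baseline band power; values well above 1 suggest increased activity."""
    return np.array([band_power(post) / band_power(base)
                     for base, post in zip(baseline, post_drug)])

# Synthetic example: 4 fish, 10 seconds of recording each.
rng = np.random.default_rng(0)
baseline = rng.normal(size=(4, 10 * FS))
post_drug = 3.0 * rng.normal(size=(4, 10 * FS))  # larger fluctuations mimic convulsant effects
print(power_change(baseline, post_drug))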

Prof. Kim comments, "Using EEG recordings, an accurate and quantifiable measure of brain activity, we expect our system to facilitate the mass screening of drugs for neurological diseases such as sleep disorders, epilepsy, and autism." This study will hopefully accelerate the development processes of drugs for currently intractable diseases.

Credit: 
DGIST (Daegu Gyeongbuk Institute of Science and Technology)

Researchers employ antennas for angstrom displacement sensing

image: Schematic of the near-field interaction between the antennas and the incident light field.

Image: 
ZANG Tianyang et al.

The Micro-nano Optics and Technology Research Group, led by Prof. LU Yonghua and Prof. WANG Pei at the University of Science and Technology of China (USTC) of the Chinese Academy of Sciences (CAS), realized nanometric displacement measurement through the interaction between the illuminating optical field and optical antennas. The study was published in Physical Review Letters.

Optical metrology is of particular significance because it allows noncontact, high-precision measurement of distance or displacement. However, while interferometric methods are widely used for longitudinal displacement measurement, such as in laser radar, laser ranging and small-vibration measurement, lateral displacement perpendicular to the beam direction is difficult to detect with conventional methods.

The researchers presented a novel technique based on directional excitation of surface plasmon polaritons (SPPs).

They first excited asymmetric SPPs with a pair of optical slot antennas under illumination by a focused Hermite-Gaussian (HG) (1,0) mode beam. Then, by detecting the SPP leakage at the back focal plane of an oil-immersion objective, they sensitively measured the transverse displacement.

Unlike previous strategies that retrieve the free-space scattering signal, which remains challenging even with weak-measurement techniques, the SPP leakage pattern is spatially separated from the forward scattering of the slot antennas and can therefore be used to monitor displacements at the back focal plane.
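In spirit, reading off the displacement amounts to measuring an intensity imbalance between the leakage propagating in opposite directions and mapping it to a position via a calibration. The sketch below shows that general idea; the linear calibration constant is a made-up placeholder, not a value from the paper.

# Generic sketch: estimate a lateral displacement from a left-right intensity imbalance.
# The calibration factor is a hypothetical placeholder, not taken from the paper.

NM_PER_UNIT_ASYMMETRY = 100.0  # assumed linear calibration (nm per unit of asymmetry)

def asymmetry(i_left, i_right):
    """Normalized imbalance between the two directional leakage intensities."""
    return (i_left - i_right) / (i_left + i_right)

def displacement_nm(i_left, i_right):
    """Estimate the displacement assuming a linear response for small offsets."""
    return NM_PER_UNIT_ASYMMETRY * asymmetry(i_left, i_right)

# Example: a small imbalance (1.02 vs 0.98) maps to ~2 nm under this assumed calibration.
print(round(displacement_nm(1.02, 0.98), 2))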

The resolution of their system reaches the subwavelength level (~0.3 nm), while the ultimate resolution could be down to the angstrom level. The technique is potentially applicable in superresolution microscopy, semiconductor lithography, and the calibration of nanodevices.

Credit: 
University of Science and Technology of China