Tech

Contact lenses for diagnostic and therapeutic use

image: Enlarged prototype of biosensor lens with reservoirs.

Image: 
Khademhosseini Lab

(LOS ANGELES) - Human bodily fluids and secretions contain molecules known as biomarkers that carry a wealth of information about the body's health and the presence of disease. Among secretions such as tears, sweat and saliva, tears are considered the best source of biomarkers, with concentrations similar to those found in blood. Tears are also sterile, readily available and less susceptible to the damaging effects of temperature change, evaporation and variations in secretion rate.

Measurable and clinically useful biomarkers found in tears include sodium ions, an indicator of dry-eye disease, and glucose molecules, an early diagnostic tool for diabetes. Measuring the pH of tears can also be used to check for cell viability, drug effectiveness and signs of disease.

Therefore, the ability to collect tears in an effective way and to measure their pH and levels of biomarkers in real-time is highly desirable. One approach that is actively being explored is the idea of contact lens biosensors. Such contact lenses could be designed to include tiny channels on their surfaces for guiding the flow of tears into tiny reservoirs for collection and monitoring.

Pliable and transparent materials, known as hydrogels, are currently being used commercially to make contact lenses; they are easy to work with and cost effective. However, to date, they have not been shown to be ideal materials with which to fashion channels and reservoirs, as they are sensitive to the fabrication techniques needed. In previous efforts, hydrogels have been vulnerable to deformities caused by the solvents or the temperature and vacuum conditions required by some fabrication methods. Other methods have produced hydrogel channels with rough surfaces or non-uniform dimensions.

A collaborative team, which includes a group from the Terasaki Institute for Biomedical Innovation, has developed a fabrication method to meet all the challenges in making a hydrogel contact lens for biomarker sensing. The team began by optimizing the components of the hydrogel to obtain elastic characteristics which would allow it to be engineered into various shapes with a smooth surface profile. They next fashioned microchannels in the hydrogel with the use of a 3D printed mold. The final step in the fabrication process was to enclose the hydrogel channels by bonding an additional layer of hydrogel onto the microchannel surface.

Once the successful prototype was completed, it was extensively tested for its performance in channeling and collecting fluids. Flow rates of artificial tears in the channels were measured at different levels of hydration, with zero flow measured at complete dehydration and full spontaneous flow observed at full hydration.

A noteworthy observation during these tests was that when the hydrogel was mildly dehydrated, liquid flow in the channels would stop, but when additional rhythmic pressure was applied, the flow would resume. This observation supports the hypothesis that eye blinking would provide the pressure and additional hydration needed to sustain tear flow in the contact lens, and therefore in the eye.

"In addition to our successful fabrication of microchannels in commercial contact lens hydrogels, we also found that eye-blinking pressure may facilitate tear exchange in the lens through these microchannels, said Shiming Zhang, Ph.D., from the Terasaki Institute's research team. "This is an exciting finding because it opens the possibility for the lenses to be a means of preventing dry eye disease, a condition commonly found in contact lens wearers. We aim to develop a patented contact lens that actively treats this condition by enhancing tear flow in the eye."

The team next prototyped sensors to collect artificial tears flowing through the microchannels and to measure their pH. Sodium levels were also tested, and the results showed sodium detection within the range predicted and acceptable for diagnostic purposes.

Additional goals would be to fine-tune factors such as humidity, hydrogel hydration and applied pressure in order to optimize the flow rates and dynamics of the biosensing contact lens. There are also plans to experiment with smaller channels in thinner hydrogel films before a final contact lens is devised.

"The production of the successful prototype described here and the continuing efforts to perfect its capabilities mark a significant advance in contact lens biosensing," said Ali Khademhosseini, Ph.D., director and CEO of the Terasaki Institute. "Such innovative work fits in well with our institute's mission to create solutions that restore or enhance the health of individuals."

Credit: 
Terasaki Institute for Biomedical Innovation

Researchers minimize quantum backaction in thermodynamic systems via entangled measurement

image: Conceptual design of the quantum work and its experimental realization

Image: 
WU Kangda et al.

Led by academician Prof. GUO Guangcan of the Chinese Academy of Sciences (CAS), the groups of Prof. LI Chuanfeng and Prof. XIANG Guoyong at the University of Science and Technology of China (USTC), CAS, in cooperation with theoretical physicists from Germany, Italy and Switzerland, conducted the first experiment to use entangled collective measurement to minimize quantum measurement backaction in a photonic system.

The result was published online in Physical Review Letters on Nov. 16.

When an observable is measured twice on an evolving coherent quantum system, the first measurement usually changes the statistics of the second, because it destroys the system's quantum coherence. This effect is called measurement backaction.

Theoretical work by Dr. Martí Perarnau-Llobet in 2017 pointed out that, without violating the basic requirements of quantum thermodynamics, measurement backaction cannot be completely avoided, but the backaction caused by projective measurement can be reduced through collective measurement.

Building on these theoretical results, Prof. XIANG and coauthors realized quantum collective measurement and successfully observed a reduction of measurement backaction in 2019.

Since the quantum collective measurements used in previous works were separable, a natural question arose: is there an entangled collective measurement that reduces backaction even further?

Prof. XIANG and his theoretical collaborators studied the optimal collective measurement in the two-qubit system. They found that an optimal entangled collective measurement exists in theory, which minimizes the backaction in a two-qubit system; in the case of strongly coherent evolution, the backaction can be suppressed to zero.

They then designed and implemented this entangled measurement via a photonic quantum walk with a fidelity of up to 98.5%, and observed the reduction of the backaction of projective measurement.

This work is significant for the study of collective measurement and quantum thermodynamics. The referees described the work as representing a major advance in the field: "The experiment is well executed, as the results follow closely what one would expect from an ideal implementation. Overall, I find the article a highly interesting contribution to the topic of quantum backaction and a great combination of new theory and flawless experimental implementation."

Credit: 
University of Science and Technology of China

Strain engineering of 2D semiconductor and graphene

image: a, Selected results showing the tunable properties under strain. From left to right: the changed band structure of monolayer TMDC under biaxial strain, redshifted PL and absorption spectra of monolayer TMDC under tensile strain, and an illustrative scenario for the "funnel" effect in a wrinkled TMDC. b, Selected sketches of the setup or working principle of the strain-engineering technologies. Top-left panel: experimental setup for a bending system to apply uniaxial strain to 2D materials. Top-right panel: a rolling technology to apply strain to graphene. Bottom-left panel: a piezoelectric substrate-based technology to apply biaxial strain to 2D materials. Bottom-right panel: a technology to form a wrinkled TMDC. c, Selected practical applications. Left panel: schematic of a strain sensor based on a PDMS fiber incorporating graphene nanocomposites. Middle panel: the strain-dependent optical loss of the strain sensor in the left panel, used to measure the movement of the human body. Right panel: a PL map of a strain-induced single-photon emitter. The inset evidences its single-photon emission behavior.

Image: 
by Zhiwei Peng, Xiaolin Chen, Yulong Fan, David J. Srolovitz, Dangyuan Lei

Strain engineering refers to a class of material-processing technologies that regulate the properties of materials, or optimize the performance of related devices, through inherent or externally applied strain. In recent years, with the development of 2D materials, strain engineering of 2D materials (transition metal dichalcogenides (TMDCs), graphene, etc.) has attracted significant attention. Compared with the strain engineering of traditional bulk materials, the atomic thickness of 2D materials makes them better suited as a platform for strain-engineering research and builds a bridge between strain engineering and nanophotonics. They therefore merit attention from many points of view, from fundamental physics to practical applications.

In a new paper published in Light: Science & Applications, a team of scientists led by Dr. Dangyuan Lei from the Department of Materials Science and Engineering, City University of Hong Kong, China, and co-workers have written a review article that comprehensively summarizes recent developments in this burgeoning field. The review first introduces traditional macroscopic strain-field theory. It then discusses the band-structure changes of strained 2D semiconductors (TMDCs) and strained graphene, and reviews the optical responses observed under different kinds of strain fields. Subsequently, the paper summarizes the strain-engineering techniques that can apply different kinds of strain to specific 2D materials. Finally, it presents diverse applications in optical devices, optoelectronics and other photonics applications, and discusses the open problems in this field and prospects for its future development.

Traditional strain engineering has mainly focused on silicon, germanium and other 3D bulk materials, which can typically sustain only small strains before fracture owing to their bulk nature. The rising 2D materials with atomic thickness (such as graphene and TMDCs) have therefore come into view, and their strain engineering has been widely studied in both academia and industry. Compared with traditional 3D materials, the two-dimensional character of 2D materials endows them with quite different and novel characteristics, making their strain engineering more attractive. The scientists summarize those unique properties of 2D materials:

"Based on the following three points, we think 2D materials as a perfect platform for strain engineering: (1) 2D materials have better mechanical properties (deformation capacity), which means they can sustain larger strain before fracture when compared to bulk materials; (2) 2D materials have better optical properties due to their strong exciton effects, which benefits their further applications in photonics devices; and (3) 2D materials have more variable deformation patterns. Their atomic thickness properties allow them to achieve out-of-plane strain, which is almost impossible in 3D bulk materials, allowing 2D materials to possess more deformation patterns, such as uniaxial and biaxial in-plane strain, wrinkle, fold, and localized non-uniform strain."

"Since the types of the applied strain are varied, the changes of electrical and optical properties are different. In general, we can observe the redshifted (blueshifted) PL spectra from the tensile (compressive) strained 2D TMDCs. Similarly, we can observe the shift and splitting of the Raman spectra from strained graphene. Besides, many novel optical responses, such as 'funnel' effect, single-photon emission and tunable second-harmonic generation, emerge under some special strain distribution." they added.

"There are various technologies to apply strains to 2D materials. Based on the type of the induced strain, we usually classified them into three categories, namely, the uniaxial strain technologies, biaxial strain technologies and local strain technologies. We should pay more attention to local strain technologies. They actually give a new way to control photons in an ultrasmall area. In conclusion, the flexibility and optical properties of 2D materials (compared to their bulky counterparts) open the door for the development of potentially important new strain-engineered photonic applications." the scientists forecast.

Credit: 
Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS

A chemist from RUDN University synthesized analogs of natural toxins

image: A chemist from RUDN University suggested a simple and accurate method for the synthesis of analogs of two natural toxins, antofine and septicine. This universal approach can also be used to obtain other biologically active substances for medicinal chemistry.

Image: 
RUDN University

A chemist from RUDN University suggested a simple and accurate method for the synthesis of analogs of two natural toxins, antofine and septicine. This universal approach can also be used to obtain other biologically active substances for medicinal chemistry. The research was published in the Organic Letters journal.

Antofine and septicine have antibacterial and antitumor properties and therefore can be used in the pharmaceutical industry. However, it is difficult to obtain them directly from natural plant sources: they are hydrophobic and can sometimes be unexpectedly toxic. A chemist from RUDN University developed a universal method to produce the 2-pyridone derivatives on which antofine and septicine are based.

2-pyridone and 4-pyrimidone are cyclic molecules with one oxygen atom attached to the ring. The ring of the former compound contains one nitrogen atom; that of the latter, two. Both substances are used as molecular frameworks for medicinal drugs. By attaching additional rings and atoms to them, one can obtain compounds with antitumor, antiviral, anti-inflammatory, and antimalarial properties. The new method of 2-pyridone and 4-pyrimidone production consists of just two steps.

The first step is the so-called four-component Ugi reaction, by which peptide fragments (analogs of proteins) can be obtained from four simple substances. In his experiments, the chemist from RUDN University ran the Ugi reaction at room temperature for 12 hours with a range of different substances. As a result, around 60 different compounds were obtained, in all of which a hydrocarbon ring was attached to other groups of atoms by peptide bonds.

In the next step, several new rings had to be created in the peptide fragment, at least one of them containing nitrogen atoms. To achieve this, the chemist suggested using gold-based catalysts. Of the five catalysts tested, one produced the best 2-pyridone yield (75%). This result was achieved in the reaction with the first of the synthesized peptide fragments. Further studies showed that using other first-stage products can lead to yields of up to 93% for products with the required molecular frameworks. The same approach was used for the synthesis of 4-pyrimidone derivatives.

"The main advantage of our method is the ability to develop organic substances based on heterocycles with different functional groups. This advantage appears at the stage of the four-component Ugi reaction, as many simple and affordable reagents can be used in it. The cyclization reaction suggested by us can involve reagents with different functional groups. The simplicity of this approach and a wide range of potential results make it favorable for medicinal chemistry," said Erik Van der Eycken, the head of the Joint Institute for Chemical Research at RUDN University.

Credit: 
RUDN University

Changes in fire activity are threatening more than 4,400 species globally

image: Wildfire near a wind farm in northeastern Spain.

Image: 
Lluis Brotons

Changes in fire activity are putting at risk more than 4,400 species across the globe, says a new paper led by the University of Melbourne, involving 27 international researchers.

"Those species include 19 per cent of birds, 16 per cent of mammals, 17 per cent of dragonflies and 19 per cent of legumes that are classified as critically endangered, endangered or vulnerable," said lead author, Dr Luke Kelly, a Senior Lecturer in Ecology and Centenary Research Fellow.

"That's a massive number of plants and animals facing threats associated with fire."

The paper, Fire and biodiversity in the Anthropocene, published in Science, found that species categorized as threatened by an increase in fire frequency or intensity include the orangutan in Indonesia and the mallee emu-wren in Australia.

"Recent fires have burned ecosystems where wildfire has historically been rare or absent, from the tropical forests of Queensland, Southeast Asia and South America to the tundra of the Arctic Circle," Dr Kelly said.

"Very large and severe fires have also been observed in areas with a long history of recurrent fire, and this is consistent with observations of longer fire seasons and predictions of increased wildfire activity in the forests and shrub lands of Australia, southern Europe and the western United States."

The research team also found a striking example from Australia: the total area burnt by bushfires on the eastern seaboard from August 2019 to March 2020, 12.6 million hectares, was unprecedented in scale.

However, some species and ecosystems are threatened when fire doesn't occur. Frequent fires, for example, are an important part of African savanna ecosystems and less fire activity can lead to shrub encroachment, which can displace wild herbivores such as wildebeest that prefer open areas.

"Understanding what's causing changes in different places helps us to find effective solutions that benefit people and nature," Dr Kelly said.

Researchers, including 27 authors from 25 institutions around the world (six of them from the University of Melbourne), identified three main groups of human drivers transforming fire activity and its impacts on biodiversity: global climate change, land use and biotic invasions. This means that people and governments around the world need to act and confront the diverse changes to the environment that are occurring.

"It really is time for new, bolder conservation initiatives," Dr Kelly said. "Emerging actions include large-scale habitat restoration, reintroductions of mammals that reduce fuels, creation of low-flammability green spaces and letting bushfires burn under the right conditions. The role of people is really important: Indigenous fire stewardship will enhance biodiversity and human well-being in many regions of the world."

Michael Clarke, Professor of Zoology at La Trobe University, who supported the study, echoed Dr Kelly's call, saying "Our research highlights the magnitude of the challenge fire poses to animals, plants and people, given worsening climatic conditions - a conclusion echoed in the recent Royal Commission report into last summer's fires."

Credit: 
University of Melbourne

Some parents prioritize Thanksgiving traditions over reducing COVID-19 risks

image: How parents plan to reduce COVID-19 risks at Thanksgiving.

Image: 
C.S. Mott Children's Hospital National Poll on Children's Health at Michigan Medicine.

ANN ARBOR, Mich. - For some families, one of the most difficult steps in reducing COVID-19 risks has been keeping children apart from grandparents and other extended family members.

And that may be especially true during the holiday season, as novel coronavirus cases rapidly accelerate across the nation and public health officials discourage gatherings to help slow the spread of the deadly virus.

Still, some parents may prioritize continuing Thanksgiving Day traditions with their children over reducing transmission risks, a new national poll suggests.

One in three parents say the benefits of gathering with family for the holidays are worth the risk of spreading or getting the virus, according to the C.S. Mott Children's Hospital National Poll on Children's Health at Michigan Medicine.

But parents are weighing competing priorities. While over half indicate it is very important that their child sees extended family and shares in family holiday traditions, three-quarters also believe it's important to prevent the spread of COVID-19 at family gatherings.

"As COVID-19 cases spike, many families are struggling with whether and how to continue their holiday traditions while balancing risks and benefits," says Mott Poll co-director Sarah Clark, M.P.H.

Half of parents say COVID-19 has substantially decreased the amount of time their children spend with extended family members and some may be growing weary of these separations, Clark says.

"For many parents, holidays mean sharing special rituals across different generations and opportunities for children to connect with grandparents, cousins, and other relatives," Clark says.

"Our report suggests that while many children have spent less time with relatives during the pandemic, some parents may have a hard time foregoing holiday gatherings in order to reduce COVID-19 risks."

But with children returning to face-to-face school and other activities in some communities, it may be especially risky for them to reunite with older adults who are at highest risk of getting seriously sick.

"Families may need to consider alternative, safer ways to celebrate and preserve traditions in order to keep loved ones safe," Clark says.

The nationally representative report was based on responses from 1,443 parents with at least one child age 12 or under.

Shortening the guest list to reduce the risk of transmission

Among parents whose children usually see extended family on Thanksgiving, 61% still plan to meet in-person for the upcoming holiday. But only 18% plan to involve people traveling from out of state this season, even though 40% say gatherings usually involve people traveling that far.

Many parents who do plan to proceed with in-person celebrations say they will use different strategies to keep children and guests safe, according to the report.

Eighty-eight percent of parents say they will ask family members not to attend a Thanksgiving gathering if they have any COVID-19 symptoms or exposure. Meanwhile, two-thirds will not invite certain family members who have not been practicing safety precautions, such as mask wearing.

In assessing the safety precautions of extended families, parents need to ask about both adults and children. Given the differences in local and state regulations, parents should ask whether cousins or other school-age family members are attending in-person classes and activities. If they are, there should be specific questions about how consistently COVID-19 precautions are followed.

Parents should anticipate that some of these conversations will be uncomfortable, as there is uneven acceptance about precautions like wearing masks, Clark says.

"A key strategy to minimize the risk of COVID-19 transmission will be to limit the number of households who get together and choosing carefully who to include in Thanksgiving celebrations. Parents will also have to be vigilant about safety precautions," Clark says.

Maintaining distance at gatherings

Many parents also plan to take extra steps to protect older adults. Nine in 10 parents say Thanksgiving gatherings typically include grandparents, and three-quarters of parents will try to limit contact between their child and high-risk guests, including seniors or people with medical conditions.

Two-thirds of parents also plan to ask guests to maintain social distancing as much as possible.

However, experts caution that enforcing these rules may be a challenge.

"It may be difficult to maintain distance between children and high risk adults throughout a multi-day visit or even during a lengthy dinner," Clark says.

"Parents should be realistic about how feasible it will be to limit contact and think carefully about whether to gather in person with high-risk family members."

In families that choose to see extended family or other guests, parents should also talk with children in advance about how to celebrate safely, including a reminder about masks and social distancing. They may also want to talk about proper "voice etiquette" by limiting singing or yelling, as these actions can more easily spread viruses.

It is also recommended that children engage in outdoor activities for as much of the day as possible, she says.

Parents should consider substitute traditions

Alternatively, families may consider creative ways to sustain family traditions without in-person gatherings.

"The key for parents is to focus on elements of the celebration that represent family traditions or that seem most important to children," Clark says.

Ideas may include:

Talking with children about their favorite Thanksgiving foods, decorations or activities, and then using that input to plan a virtual celebration that includes family members in different locations.

If children mention a particular memorable holiday decoration displayed by grandparents, parents can encourage them to create their own version at home.

If children favor a family member's pumpkin pie, parents can help children make it at home, possibly with video calls with grandparents and other family members who can coach them through the process.

Arranging a group call or virtual gathering at a specific time for extended family to share stories or to have a family member give a blessing before Thanksgiving dinner.

"We all know that large public gatherings carry great risks of spreading COVID-19. But small and casual social gatherings where people feel most 'safe' are also part of what has been fueling transmission," Clark says.

"With COVID-19 cases increasing in every state, it is essential that all family members do their part to prevent further spread. That may mean celebrating the holidays a little differently this year."

Credit: 
Michigan Medicine - University of Michigan

Research shows the intrinsically nonlinear nature of receptive fields in vision

The receptive field (RF) of a neuron is the region of space in which the presence of a stimulus alters that neuron's response. The responses of visual neurons, as well as visual perception phenomena in general, are highly nonlinear functions of the visual input (in mathematics, nonlinear systems represent phenomena whose behaviour cannot be expressed as the sum of the behaviours of its descriptors).

Conversely, the vision models used in science are based on the notion of a linear receptive field, and in artificial intelligence and machine learning, artificial neural networks, which are built on classical models of vision, also use linear receptive fields. "Modelling vision based on a linear receptive field poses several inherent problems: it changes with each input, it presupposes a set of basis functions for the visual system, and it conflicts with recent studies on dendritic computations", asserts Marcelo Bertalmío, first author of a study recently published in Scientific Reports, a journal of the Nature group.

The paper proposes modelling the receptive field in a nonlinear manner, introducing the concept of the intrinsically nonlinear receptive field, or INRF. The study was conducted by Marcelo Bertalmío, Alex Gómez-Villa, Adrián Martín, Javier Vázquez-Corral and David Kane, researchers with the UPF Department of Information and Communication Technologies (DTIC), together with Jesús Malo, a researcher at the University of Valencia.

An approach with broad implications

The INRF, apart from being more physiologically plausible and embodying the efficient-representation principle, has a key property with wide-ranging implications: for several vision science phenomena where a linear RF must vary with the input in order to predict responses, the INRF can remain constant under different stimuli.

Bertalmío adds: "We have also proved that artificial neural networks with INRF modules instead of linear filters have a remarkably improved performance and better emulate basic human perception". This research highlights the intrinsically nonlinear nature of receptive fields in vision and suggests a paradigm shift for both vision science and for artificial intelligence.
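For readers who prefer code, the contrast between the two RF models can be sketched as follows. This toy Python snippet is schematic only: it is not the exact INRF formulation from the paper, and the weights, the tanh nonlinearity and the random stimulus are placeholders chosen for illustration.

    import numpy as np

    def linear_rf_response(stimulus, weights):
        # Classical linear RF: the response is a fixed weighted sum of the input.
        return np.dot(weights, stimulus)

    def inrf_like_response(stimulus, m, w, g, lam=1.0):
        # Schematic INRF-style response: a nonlinearity acts *inside* the
        # summation, so the effective filter need not change with the input.
        sigma = np.tanh                      # placeholder nonlinearity
        context = np.dot(g, stimulus)        # weighted local context signal
        return np.dot(m, stimulus) - lam * np.sum(w * sigma(stimulus - context))

    rng = np.random.default_rng(0)
    s = rng.normal(size=16)                  # toy 1-D stimulus
    m, w, g = (rng.normal(size=16) for _ in range(3))
    print(linear_rf_response(s, m), inrf_like_response(s, m, w, g))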

Credit: 
Universitat Pompeu Fabra - Barcelona

Cascading events led to 2018 Kīlauea volcanic eruption, providing clues for forecasting

video: Video of the Fissure 8 lava fountain during the 2018 Kīlauea eruption on Hawai'i Island, Hawai'i.

Image: 
Bruce Houghton, University of Hawaii.

The 2018 eruption of Kīlauea Volcano was one of the largest volcanic events in Hawai'i in 200 years. This eruption was triggered by a relatively small and rapid change at the volcano after a decade-long build-up of pressure in the upper parts of the volcano, according to a recent study published in Nature Communications by earth scientists from the University of Hawai'i (UH) at Mānoa and U.S. Geological Survey (USGS).

Using USGS Hawaiian Volcanoes Observatory (HVO) data from before and during the 2018 eruptions at the summit and flank, the research team reconstructed the geologic events.

"The data suggest that a backup in the magma plumbing system at the long-lived Pu'u 'Ō'ō eruption site caused widespread pressurization in the volcano, driving magma into the lower flank," said Matthew Patrick, research geologist at the USGS HVO and lead author of the study.

The eruption evolved, and its impact expanded, as a sequence of cascading events allowed relatively minor changes at Pu'u 'Ō'ō to cause major destruction and historic changes across the volcano.

A cascading series of events of this type was not considered the most likely outcome in the weeks prior to the onset of the eruption.

"This form of tunnel vision, which gives less attention to the least likely outcomes, is a bias that can be overcome by considering the broader, longer history of the volcano," said Bruce Houghton, the Hawai'i State Volcanologist, earth sciences professor at the UH Mānoa School of Ocean and Earth Science and Technology and study co-author. "For Kīlauea, this consists of widening the scope to consider the types of behavior seen in the first half of the 20th century and perhaps earlier."

"Our study demonstrates that eruption forecasting can be inherently challenging in scenarios where volcanoes prime slowly and trigger due to a small event, as the processes that build to eruption may be hard to detect and are easy to overlook on the scale of the entire volcano," said Patrick. "It is also a cautionary tale against over-reliance on recent activity as a guide for future eruptions."

The State of Hawai'i absorbed a significant amount of the economic and social cost of the 2018 eruption and likely will do so again as Kīlauea and Mauna Loa continue to erupt, suggested Houghton. Studies like this, which probe the more subtle influences of the behavior of these volcanoes, are targeted at reducing the costs, human and physical, of the next eruptions.

With future work the research team aims to adopt diverse approaches to understanding the subsurface structure and movement of magma on Kīlauea's East Rift Zone.

Credit: 
University of Hawaii at Manoa

Deep learning in the emergency department

Using a deep-learning model designed for high-dimensional data, KAUST researchers have shown that it is possible to predict emergency department overcrowding from complex hospital records. This application of the "Variational AutoEncoder" deep-learning model is an example of how machine learning can be used to interpret and extract meaning from difficult data sets that are too voluminous or complex for humans to decipher.

Machine learning is an important aspect of artificial intelligence (AI) that involves training an AI model on example data. An AI model could learn, for example, to recognize images of the number three by training it on a data set containing thousands of versions of handwritten numerals. A simple neural network model--comprising interconnected "neurons" that take an input, apply a rule and produce an output--becomes increasingly accurate as it is exposed to more training data and the rules on each neuron are refined.

By adding hidden intermediate layers of neurons into these networks, however, the model can be prompted to self-learn the relationships in the input data without the rules being specified in advance. Such models, known as deep-learning models, are extremely powerful because they allow us, for the first time, to interpret data that has previously been too large, heterogeneous or multiparametric to meaningfully analyze any other way.
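As a purely illustrative example, a small network of this kind can be written in a few lines of PyTorch. The digit-recognition setup and layer sizes below are assumptions made for the sketch, not details from the KAUST study.

    import torch
    import torch.nn as nn

    # A tiny feed-forward network: input layer -> hidden layer -> output layer.
    model = nn.Sequential(
        nn.Linear(28 * 28, 128),  # takes a flattened 28x28 image as input
        nn.ReLU(),                # hidden-layer nonlinearity
        nn.Linear(128, 10),       # outputs a score for each digit class 0-9
    )

    x = torch.randn(1, 28 * 28)   # stand-in for one handwritten-digit image
    scores = model(x)             # training on labeled examples refines the weights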

"Deep learning has emerged as a promising line of research in modeling and forecasting, in both academia and industry," says Fouzi Harrou, a research scientist at KAUST. "These models can automatically extract information from voluminous datasets with limited human instruction, such as implicit relationships between variables, complicated pattern recognition and descriptions of dependencies in time series data."

Harrou, with statistician Ying Sun from KAUST and collaborators from France and Algeria, applied a particularly promising deep-learning-based model called a Variational AutoEncoder (VAE) to the problem of predicting patient admissions and flow through an emergency department in a pediatric hospital.

"A particularly attractive feature of VAEs is their ability to compress high-dimensional, or many-parameter, data into a lower-dimensional representation, which enables flexible generation of quantitative comparisons," says Harrou. The results demonstrated that the VAE approach performs better than other models, providing a range of insights, such as peak patient admission days and causative relationships.

"Accurate forecasting of patient arrivals is essential for emergency department managers to reduce patient waiting time and length of stay," says Harrou. "Our results clearly show the promising performance of deep-learning models for such applications, and we are now working to extend the approach to COVID-19 case forecasting."

Credit: 
King Abdullah University of Science & Technology (KAUST)

Remote control of heat nanosources motion and thermal-induced fluid flows by using light forces

image: a, Multiple gold NPs (spheres of 200 nm radius) are confined by a ring-shaped laser trap (wavelength of 532 nm) and optically transported around it. These NPs rapidly assemble into a stable group of hot particles creating a confined heat source (G-NP) of temperature ~500 K. Free (not trapped) gold NPs acting as tracer particles are dragged toward the G-NP by the action of the thermal-induced water flow created around it (see Video S5 of the paper). The speed of the G-NP is controlled by the optical propulsion force, which is proportional to the phase gradient strength tailored along the laser trap as displayed in b, corresponding to transport state 1. This non-uniform propulsion force drives the G-NP to a maximum speed of 42 μm/s. b, Sketch of the switching of the phase gradient configuration (states 1 and 2) enabling a more sophisticated manipulation of the heat source: splitting and merging of the G-NP. c, The opposite averaged propulsion forces in the split region (see state 3 at ~0 deg, shown in b) separate the NPs belonging to the original G-NP, thus creating G-NP1 and G-NP2, as observed in the displayed sequence (see Video S6 of the paper). These two new heat sources are propelled in opposite directions by the time-averaged propulsion force corresponding to state 3, toward the region where they finally merge into a joint G-NP again. Complex transport trajectories for G-NP delivery, for example in the form of a knot circuit (see Video S7 of the paper), can be created, enabling spatial distribution of moving heat sources across a target network.

Image: 
by José A. Rodrigo, Mercedes Angulo and Tatiana Alieva

Today, optofluidics is one of the most representative applications of photonics for biological/chemical analysis. The ability of plasmonic structures (e.g., colloidal gold and silver nanoparticles, NPs) to release heat under illumination and induce fluid convection at the micro-scale has attracted great interest over the past two decades. Their size- and shape-dependent, wavelength-tunable optical and thermal properties have paved the way for relevant applications such as photothermal therapy/imaging, material processing, biosensing and thermal optofluidics, to name a few. In-situ formation and motion control of plasmon-enhanced heat sources could pave the way for further harnessing their functionalities, especially in optofluidics. However, this is a challenging multidisciplinary problem combining optics, thermodynamics and hydrodynamics.

In a recent paper published in Light: Science & Applications, Professor José A. Rodrigo and co-workers from the Department of Optics, Faculty of Physics, Complutense University of Madrid, Spain, have developed a technique for jointly controlling the formation and motion of heat sources (groups of gold NPs) as well as the thermal-induced fluid flows created around them. The scientists summarize the operational principle of their technique:

"The technique applies a structured laser-beam trap to exert an optical propulsion force over the plasmonic NPs for their motion control, while the same laser simultaneously heats up them. Since both the shape of the laser trap and the optical propulsion forces are easily and independently tailored, the hot NPs can be optically transported along reconfigurable routes with controlled speed according to the standing application."

They underline the main achievement:

"Based on this remote light-driven manipulation mechanism, we report the first evidence of thermal-induced fluid flow originated by a moving heat source with controlled speed along the target trajectory. This contactless manipulation of a fluid at the microscale provides a versatile optofluidic actuation enabling new functionalities, for example, to deliver nano-objects and analytes selectively to target locations as chemistry and biology research demand. Moreover, we experimentally demonstrate that the spatial and temporal control of the optical propulsion force allows changing the fluid streams as well as in-situ dividing/merging the dynamic group of NPs comprising the heat source. The reported results have fundamental and practical significance in the field of optical manipulation of nano-structures and thermal optofluidics. This is a nice example of the synergy between optical manipulation, thermoplasmonics and hydrodynamics."

The physicists envision:
"The achieved combination of optical-induced heating of plasmonic NPs and their simultaneous programmable optical transport breaks ground for light micro-robotics and, in particular, for the creation of future thermal optofluidic tools."

Credit: 
Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS

Early signs of Alzheimer's disease in people with Down's syndrome

Researchers at Karolinska Institutet in Sweden have studied the incidence and regional distribution of Alzheimer's disease biomarkers in the brains of people with Down's syndrome. The results can bring new possibilities for earlier diagnosis and preventive treatment of dementia. The study is published in Molecular Neurodegeneration.

While medical advances and improvements in quality of life have increased the life expectancy of people with Down's syndrome to an average of 60 years, up to 90 percent develop Alzheimer's disease if they live long enough.

In Alzheimer's disease, clumps of amyloid form plaques around the neurons of the brain, while another protein, tau, accumulates inside the nerve cells in what are referred to as tangles. Plaques and tangles appear first in one region of the brain and then spread, causing gradually deteriorating functional impairment.

People with Down's syndrome have an extra copy of chromosome 21. One reason for the high incidence of Alzheimer's disease in people with Down's syndrome is that the gene coding for the production of amyloid is located on chromosome 21, which can cause amyloid to accumulate in the brain as early as childhood.

"Previous studies of Down's syndrome by our group and others have been able to demonstrate that pathological forms of amyloid and tau can appear years before someone shows signs of dementia," says Lotta Granholm, professor at Karolinska Institutet and the University of Denver, and one of the paper's principal authors.

In the current study, the researchers studied the extent and distribution of tau and amyloid in the brain tissue of people with Down's syndrome with or without an Alzheimer's diagnosis, and of people who had died with Alzheimer's disease but without any other functional disability.

Their analyses showed that the incidence of tau in the brain tissue of people with Down's syndrome and Alzheimer's disease was higher than in people with Alzheimer's but without Down's syndrome, suggesting that tau is an early change in Down's syndrome.

"Apart from a high level of tau, we also measured a different regional distribution of tau in the brains of people with Down's syndrome and Alzheimer's disease compared to the control group," says the paper's first author Laetitia Lemoine, assistant professor at the Department of Neurobiology, Care Sciences and Society, Karolinska Institutet.

Traces of tau were also detected in the brain tissue of fetuses with Down's syndrome. Early prophylactic measures against tau accumulation could prevent the development of Alzheimer pathology in childhood for this patient group, the researchers believe.

"Our studies support the need for continued research on the progress and incidence of amyloid plaque and tau in the brain by imaging the brains of live individuals with Down's syndrome," says Agneta Nordberg, professor at the Department of Neurobiology, Care Sciences and Society, and the paper's second principal author. "Our aim is a better understanding of how we can take early steps to discover pathological changes that produce cognitive symptoms and begin medication that helps to improve life quality."

Credit: 
Karolinska Institutet

Researchers develop more efficient method to recover heavy oil

image: Novel chemical flooding method

Image: 
Yuichiro Nagatsu/ TUAT

The current global supply of crude oil is expected to meet demand through 2050, but there may be a few more drops to squeeze out. By making use of a previously undesired side effect in oil recovery, researchers have developed a method that yields up to 20% more heavy oil than traditional methods.

The Tokyo University of Agriculture and Technology (TUAT) team published their results on August 24 in Energy & Fuels, a journal of the American Chemical Society.

The researchers focused on heavy oil recovery, which involves extracting highly viscous oil stuck in porous rocks.

"The total estimated volume of recoverable heavy oil is almost the same as the remaining light reserves in the world," said paper author Yuichiro Nagatsu, associate professor in the Department of Chemical Engineering at TUAT. "With the depletion of light oil resources and rising energy demands, the successful recovery of heavy oil is becoming increasingly important."

Generally, less than 10% of heavy oil can be produced from a reservoir by natural flow after drilling the well. To yield more oil, water may be injected into the reservoir to maintain pressure and keep the flow moving. Engineers may also make the water more alkaline by adding sodium hydroxide or sodium carbonate to help the oil flow better. Called chemical flooding, this process often fails to yield much heavy oil, which is too thick for even the enriched water to sweep up. Heavy oil can also be made to flow more easily through heat, but that is not a practical solution for deep or thin reservoirs, according to Nagatsu.

"It is important to develop non-thermal chemical flooding for the recovery of heavy oil," Nagatsu said. "The key problem of chemical flooding in heavy oil reservoirs is an inefficient sweep as a result of the low mobility of the oil."

Because the viscosity of water and heavy oil differ so greatly, Nagatsu said, contemporary sweep methods are inefficient for heavy oil recovery.

To correct this inefficiency in chemical recovery, Nagatsu and his team injected calcium hydroxide into their model reservoir. The compound helps oil flow more smoothly but has been considered undesirable because it reacts with the oil to produce a metallic soap at the water-oil interface.

"The metallic soap behaves as a viscoelastic material in the pore and blocks the preferentially swept region and improves the sweep efficiency," Nagatsu said.

The metallic soap stops the sweeping flow from moving into already swept areas, forcing the forward recovery of heavy oils locked deep in rocks and crevices. In experimental testing, the method yielded 55% of oil in the reservoir, compared to a 35% recovery using alkaline water and a 33% recovery using regular water. Nagatsu noted that this method is inexpensive, can be used in a range of temperatures and it does not require energy-consuming techniques to perform.

The researchers plan to study the effectiveness of this method across crude oil viscosity ranges and in different ranges of oil reservoir permeability in laboratory experiments before moving to field tests. They would also like to engage with industry to further develop the technology for practical application, Nagatsu said.

Credit: 
Tokyo University of Agriculture and Technology

From lab to industry? Ideally ordered porous titania films, made at scale

image: (upper) Illustration of new high-throughput process for making ordered through-hole membranes out of titania. (lower left) Scanning electron micrograph of titania through-hole membrane. (lower right) Cross-sectional scanning electron micrograph of through-hole membrane.

Image: 
Tokyo Metropolitan University

Tokyo, Japan - Researchers from Tokyo Metropolitan University have realized high-throughput production of thin, ordered through-hole membranes of titanium dioxide. Titania layers were grown using anodization on mask-etched titanium before being crystallized. Applying a second anodization, they converted part of the layer back to an amorphous state. The amorphous portion was then selectively dissolved to free the film while leaving the template intact. This paves the way for industrial production of ordered titania membranes for photonics.

Titania, or titanium dioxide, might be the most useful substance you've never heard of. It is widely used as a pigment, and is the active ingredient in most sunscreens, with strong UV absorbing properties. It is found as a reflective layer in mirrors, as well as coatings for self-cleaning, anti-fogging surfaces. Importantly for industry, it can accelerate all sorts of chemical reactions in the presence of light; it is already found in building materials to speed up the breakdown of harmful pollutants in the air, with work under way to apply it to air filters, water purifiers and solar cells.

It's the strong interaction between titania and light that makes it the future material for a wide range of applications involving photonics, particularly photonic crystals, ordered arrays of material which can absorb or transmit light depending on their wavelength. To make these "crystals," researchers have come up with ways of creating porous titania films in the lab, where tiny holes, tens of nanometers across, are patterned onto thin titanium dioxide layers in ordered arrays. Despite their promise, however, it is still not possible to produce them at scale, a major stumbling block for getting them out of the lab and into the latest photonic tech.

Now, a team led by Associate Professor Takashi Yanagishita and Prof. Hideki Masuda of Tokyo Metropolitan University have taken an important step towards developing an industrial production process. Previously, they came up with a method of "stamping" patterns on titanium metal before growing a layer of titanium dioxide using a method called anodization. The layers had holes which formed the same pattern as the ones made artificially on the metal. But because titanium is so hard, the stamps didn't last very long. Now, they've come up with a method that avoids stamps altogether. After they grow a layer of titania with ordered arrays of holes on an etched titanium template, they apply heat, changing the amorphous, disordered structure of the titania into a crystalline form. They then go through a second anodization; a layer close to the original template surface returns to a disordered state. Because disordered and crystalline titania dissolve differently, they are then able to selectively dissolve away the layer still in contact with the template using acid, leaving a free layer of titania with the same through hole pattern.

Of the many advantages of their method, a key benefit is that the template pattern on the metal is left intact. After the film is removed, the same template can be reused over and over again. The team also experimented with different spacings, going down to holes spaced by a mere 100 nm. Importantly, the protocol is scalable and high-throughput, meaning that it might not be long before industrial quantities make their way into commercial products. The team hopes their method will not only bring widespread application a step closer, but also be applied to a wide range of other nanostructured materials with different functions.

Credit: 
Tokyo Metropolitan University

Improving quantum dot interactions, one layer at a time

image: Low quantum dot concentrations during superlattice fabrication suppress quantum resonance between dots in the same layer, while high concentrations activate it

Image: 
DaeGwi Kim, Osaka City University

Osaka City University scientists and colleagues in Japan have found a way to control an interaction between quantum dots that could greatly improve charge transport, leading to more efficient solar cells. Their findings were published in the journal Nature Communications.

Nanomaterials engineer DaeGwi Kim led a team of scientists at Osaka City University, RIKEN Center for Emergent Matter Science and Kyoto University to investigate ways to control a property called quantum resonance in layered structures of quantum dots called superlattices.

"Our simple method for fine-tuning quantum resonance is an important contribution to both optical materials and nanoscale material processing," says Kim.

Quantum dots are nanometer-sized semiconductor particles with interesting optical and electronic properties. When light is shone on them, for example, they emit strong light at room temperature, a property called photoluminescence. When quantum dots are close enough to each other, their electronic states are coupled, a phenomenon called quantum resonance. This greatly improves their ability to transport electrons between them. Scientists have been wanting to manufacture devices using this interaction, including solar cells, display technologies, and thermoelectric devices.

However, they have so far found it difficult to control the distances between quantum dots in 1D, 2D and 3D structures. Current fabrication processes use long ligands to hold quantum dots together, which hinders their interactions.

Kim and his colleagues found they could detect and control quantum resonance by using cadmium telluride quantum dots connected with short N-acetyl-L-cysteine ligands. They controlled the distance between quantum dot layers by placing a spacer layer between them made of oppositely charged polyelectrolytes. Quantum resonance is detected between stacked dots when the spacer layer is thinner than two nanometers. The scientists also controlled the distance between quantum dots in a single layer, and thus quantum resonance, by changing the concentration of quantum dots used in the layering process.

The team next plans to study the optical properties, especially photoluminescence, of quantum dot superlattices made using their layer-by-layer approach. "This is extremely important for realizing new optical electronic devices made with quantum dot superlattices," says Kim.

Kim adds that their fabrication method can be used with other types of water-soluble quantum dots and nanoparticles. "Combining different types of semiconductor quantum dots, or combining semiconductor quantum dots with other nanoparticles, will expand the possibilities of new material design," says Kim.

Credit: 
Osaka City University

Emergency imaging trends in pediatric vs. adult patients for abdominal pain

image: Chart shows trends among pediatric patients in use of imaging in emergency department visits for abdominal pain.

Read More: https://www.ajronline.org/doi/full/10.2214/AJR.19.22667

Image: 
American Roentgen Ray Society (ARRS), American Journal of Roentgenology (AJR)

Leesburg, VA, November 20, 2020--According to an article in ARRS' American Journal of Roentgenology (AJR), although pediatric CT use has decreased for the evaluation of abdominal pain (perhaps due to implementing an ultrasound-first strategy for suspected appendicitis), CT use has continued to increase among adults with abdominal pain in U.S. emergency department (ED) visits.

"Although trends in CT use have previously been reported for children and adults, our study is the first, to our knowledge, to contrast the two cohorts in the ED setting in a nationally representative sample," wrote first author Ralph C. Wang of the University of California, San Francisco.

The authors analyzed data from the National Hospital Ambulatory Medical Care Survey (1997-2016), measuring CT and ultrasound use over time in visits for abdominal pain and in visits in which appendicitis was diagnosed. Predictors of CT use were identified by means of regression analysis.

For children, CT use increased from 1.2% in 1997, peaked at 16.6% in 2010, and had decreased slightly by 2016. In adults, CT use increased steadily from 3.9% in 1997 to 37.8% in 2016.

CT use increased for both pediatric and adult ED visits with a diagnosis of appendicitis--from 5.2% to 71.0% for children and from 7.2% to 83.3% for adults. Children with abdominal pain or a diagnosis of appendicitis evaluated in a pediatric ED had lower odds (odds ratio, 0.6 for pain; 0.2 for appendicitis) of undergoing CT than those evaluated in general EDs.
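As a rough illustration of how such odds ratios are estimated, the sketch below fits a logistic regression with the Python statsmodels package. The toy data are invented for the example and bear no relation to the survey analyzed in the AJR article.

    import numpy as np
    import statsmodels.api as sm

    # 1 = visit to a pediatric ED, 0 = general ED; outcome: whether CT was performed.
    pediatric = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1])
    ct_done   = np.array([0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0])

    X = sm.add_constant(pediatric)       # intercept + pediatric-ED indicator
    fit = sm.Logit(ct_done, X).fit(disp=0)
    odds_ratio = np.exp(fit.params[1])   # < 1 means lower odds of CT in pediatric EDs
    print(f"Odds ratio for pediatric ED: {odds_ratio:.2f}")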

"We believe that the decrease in use of CT to evaluate abdominal pain in children is related to successful nationwide research and implementation efforts to decrease radiation exposure of children," the authors of this AJR article concluded.

Credit: 
American Roentgen Ray Society