
Extracting order from a quantum measurement finally shown experimentally

image: A thin silicon nitride membrane (white) is stretched tight across a silicon frame (red). The membrane contains a pattern of holes, with one small island in the center, whose vibrations are measured in the experiment.

Image: 
Ola Joensen

In physics, it is essential to be able to test a theoretical assumption in actual, physical experiments. For more than a hundred years, physicists have been aware of the link between the concept of disorder in a system and the information obtained by measurement. However, a clean experimental test of this link in monitored systems, that is, systems which are continuously measured over time, had been missing until now.

But now, using a "quantum drum", a vibrating mechanical membrane, researchers have realized an experimental setup that shows the physical interplay between the disorder and the outcomes of a measurement. A collaboration between experimentalists from the Niels Bohr Institute, University of Copenhagen, and theorists at Queen's University Belfast and the University of São Paulo showed how to extract order from this largely disordered system, providing a general tool for engineering the state of the system, essential for future quantum technologies like quantum computers. The result is now published as an Editors' Suggestion in Physical Review Letters.

A measurement always introduces a level of disturbance in the system it measures. In the ordinary, physical world, this is usually not relevant, because it is perfectly possible for us to measure, say, the length of a table without noticing that disturbance. But on the quantum scale, the consequences of the disturbance caused by measurement are huge. These large disturbances increase the entropy, or disorder, of the underlying system, and seemingly preclude extracting any order from the measurement. But before explaining how the recent experiment achieved exactly that, the concepts of entropy and thermodynamics need a few words.

Breaking an egg is thermodynamics

The laws of thermodynamics cover extremely complicated processes. The classic example is that if an egg falls off a table, it breaks on the floor. In the collision, heat is produced - among many other physical processes - and if you imagine you could control all of these complicated processes, there is nothing in the physical laws that says you can't reverse them. In other words, the egg could actually reassemble itself and fly back up to the table surface, if we could control the behavior of every single atom and reverse the process. It is theoretically possible. You can also think of an egg as an ordered system: if it breaks, it becomes extremely disordered. Physicists say that the entropy, the amount of disorder, has increased. The laws of thermodynamics tell us that disorder will in fact always increase, never the other way round. So eggs do not generally jump off floors, assemble themselves, and land on tables in the real world.

Correct quantum system readouts are essential - and notoriously difficult to obtain

If we turn to quantum mechanics, the world looks rather different, and yet the same. If we continuously measure the displacement of a moving mechanical system like the "membrane-drum" (illustration 1) with a precision limited only by the quantum laws, the measurement disturbs the motion profoundly. So you end up measuring a displacement that is disturbed by the measurement process itself, and the readout of the original displacement is spoiled - unless you can measure the introduced disorder as well. In that case, you can use the information about the disorder to reduce the entropy produced by the measurement and generate order from it - comparable to controlling the disorder in the shattered-egg system. But this time we have the information on the displacement as well, so we have learnt something about the entire system along the way, and, crucially, we have access to the original vibration of the membrane, i.e. the correct readout. Alessio Belenchia, the study's senior author, and his colleagues from Belfast and São Paulo have established a powerful formal framework for this kind of analysis.
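The idea of using the measurement record to undo measurement-induced disorder can be pictured with a toy filtering sketch. This is an illustration only, not the experiment's actual analysis: a drifting displacement is tracked with a Kalman-style update, and conditioning on the noisy readout keeps the variance of the estimate (and hence the Gaussian entropy) far below what it would be if the record were ignored. All noise strengths here are invented.

```python
import math
import random

random.seed(1)

# Toy model: a membrane displacement x performs a random walk (process
# noise q), and we continuously read out y = x + measurement noise
# (variance r). Conditioning on the record via a Kalman update shrinks
# the variance of the estimate -- "order extracted from measurement".
q, r = 0.1, 0.5
x, est, var = 0.0, 0.0, 1.0   # true state, estimate, conditional variance
uncond_var = 1.0              # variance if we ignored the record entirely

for _ in range(200):
    x += random.gauss(0.0, math.sqrt(q))       # membrane drifts
    uncond_var += q                            # grows without measurement
    y = x + random.gauss(0.0, math.sqrt(r))    # noisy continuous readout
    var += q                                   # predict step
    k = var / (var + r)                        # Kalman gain
    est += k * (y - est)                       # update with the record
    var *= (1.0 - k)                           # conditional variance shrinks

# Entropy of a Gaussian is 0.5*log(2*pi*e*variance):
# lower variance means lower entropy of our knowledge of the state.
entropy = lambda v: 0.5 * math.log(2 * math.pi * math.e * v)
print(var < uncond_var, entropy(var) < entropy(uncond_var))
```

The conditional variance settles to a small steady value while the unconditional variance grows without bound, which is the filtering analogue of extracting order from a continuously monitored system.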

A generalized framework for understanding entropy in quantum systems

"The connection between thermodynamics and quantum measurements has been known for more than a century. However, an experimental assessment of this link was missing so far, in the context of continuous measurements. That is exactly what we have done with this study. It is absolutely essential that we understand how measurements produce entropy and disorder in quantum systems, and how we use it in order to have control over the readouts we shall have in the future from, say, a quantum system like a quantum computer. If we are not able to control the disturbances, we basically won't be able to understand the readouts - and the quantum computer readouts will be illegible, and useless, of course", says Massimiliano Rossi, PhD student and first author on the scientific article. "This framework is important in order to create a generalized basic foundation for our understanding of entropy producing systems on the quantum scale. That's basically where this study fits into the grander scale of things in physics".

Credit: 
University of Copenhagen

Acorn woodpeckers wage days-long battles over vacant territories, radio tag data show

image: This photograph shows an acorn woodpecker granary.

Image: 
Neil Losin

When acorn woodpeckers inhabiting high-quality territories die, nearby birds begin a battle royal to win the vacant spot. Researchers used radio tags to understand the immense effort woodpecker warriors expend traveling to and fighting in these dangerous battles. They also found spectator woodpeckers go to great lengths to collect social information, coming from kilometers around just to watch these chaotic power struggles. The work appears September 7 in the journal Current Biology.

"When you're approaching a big tree with a power struggle from far away, you'll first hear a lot of acorn woodpeckers calling very distinctly, and see birds flying around like crazy," says first author Sahas Barve (@SahasBarve), currently a postdoctoral fellow at the Smithsonian National Museum of Natural History. "When you get closer, you can see that there are a dozen or more coalitions of three or four birds fighting and posturing on branches. One group has to beat all the others to win a spot in the territory, which is really, really rare in animals--even in fantasy novels it usually boils down to one army against the other."

The chaos of the battles makes studying behavior using direct observation difficult. But Barve and his team had an advantage: they used new radio telemetry technology that allowed them to track the birds' locations down to the minute. With radio tags, which "sit like a rock-climbing harness with a fanny pack on the woodpecker's back," the researchers could learn how much time was spent fighting at the power struggles and where the warriors came from.
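The telemetry bookkeeping described above can be sketched in a few lines. The data and the "at the battle" cutoff below are invented for illustration: given minute-resolution location fixes for a tagged bird, total up the minutes spent within a small radius of the contested granary tree.

```python
import math

TREE = (0.0, 0.0)   # granary tree location (arbitrary local coordinates)
RADIUS = 50.0       # assumed "at the battle" cutoff, in metres

fixes = [  # (minute, x_m, y_m) -- one radio fix per minute, invented data
    (0, 400.0, 10.0),
    (1, 30.0, 20.0),    # within 50 m of the tree
    (2, 10.0, 5.0),     # still at the tree
    (3, 600.0, -80.0),
]

def minutes_at_tree(fixes, tree=TREE, radius=RADIUS):
    """Count minute-resolution fixes that fall within `radius` of the tree."""
    return sum(1 for _, x, y in fixes
               if math.hypot(x - tree[0], y - tree[1]) <= radius)

print(minutes_at_tree(fixes))  # 2 minutes near the granary
```

Summing such per-minute presence over days of fixes is the kind of calculation that lets researchers estimate how long each warrior spends fighting and how far spectators travel.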

Power struggles for co-breeding positions in oak trees with "granaries"--large acorn storage structures built by the birds consisting of acorns stuffed into thousands of individual holes in the bark--involve fighting coalitions formed by groups of non-breeding brothers or sisters from neighboring territories. The radio tag data showed that some birds return day after day and fight for ten hours at a time. "We didn't think it could be that long because they have to be away from their home territory," says Barve. "When do they eat? We still don't know."

The researchers hypothesized that woodpeckers would fight the hardest for territories closest to their current home, but found that deciding to fight may depend on more complex social cues as they recruit members to join their coalition. "These birds often wait for years, and when there's the right time and they have the right coalition size, they'll go and give it their all to win a really good territory," he says.

The woodpeckers' complex social behavior also extends to the other group that comes to power struggles: the spectators. "We never really paid attention to them because we were always fixated on the birds that were actually fighting," Barve says. "We often forget that there are birds sitting on trees watching nearby." His team found that the biggest battles can attract more than 30 birds, or a third of all woodpeckers in the area, with some traveling more than three kilometers to "come with popcorn and watch the fight for the biggest mansion in the neighborhood."

The radio tag data also showed that the spectators spend up to an hour a day watching the fights, despite many already having breeding position granaries of their own. For them, the benefits of social information must outweigh the costs of leaving their home territory unattended for considerable amounts of time. Acorn woodpeckers have tight social networks and know everyone's place due to frequent travels to other territories. "If anything is disruptive to that, or if anything weird happens, they want to go check it out," he says. "The spectators are probably as interested in the outcome as the fighter is, although the warriors benefit more directly."

There's still a lot researchers don't know about the acorn woodpeckers' complex social structures, but radio telemetry provides a glimpse into their unique social behaviors. "They potentially have friendships, and they probably have enemies," Barve says. "With our radio tag data, we can tell when two birds are at the same place at the same time. The next step is to try and understand how their social networks are shaped, and how they vary across the year."

Credit: 
Cell Press

New surgical tools with smart sensors can advance cardiac surgery and therapy

image: Seen here, a conformal array of electrodes affixed to an inflated balloon catheter

Image: 
John Rogers/Northwestern University

SUMMARY

Researchers developed a new class of medical instruments equipped with an advanced soft electronics system that could dramatically improve the diagnoses and treatments of a number of cardiac diseases and conditions.

Detailed in a new paper published in the journal Nature Biomedical Engineering, the researchers, led by engineers at the George Washington University and Northwestern University, applied stretchable and flexible matrices of electrode sensors and actuators, along with temperature and pressure sensors, to a balloon catheter system, often used in minimally invasive surgeries or ablations to treat conditions such as heart arrhythmias.

The new system, which conforms better to the body's soft tissue than current devices, can perform a variety of functions, including: simultaneous in vivo measurements of temperature, force of contact and electrophysiological parameters; the ability to customize diagnostic and therapeutic functions; and real-time feedback. The new system can also dramatically reduce the length of invasive ablation procedures and exposure of patients and doctors to X-ray radiation.

THE SITUATION

Many minimally invasive surgeries rely on catheters inserted into the body through small incisions to conduct diagnostic measurements and therapeutic interventions. Physicians, for example, use this catheter-based approach to map and treat irregular heartbeats, or arrhythmias, often by locating and then destroying, or ablating, the area of cardiac tissue that is causing the arrhythmia.

Though widely used in surgery, the current catheter-based approach has a number of drawbacks. The rigidity of today's catheter devices means they do not conform well to soft biological tissues, impairing high-fidelity mapping of an organ's electrophysiological signals. Current devices make contact with only a small part of an organ at a time, making it necessary to constantly move a probe around, lengthening medical procedures. Current catheter systems are also limited in the number of functions they can perform, requiring physicians to use multiple catheters in a single ablation procedure.

Additionally, long procedures--for example, to locate and ablate tissues causing arrhythmias--risk exposing both patient and physician to potentially damaging X-rays, as physicians rely on X-ray images during the course of the surgery to guide their catheters.

THE BENEFIT

The new class of instruments the researchers developed will allow physicians to acquire a rich set of electrophysiological information and to complete surgeries in shorter times with a single instrumented catheter system.

By outfitting a balloon catheter with advanced organ conformal electronic components, sensors and actuators, the researchers overcame the flaws of current systems. Specific advances over previous systems include:

* Instrumented sensors and actuators in multiplexed array formats can probe the complex nature of tissues, specifically in the beating heart. This will allow, for example, for better localization of sources of lethal arrhythmias causing sudden cardiac death.

* The device's multilayered and multifunctional architecture with combined diagnostic and therapeutic functions enhances a number of minimally invasive cardiac procedures, including radio frequency or irreversible electroporation ablation--wherein cardiac or nerve cells are ablated, or "burned," to eliminate sources of arrhythmia--and the delivery of drugs and other biomaterials directly into cells through a process called reversible electroporation.

* Capabilities for real-time feedback control, enabled by simultaneous, multimodal operation of sensors and actuators.
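The real-time feedback idea in the last point can be sketched as a simple closed loop. Everything below is hypothetical: the electrode names, readings, and the temperature threshold are invented for illustration and are not clinical values or the device's actual control logic.

```python
# Hypothetical closed-loop step: a multiplexed array reports temperature
# and contact force per electrode, and the controller switches off RF
# power on any electrode whose tissue temperature exceeds a safety limit.
TEMP_LIMIT_C = 60.0   # assumed safety threshold, invented for this sketch

readings = [  # (electrode_id, temperature_C, contact_force_mN)
    (0, 48.2, 12.0),
    (1, 63.5, 15.5),  # too hot: power should be cut
    (2, 55.1, 3.0),
]

def control_step(readings, limit=TEMP_LIMIT_C):
    """Return per-electrode power commands: 'on' within limits, else 'off'."""
    return {eid: ("off" if temp > limit else "on")
            for eid, temp, _ in readings}

print(control_step(readings))  # electrode 1 switched off
```

Running such a step continuously against the sensor array is what "simultaneous, multimodal operation of sensors and actuators" makes possible in principle.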

FROM THE RESEARCHERS

"We have taken new breakthrough materials and fabrication techniques typically employed by the semiconductor industry and applied them to the medical field, in this case cardiology, to advance a new class of medical instruments that will improve cardiac outcomes for patients and allow physicians to deliver better, safer and more patient-specific care."
- Igor Efimov, the Alisann and Terry Collins Professor of Biomedical Engineering at the George Washington University

"Hard, rigid catheters cannot conform to the heart because the heart itself is not hard and rigid. We leveraged our advances in soft, stretchable and flexible electronics to develop medical devices that include elastic, interconnected arrays of sensors and actuators, capable of gently and softly conforming to tissue surfaces. The result improves the accuracy and precision of associated surgical processes, for faster, less risky and more effective treatments."
- John A. Rogers, the Louis A. Simpson and Kimberly Querrey Professor of Materials Science and Engineering, Biomedical Engineering and Neurological Surgery at Northwestern University

Credit: 
George Washington University

Genome sequencing accelerates cancer detection

image: Recent cancer studies have shown that genomic mutations leading to cancer can occur years, or even decades, before a patient is diagnosed.

Researchers have developed a statistical model that analyses genomic data to predict whether a patient has a high or low risk of developing oesophageal cancer.

The results could enable early detection and improve treatment of oesophageal cancer in future.

Image: 
Spence Phillips / EMBL-EBI

Oesophageal cancer is the eighth most common cancer worldwide. It often develops from a condition called Barrett's oesophagus. Existing monitoring and treatment methods are very intrusive, and many patients have to undergo burdensome procedures to ensure that no cancer is missed.

Researchers have now developed a statistical model that uses genomic data to accurately predict whether a patient with Barrett's oesophagus has a high or low risk of developing cancer.

Genomics and statistics come together

Researchers at the University of Cambridge, EMBL's European Bioinformatics Institute (EMBL-EBI), and collaborators sequenced genomes from biopsies routinely collected from patients with Barrett's oesophagus. These patients are monitored for early signs of oesophageal cancer. The researchers used the data to look for differences between patients who were ultimately diagnosed with cancer and those who were not. The data were used to develop a statistical model measuring each patient's individual risk. The research was published in Nature Medicine.

Other recent cancer studies have shown that genomic mutations leading to cancer may occur many years before a patient is diagnosed with the disease. Being able to identify these mutations could provide a new route to early diagnosis and treatment.

Using genomic data from 88 patients with Barrett's oesophagus, the researchers identified half of the patients who went on to be diagnosed with oesophageal cancer as 'high risk' more than eight years before diagnosis. That figure rose to 70% two years before diagnosis. Equally important, the model also accurately identified patients who were at very low risk of developing cancer.
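Those percentages are sensitivity figures: the fraction of patients who eventually developed cancer that the model had already flagged as high risk at a given time horizon. A minimal sketch of the bookkeeping, with invented patient calls rather than the study's data:

```python
# Hypothetical per-patient model calls at different times before diagnosis,
# restricted to patients who did go on to develop cancer (invented data).
calls = {
    8: ["high", "low", "high", "low"],    # 8 years before diagnosis
    2: ["high", "high", "low", "high"],   # 2 years before diagnosis
}

def sensitivity(labels):
    """Fraction of truly progressing patients flagged as high risk."""
    return labels.count("high") / len(labels)

print({years: sensitivity(v) for years, v in calls.items()})
# sensitivity improves as diagnosis approaches
```

In the study's real data the analogous fractions were 50% at eight years out and 70% at two years out.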

"One of the unique things about this study was the richness of the data provided by colleagues at Addenbrooke's Hospital in Cambridge," explains Moritz Gerstung, Group Leader at EMBL-EBI. "These patients have been in surveillance for over 15 years, so overall we had over 800 samples, taken over time and from different areas of the oesophagus. This allowed us to measure in great detail what type of genomic changes occur and how these trajectories differ between patients with and without cancer. Without such thorough surveillance programmes, this study wouldn't have been possible."

The benefit of early detection

Although people with Barrett's oesophagus are at considerably higher risk of developing oesophageal cancer than the general population, only 1 patient in 300 will be diagnosed with cancer per year. Nevertheless, they all have to go through intrusive monitoring procedures every two years. This surveillance can be uncomfortable, stressful, and time-consuming for the patients, and it places an additional burden on the healthcare system.

"The benefit of our method is twofold," explains Sarah Killcoyne, Visiting Postdoctoral Fellow at EMBL-EBI. "The patients who have high-risk Barrett's, which is likely to become cancerous, can receive treatment earlier. And individuals who have something that looks genetically stable, and unlikely to develop into the disease, do not need to undergo such intense surveillance. The hope is that our method can help improve early detection and treatment, and decrease unnecessary treatment for low-risk patients, without compromising patient safety."

These results mean that patients at greatest risk can be treated immediately, rather than undergoing repeated biopsies until early signs of cancer are found. Conversely, patients with low risk and stable disease can be monitored less frequently. Overall, the authors estimate that monitoring could be reduced for 50% of patients with Barrett's oesophagus.

"This is an exciting example of how a collaboration between computational biologists and clinician scientists can bring new insights into an important clinical problem," says Rebecca Fitzgerald, Professor of Cancer Prevention and MRC Programme Leader at the University of Cambridge. "Oesophageal cancer is devastating when it is diagnosed late, but early intervention can be performed endoscopically and spare patients unnecessary chemotherapy and removal of their oesophagus. Similar approaches could be extended to other cancer types in the future."

According to the authors, the next steps are to refine the method, ideally by analysing data from more patients. It is also important to bring in clinical information and improve the model's accuracy. Eventually, this will lead to clinical trials to show that this model is useful in clinical practice for patients currently in surveillance.

Credit: 
European Molecular Biology Laboratory - European Bioinformatics Institute

A new twist on DNA origami

image: Models and transmission electron microscopy (TEM) images of various 3D polyhedra that were constructed by connecting the self-linked triangular M-DNA and rectangular M-DNA. From left to right: a tetrahedron, triangular bipyramid, octahedron, pentagonal bipyramid, triangular prism, rectangular prism, pentagonal and hexagonal prisms

Image: 
Hao Yan

A team of scientists from ASU and Shanghai Jiao Tong University (SJTU), led by Hao Yan, ASU's Milton Glick Professor in the School of Molecular Sciences and director of the ASU Biodesign Institute's Center for Molecular Design and Biomimetics, has announced the creation of a new type of meta-DNA structure that will open up fields from optoelectronics (including information storage and encryption) to synthetic biology.

This research was published today in Nature Chemistry; indeed, the meta-DNA self-assembly concept may transform the microscopic world of structural DNA nanotechnology.

It is common knowledge that the predictable nature of Watson-Crick base-pairing and the structural features of DNA have allowed DNA to be used as a versatile building block to engineer sophisticated nanoscale structures and devices.

"A milestone in DNA technology was certainly the invention of DNA origami, where a long single-stranded DNA (ssDNA) is folded into designated shapes with the help of hundreds of short DNA staple strands," explained Yan. "However, it has been challenging to assemble larger (micron- to millimeter-sized) DNA architectures, which until recently has limited the use of DNA origami." The new micron-sized structures are on the order of the width of a human hair, roughly 1,000 times larger than the original DNA nanostructures.

Ever since gracing the cover of Science Magazine in 2011 with their elegant DNA origami nanostructures, Yan and collaborators have been working tirelessly, capitalizing on inspiration from nature, seeking to solve complex human problems.

"In this current research we developed a versatile 'meta-DNA' (M-DNA) strategy that allowed various sub-micrometer to micrometer sized DNA structures to self-assemble in a manner similar to how simple short DNA strands self-assemble at the nanoscale level," said Yan.

The group demonstrated that a 6-helix bundle DNA origami nanostructure in the sub-micrometer scale (meta-DNA) could be used as a magnified analogue of single-stranded DNA (ssDNA), and that two meta-DNAs containing complementary "meta-base pairs" could form double helices with programmed handedness and helical pitches.

Using meta-DNA building blocks they have constructed a series of sub-micrometer to micrometer scale DNA architectures, including meta-multi-arm junctions, 3D polyhedrons, and various 2D/3D lattices. They also demonstrated a hierarchical strand-displacement reaction on meta-DNA to transfer the dynamic features of DNA to the meta-DNA.

With the help of assistant professor Petr Sulc (SMS) they used a coarse-grained computational model of the DNA to simulate the double-stranded M-DNA structure and to understand the different yields of left-handed and right-handed structures that were obtained.

Further, by just changing the local flexibility of the individual M-DNA and their interactions, they were able to build a series of sub-micrometer or micron-scale DNA structures from 1D to 3D with a wide variety of geometric shapes, including meta-junctions, meta-double crossover tiles (M-DX), tetrahedrons, octahedrons, prisms, and six types of closely packed lattices.

In the future, more complicated circuits, molecular motors, and nanodevices could be rationally designed using M-DNA and used in applications related to biosensing and molecular computation. This research will make the creation of dynamic micron-scale DNA structures, that are reconfigurable upon stimulation, significantly more feasible.

The authors anticipate that the introduction of this M-DNA strategy will transform DNA nanotechnology from the nanometer to the microscopic scale. This will create a range of complex static and dynamic structures in the sub-micrometer and micron-scale that will enable many new applications.

For example, these structures may be used as a scaffold for patterning complex functional components that are larger and more complex than previously thought possible. This discovery may also lead to more sophisticated and complex behaviors that mimic cell or cellular components with a combination of different M-DNA based hierarchical strand displacement reactions.

Credit: 
Arizona State University

Producing leather-like materials from fungi

An international team led by material chemists Alexander Bismarck and Mitchell Jones from the University of Vienna demonstrates the considerable potential of these renewable, sustainable fabrics derived from fungi in its latest review article in Nature Sustainability.

Traditional leather and its alternatives are typically obtained from animals and synthetic polymers. Leather can be considered a co-product of meat production, with both livestock farming and the leather production process increasingly considered ethically questionable and environmentally unfriendly (e.g. deforestation for grazing, greenhouse gas emissions, use of hazardous substances in the tanning process). The production of synthetic leather materials from plastics such as polyvinyl chloride (PVC) or polyurethane (PU) also depends on chemicals derived from fossil fuels.

"This is where leather-like materials from fungi come into play, which, in general, are CO2 neutral as well as biodegradable at the end of their life span," says Alexander Bismarck from the Faculty of Chemistry at the University of Vienna, who additionally holds a visiting professorship at Imperial College London.

Growth of fungal mycelium

Leather substitutes can be produced from fungi by upcycling low-cost agricultural and forestry by-products (e.g. sawdust). These serve as a feedstock for the growth of fungal mycelium, which constitutes a mass of elongated tubular structures and represents the vegetative growth of filamentous fungi. Within a couple of weeks, the fungal biomass can be harvested and physically and chemically treated (e.g. pressing, cross-linking). "As a result, these sheets of fungal biomass look like leather and exhibit comparable material and tactile properties," says department head Alexander Bismarck. The first biotech companies are already marketing materials derived from fungi.

Leather substitute materials derived from fungi typically contain completely biodegradable chitin (which acts as a stabiliser in the material) and other polysaccharides such as glucans. In their own studies, Alexander Bismarck and Mitchell Jones (now affiliated with Vienna University of Technology) already conducted research using fungal species, such as the white button mushroom A. bisporus and bracket fungus D. confragosa, to produce paper and foam-like construction materials for applications, such as insulation.

Considerable potential as a leather substitute

In this review article, the scientists examine the sustainability of bovine and synthetic leathers and present an overview of the first developments and commercialisation of leather substitutes derived from fungi. According to the authors, one of the greatest challenges in the production of fungi-derived leather-like materials is still to achieve homogeneous and consistent mycelium mats, "exhibiting uniform growth and consistent thickness, colour and mechanical properties".

To date, the production of these materials has been driven mainly by entrepreneurial spirit. Fungi as a raw material for leather substitutes provide a cost-effective, socially and environmentally sound alternative to bovine and synthetic leather and are of particular interest to sustainability-conscious consumers and companies as well as to the vegan community, the researchers write. According to them, "substantial advances in this technology and the growing number of companies that are producing fungi-biomass-based leather alternatives suggests that this new material will play a considerable role in the future of ethically and environmentally responsible fabrics".

Credit: 
University of Vienna

Quality over quantity in recovering language after stroke

image: ECU researchers have found that intensive therapy is not necessarily best for treating the loss of language after stroke.

Image: 
Edith Cowan University

New Edith Cowan University (ECU) research has found that intensive therapy is not necessarily best when it comes to treating the loss of language and communication in early recovery after a stroke.

Published today in the International Journal of Stroke, the research found that unlike physical and motor skill rehabilitation, recovering lost language caused by a condition known as aphasia after stroke is a marathon, not a sprint. It also showed that early intervention is crucial.

Lead author, Associate Professor Erin Godecke from ECU's School of Medical and Health Sciences, said the findings have important implications for the treatment of aphasia because they mean service delivery options are likely to change.

"Previously people with aphasia got the majority of their therapy in the first 6-8 weeks after stroke," Professor Godecke said.

"Our research shows that there is no benefit to this. It is likely that the same therapy could be spread over a longer period to enhance recovery, rather than getting a burst at the start and very little over the next months or years," she said.

Aphasia is a neurological disorder affecting spoken language, comprehension, reading and writing. It affects around one third of the roughly 17 million people worldwide who experience a stroke each year and is treated with speech therapy.

Early care is vital, but not intensity

Professor Godecke said aphasia therapy and early intervention are vitally important for recovery outcomes after stroke. However, increasing the intensity of the treatment doesn't equate to better results.

"We found that when we provided early aphasia therapy people had a massive increase in their ability to communicate at 12 and 26 weeks after their stroke. They could talk better and had less difficulty finding and using the right words.

"Importantly though, we also found that if we provided around 10 hours of therapy per week versus nearly 23 hours a week the results weren't any different. We didn't see any harm, but we didn't see any benefit," Professor Godecke said.

Language recovery is different to motor recovery

Professor Godecke said the way people recover motor skills after a stroke is different to how they regain language.

"We tend to believe that more intensive is always better. However, we're beginning to see data emerge to show us that language recovery might behave a little differently to motor recovery functions such as walking, moving your arm or sitting up," she said.

"We don't need quite as intensive a regimen for language as we do for walking recovery. We might need the same amount of treatment, just spread over a longer period."

Professor Godecke said the difficulty level, or intensity, of the aphasia therapy needs to be tailored to what the person can tolerate.

"Because language is a higher order function and it involves more thinking time and cognitive skill, having breaks between sessions may help consolidate learning," Professor Godecke said.

"It's akin to running on a treadmill - you can only run on the treadmill if you can walk.

"There's no benefit having someone run at full speed when you can have them run at a moderate pace, get the learning they need, retain it for longer and build on it," she said.

VERSE study a world first

The Very Early Rehabilitation for Speech (VERSE) study at ECU is the first international aphasia trial. The study aimed to determine whether intensive aphasia therapy, beginning within 14 days after stroke, improved communication recovery compared to usual care.

Researchers recruited 246 participants with aphasia after stroke from 17 acute-care hospitals across Australia and New Zealand. Participants either received the usual level of aphasia therapy, or one of two higher intensity regimens.

The ECU study found early intensive aphasia therapy did not improve communication recovery within 12 weeks post stroke compared to usual care.

Credit: 
Edith Cowan University

Protected areas can 'double' imperilled species populations

image: The greater one-horned rhinoceros (Rhinoceros unicornis), captured near the Narayani River in the Chitwan Community Forest buffer zone, Nepal.

Image: 
Sharp Photography

A University of Queensland-led research team has revealed that many endangered mammal species are dependent on protected areas, and would likely vanish without them.

Professor James Watson, of UQ and the Wildlife Conservation Society, said despite the success of protected areas, their popularity as a go-to conservation tool has started to wane.

"Since the 1970s, the global network of protected areas has experienced a fourfold expansion, and some of these protected sites have been crucial to protect and even enhance wildlife populations," he said.

"However, there's increasing debate around the role of the global protected area estate in sustaining and recovering threatened species.

"What our research has clearly shown is that protected areas, when well-funded and well-placed, are incredibly effective.

"In fact, 80 per cent of mammal species we monitored in these protected areas have at least doubled their coverage in protected areas over the last 50 years.

"And 10 per cent of the species analysed live predominantly on protected land."

The scientists compared current distributions of 237 threatened terrestrial mammal species from the 1970s to today, measuring changes in species' ranges, then overlaid them with the protected area network.

"A great example is the greater one-horned rhinoceros (Rhinoceros unicornis), which now has 80 per cent of its range in a protected area," Professor Watson said.

"Their numbers have been decimated elsewhere - the species has lost more than 99 per cent of its distribution in the last 50 years.

"Now about 87 per cent of the remaining animals live in just two protected areas - Kaziranga National Park in India and Chitwan National Park in Nepal."

Professor Watson said mammals were retreating into protected areas and more than ever, protected areas were vital to protecting the world's biodiversity.

"There is little doubt that without protected areas we would have lost amazing species like tigers and mountain gorillas," he said.

"This science clearly shows that to abate the extinction crisis, we need better funded and more protected areas that are well-supported and well-managed by governments and other land managers.

"At the same time, we need to reward efforts that ensure re-expansion and restoration of wildlife populations into territories beyond protected area boundaries.

"We must focus on retaining Earth's remaining intact ecosystems that contain key protected areas and prioritise efforts to restore habitat corridors between isolated reserves, providing opportunities for movement and genetic exchange."

Credit: 
University of Queensland

Producing technicolor through brain-like electronic devices

image: Experimental results of micro-sized color printing. The device consists of four layers including top and bottom layers of Ag surrounding IGZO and SiO2 layers. The color pixels were fabricated by an FIB process after the deposition of a 180 nm thick SiO2 layer.

Image: 
Junsuk Rho (POSTECH)

Structural coloration holds promise as the display technology of the future: because it uses no dyes, it does not fade, and it enables low-power displays that need no strong external light source. The drawback of this technique, however, is that once a device is made its optical properties cannot be changed, so the colors it reproduces remain fixed. Recently, a POSTECH research team has successfully obtained vivid, tunable colors by using semiconductor chips - not dyes - made by mimicking the human brain structure.

POSTECH's joint research team, consisting of Professor Junsuk Rho of the mechanical engineering and chemical engineering departments and Inki Kim, a mechanical engineering student in the MS/PhD integrated program, along with Professor Yoonyoung Jung and master's student Juyoung Yun of the Department of Electrical Engineering, developed a technology that can freely change structural colors using IGZO (indium gallium zinc oxide), a type of oxide semiconductor. IGZO is widely used not only in flexible displays but also in neuromorphic electronic devices. This is the first study to incorporate IGZO into nano-optics.

The charge concentration within an IGZO layer can be freely controlled through a hydrogen plasma treatment process, thereby tuning the refractive index across the entire visible range. In addition, nano-optical simulations and experiments confirmed that the material's extinction coefficient in the visible is close to zero, enabling a transmissive color filter that passes exceptionally clear colors with extremely low light loss.

The IGZO-based color filter technology developed by the research team consists of a four-layer Ag-IGZO-SiO2-Ag stack and transmits vivid colors by exploiting Fabry-Perot resonance. Experiments confirmed that as the charge concentration of the IGZO layer increases, its refractive index decreases, which changes the resonance and hence the color of light that is selectively transmitted.
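
The resonance mechanism can be sketched with a back-of-envelope calculation. The snippet below is a minimal, idealized Fabry-Perot model with hypothetical cavity parameters (the thickness and index values are invented for illustration, not taken from the paper): transmission peaks when the round-trip optical path equals a whole number of wavelengths.

```python
# Minimal Fabry-Perot sketch (illustrative numbers, not the device
# parameters from the paper). Peak transmission occurs when the
# round-trip optical path is a whole number of wavelengths:
#   2 * n * d = m * lambda

def fp_peak_wavelength_nm(n: float, d_nm: float, m: int) -> float:
    """Resonant wavelength (nm) of order m for index n and thickness d_nm."""
    return 2.0 * n * d_nm / m

# Hypothetical 150 nm cavity, first-order resonance: as the refractive
# index drops (e.g. with rising charge concentration), the transmitted
# color shifts toward shorter wavelengths.
for n in (2.0, 1.8, 1.6):
    print(n, round(fp_peak_wavelength_nm(n, 150.0, 1), 1))  # 600.0, 540.0, 480.0 nm
```

This captures the qualitative behavior reported above: lowering the refractive index blue-shifts the resonance, retuning the transmitted color.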

This design method can be applied not only to color filters for large-scale displays, but also to color printing techniques at micro (10^-6, millionth) or nano (10^-9, billionth) scales.

To verify this, the research team demonstrated a color printing technology that has a pixel size of one micrometer (μm, one millionth of a meter).

The results proved that the colors from the centimeter or micrometer-sized color pixels can be adjusted freely depending on the charge concentration of the IGZO layer. It was also confirmed that the structural color can be changed more reliably and quickly through changing the refractive index via charge concentration compared to other conventional all solid-state variable materials like WO3 or GdOx.

"This research is the very first application of IGZO to nanoptical structural color display technology. IGZO is the next-generation oxide semiconductor used in flexible displays and neuromorphic electronic devices," stated Professor Rho who led the research. He added, "It is anticipated that this technology, which enables filtering the transmitted light by adjusting the charge concentration, can be applied to the next-generation low-power reflective display and anti-tamper display technologies."

Credit: 
Pohang University of Science & Technology (POSTECH)

New drug shown to improve bone growth in children with achondroplasia

image: Sarah, who has achondroplasia, was the first Australian participant enrolled in the global phase 3 trial of vosoritide

Image: 
MCRI

A phase three global clinical trial led by the Murdoch Children's Research Institute (MCRI) has shown a new drug boosts bone growth in children born with achondroplasia, the most common type of dwarfism.

The randomised, double-blind, placebo-controlled trial results, led by MCRI clinical geneticist Professor Ravi Savarirayan, have been published today in the prestigious medical journal, The Lancet.

Achondroplasia is the most common cause of dwarfism and is caused by overactivity of the FGFR3 protein, which slows bone growth in children's limbs, spine, and the base of their skull.

The experimental drug, vosoritide, blocks the activity of FGFR3, potentially returning growth rates to normal. Previous MCRI-led trials confirmed that vosoritide is safe to give to young people with dwarfism. This new randomised controlled trial conclusively shows it is also effective at increasing bone growth over one year of daily injections.

Professor Savarirayan said, "This drug is like releasing the handbrake on a car, it lets you get up to full speed instead of having to drive with the brakes on."

Achondroplasia is a genetic bone disorder affecting 250,000 people worldwide, or about one in every 25,000 children. It is caused by a mutation in the FGFR3 gene that impairs bone growth and means that children grow around 4 cm per year, instead of the usual 6 to 7 cm.

Current achondroplasia treatments, like surgery, only address the symptoms. In contrast, vosoritide is a precision therapy directly targeted at the molecular cause of the disease.

BioMarin Pharmaceutical, which manufactures the peptide drug and funded the trial, has applied to the US Food and Drug Administration to license vosoritide for use in treating achondroplasia. The European Medicines Agency has validated the company's application. Australian licensing is expected to follow a successful US application.

The trial enrolled 121 children aged five to under 18 and was conducted at 24 hospitals in seven countries. In Melbourne, it was run at the Melbourne Children's Trial Centre. The 60 children who received daily injections of vosoritide grew an average of 1.57 cm per year more than the children who received placebo, bringing them almost in line with their typically developing peers.
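
A rough sanity check, using only the figures quoted in this article, shows why that gain nearly closes the gap:

```python
# Back-of-envelope check using only the figures quoted in this article:
# untreated growth of about 4 cm/year, typical growth of 6-7 cm/year,
# and the trial's extra 1.57 cm/year on vosoritide.
typical_growth = (6.0 + 7.0) / 2   # cm/year, typically developing children
untreated_growth = 4.0             # cm/year, untreated achondroplasia
vosoritide_gain = 1.57             # cm/year over placebo, from the trial

treated_growth = untreated_growth + vosoritide_gain
print(treated_growth)                              # 5.57 cm/year
print(round(typical_growth - treated_growth, 2))   # remaining gap ~0.93 cm/year
```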

Professor Savarirayan said, "We know that beyond the cold hard facts and figures around growth rates and bone biology, we have hope that a treatment can improve kids' health outcomes, social functioning and increase access to their environments. Anecdotally, our patients tell us they now are able to do more stuff like climbing trees, jumping rocks and being more independent generally, which is specific to their experiences."

Dr Johnathon Day, Medical Director of Clinical Science at BioMarin Pharmaceutical Inc. said, "Vosoritide is the first potential precision pharmacological therapy that addresses the underlying cause of achondroplasia and this randomized, double-blind, placebo-controlled Phase 3 study further adds to the scientific knowledge we've gained over many years from the clinical development program. I'd like to personally thank and congratulate all of the investigators and I am especially grateful to all of the children and their families who have participated in these studies."

Paul Cohen and Elizabeth Ryan's daughter, Sarah, was born with achondroplasia. Sarah was one of the very first patients enrolled in the trial. Mr Cohen said, "During the trial we've seen Sarah grow up at the same rate as her friends. She can now join in bike rides with her friends, and loves being allowed on our local waterslide."

Although the treatment did not significantly improve the children's upper-to-lower body proportions, the children will be followed until they reach their final adult height to see how long the drug's effects last and whether they experience a growth spurt during puberty, which doesn't normally happen in children with achondroplasia.

Vosoritide is also being tested in children from birth to five years which may improve final height, body proportion and other age-related complications such as spinal cord compression, which can cause sudden death.

Credit: 
Murdoch Childrens Research Institute

Deep underground forces explain quakes on San Andreas Fault

Rock-melting forces occurring much deeper in the Earth than previously understood appear to drive tremors along a notorious segment of California's San Andreas Fault, according to new USC research that helps explain how quakes happen.

The study from the emergent field of earthquake physics looks at temblor mechanics from the bottom up, rather than from the top down, with a focus on underground rocks, friction and fluids. On the segment of the San Andreas Fault near Parkfield, Calif., underground excitations -- beyond the depths where quakes are typically monitored -- lead to instability that ruptures in a quake.

"Most of California seismicity originates from the first 10 miles of the crust, but some tremors on the San Andreas Fault take place much deeper," said Sylvain Barbot, assistant professor of Earth sciences at the USC Dornsife College of Letters, Arts and Sciences. "Why and how this happens is largely unknown. We show that a deep section of the San Andreas Fault breaks frequently and melts the host rocks, generating these anomalous seismic waves."

The newly published study appears in Science Advances. Barbot, the corresponding author, collaborated with Lifeng Wang of the China Earthquake Administration.

The findings are significant because they help advance the long-term goal of understanding how and where earthquakes are likely to occur, along with the forces that trigger temblors. Better scientific understanding helps inform building codes, public policy and emergency preparedness in quake-ridden areas like California. The findings may also be important in engineering applications where the temperature of rocks is changed rapidly, such as by hydraulic fracturing.

Parkfield was chosen because it is one of the most intensively monitored epicenters in the world. The San Andreas Fault slices past the town, and it's regularly ruptured with significant quakes. Quakes of magnitude 6 have shaken the Parkfield section of the fault at fairly regular intervals in 1857, 1881, 1901, 1922, 1934, 1966 and 2004, according to the U.S. Geological Survey. At greater depths, smaller temblors occur every few months.

So what's happening deep in the Earth to explain the rapid quake recurrence?

Using mathematical models and laboratory experiments with rocks, the scientists conducted simulations based on evidence gathered from the section of the San Andreas Fault extending up to 36 miles north of -- and 16 miles beneath -- Parkfield. They simulated the dynamics of fault activity in the deep Earth spanning 300 years to study a wide range of rupture sizes and behaviors.

The researchers observed that, after a big quake ends, the tectonic plates that meet at the fault boundary settle into a go-along, get-along phase. For a spell, they glide past each other, a slow slip that causes little disturbance to the surface.

But this harmony belies trouble brewing. Gradually, motion across chunks of granite and quartz, the Earth's bedrock, generates heat due to friction. As the heat intensifies, the blocks of rock begin to change. When friction pushes temperatures above 650 degrees Fahrenheit, the rock blocks grow less solid and more fluid-like. They start to slide more, generating more friction, more heat and more fluids until they slip past each other rapidly -- triggering an earthquake.
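
The feedback loop described above can be caricatured in a few lines of code. The toy simulation below is illustrative only: the coefficients are invented, and this is not the authors' 300-year fault model. It simply encodes the chain "sliding makes heat, heat weakens the rock, weaker rock slides faster":

```python
# Toy model of the thermal-weakening feedback (invented coefficients,
# not the authors' simulation). Friction strength falls as temperature
# rises, so slip speeds up, generating still more frictional heat.

def simulate(steps=200, dt=1.0):
    temp = 300.0      # fault-zone temperature (arbitrary units)
    slip = 0.1        # sliding velocity (arbitrary units)
    history = []
    for _ in range(steps):
        # friction weakens with temperature (floored so it stays positive)
        friction = max(0.1, 1.0 - 0.02 * (temp - 300.0))
        temp += dt * slip * friction               # frictional heating ~ v * tau
        slip *= 1.0 + dt * 0.2 * (1.0 - friction)  # weaker fault slides faster
        history.append((temp, slip))
    return history

hist = simulate()
print("start:", hist[0], "end:", hist[-1])  # temperature and slip grow together
```

Despite its crudeness, the model reproduces the qualitative runaway Barbot describes: a slow creep that accelerates once heating outpaces the fault's ability to shed it.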

"Just like rubbing our hands together in cold weather to heat them up, faults heat up when they slide. The fault movements can be caused by large changes in temperature," Barbot said. "This can create a positive feedback that makes them slide even faster, eventually generating an earthquake."

It's a different way of looking at the San Andreas Fault. Scientists typically focus on movement in the top of Earth's crust, anticipating that its motion in turn rejiggers the rocks deep below. For this study, the scientists looked at the problem from the bottom up.

"It's difficult to make predictions," Barbot added, "so instead of predicting just earthquakes, we're trying to explain all of the different types of motion seen in the ground."

The study was supported by grants from the National Natural Science Foundation of China (NSFC-41674067 and NSFC-U1839211) and the U.S. National Science Foundation (EAR-1848192).

Credit: 
University of Southern California

Identification and treatment key in responding to COVID-19 health anxiety in children

Early identification and treatment are vital to avoid long-term mental health consequences from COVID-19 among children and young people, say researchers.

Writing in the journal Behavioural and Cognitive Psychotherapy, the psychologists from the University of Bath highlight how health anxieties can be triggered by changes like returning to school and argue that young people need time to readjust to routine and to deal with emotions after such a prolonged period at home.

For some, they say, ongoing concerns about health, triggered by the invisible threat posed by COVID-19, could interfere with life and parents and teachers need to be aware of signs such as excessive hand washing, and reassurance-seeking about health-related worries.

Crucially not all children and young people will experience or develop health anxiety, and many have shown remarkable resilience in the face of an unprecedented health crisis. Yet for some, particularly those who are already vulnerable to worrying and anxiety, this year’s tumultuous events are likely to have significantly and negatively impacted them.

Dr Jo Daniels, a clinical psychologist within the Department of Psychology at the University of Bath, who throughout the pandemic has been active in advising and guiding individuals and organisations on responding to COVID-19-related health anxieties, explains: “Children are not immune to worries about their health, or the health of those around them. It is essential that we are able to recognise when normal concerns around COVID-19 become more problematic.

“Signs of stress in children may include tummy ache, sleeping problems and not engaging in normally enjoyable activities; for those particularly affected by health related anxiety, you might expect to see excessive hand-washing, exaggerated avoidance of touching objects for fear of picking up the virus, or repeated reassurance seeking from adults in addition to the usual signs of stress and worry.

“Children may not always be able to describe or verbalise their concerns clearly, so we are looking for marked changes in behaviour or worries that get in the way of living life to the full. Teachers also now have a role in this when children return back to school, as they tend to know the children well and school is where they will be spending a large part of their day.”

The team behind the study suggest health anxieties in children might be triggered by an immediate family member becoming ill, a shielding member of the household, or perhaps because of raised family tensions due to parental health-related worries. In these scenarios they advise parents and teachers to seek professional help where needed.

Their guidance offers suggestions about how cognitive behavioural therapy (CBT), including CBT conducted online or by phone, can be an effective treatment option to address children and young people’s health anxieties.

During the pandemic, the team have previously highlighted mental health vulnerabilities including health anxiety in adults, and loneliness in children and young people.

Dr Maria Loades, also from Bath’s Department of Psychology and who earlier this year published findings about the potential long-term mental health challenges for children and young people as a result of lockdown and loneliness, added: “As children and young people return to school, they need to have the opportunity to catch up, not just academically, but also socially and emotionally.

“A big part of this is having the time and space to connect with one another, through play, which gives them a chance to process the emotions and to share their experiences with others. It will take time for children and young people to adjust. While we want to avoid pathologizing normal responses to the pandemic, in children and young people especially, it is vital to spot the signs and intervene early.”

They recommend that parents or teachers who notice that a child or young person is worried about health should offer them the opportunity to talk about their worries by gently listening to their concerns, and then encouraging them to find ways to gradually face and overcome their fears.

Where a child or young person is seeking excessive reassurance from others, it’s important to remember that although this may help them in the short-term, it can keep their worries going over time. It is understandable to worry about health at this time and they say it is important to work with young people to find ways to resolve and understand their worries. Simple interventions that may be helpful could include correcting misunderstandings surrounding COVID-19 and the necessary precautions.

Although most will overcome their fears without specialist help, for some, their anxiety may get in the way of functioning and cause distress; in this instance, additional help should be sought via health care professionals or teachers.

Dr Loades adds: “We all need to work together to ensure children and young people are able to live their lives to the fullest.”

Credit: 
University of Bath

'Floppy' atomic dynamics help turn heat into electricity

image: Evolution of atomic lattice oscillation waves upon heating the tin sulfide crystal, as measured with neutron scattering.

Image: 
Tyson Lanigan-Atkins, Delaire group, Duke University

DURHAM, N.C. -- Materials scientists at Duke University have uncovered an atomic mechanism that makes certain thermoelectric materials incredibly efficient near high-temperature phase transitions. The information will help fill critical knowledge gaps in the computational modeling of such materials, potentially allowing researchers to discover new and better options for technologies that rely on transforming heat into electricity.

The results appear online on September 4 in the journal Nature Communications.

Thermoelectric materials convert heat into electricity when electrons migrate from the hot side of the material to the cold side. Because a temperature difference between the two sides is required, researchers are interested in using these materials to generate electricity from the heat of a car's tailpipe or to recover energy lost as heat in power plants.

Over the past couple of years, new records were set for thermoelectric efficiency with an emerging material called tin selenide and its sister compound, tin sulfide. The sulfide version is not quite as good a thermoelectric yet, but it is being optimized further because it is cheaper to produce and more environmentally friendly.

While scientists know that both of these compounds are excellent thermoelectric materials, they don't exactly know why. In the new study, Olivier Delaire, associate professor of mechanical engineering and materials science at Duke, and two of his graduate students, Tyson Lanigan-Atkins and Shan Yang, tried to fill in a bit of that knowledge gap.

"We wanted to try to understand why these materials have such low thermal conductivity, which helps enable the strong thermoelectric properties they're known for," said Delaire. "Using a powerful combination of neutron scattering measurements and computer simulations, we discovered that it's related to the material's atomic vibrations at high temperature, which nobody had seen before."

Low thermal conductivity is a necessary ingredient of any good thermoelectric material. Because electricity generation requires a heat differential between its two sides, it makes sense that materials that stop heat from spreading across them would perform well.

To get a view of tin sulfide's atomic vibrations in action, Delaire and Lanigan-Atkins took samples to the High Flux Isotope Reactor at Oak Ridge National Laboratory. By ricocheting neutrons off the tin sulfide's atoms and detecting where they ended up afterward, the researchers could determine where the atoms were and how they were collectively vibrating in the crystal's lattice.

The facilities at ORNL were particularly well-suited for the task. Because the atomic vibrations of tin sulfide are relatively slow, the researchers needed low-energy "cold" neutrons that are delicate enough to see them. And ORNL has some of the best cold-neutron instruments in the world.

"We found that the tin sulfide effectively has certain modes of vibration that are very 'floppy,'" said Delaire. "And that its properties are connected with inherent instability in its crystal lattice."

At lower temperatures, tin sulfide is a layered material with distorted grids of tin and sulfide lying one on top of another, corrugated like an accordion. But at temperatures near its phase transition point of 980 degrees Fahrenheit--which is where thermoelectric generators often operate--that distorted environment begins to break down. The two layers, as if by magic, become undistorted again and more symmetric, which is where the "floppiness" comes into play.

Because the material is sloshing between the two structural arrangements at high temperature, its atoms no longer vibrate together like a well-tuned guitar string and instead become anharmonically damped. To understand this better, think of a car with terrible shocks as having a harmonic vibration -- it will keep bouncing long after going over the slightest bump. But proper shocks will dampen that vibration, making it anharmonic and stopping it from oscillating for a long time.
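
The shock-absorber analogy maps onto standard damped-oscillator math. The sketch below uses the generic textbook decay envelope, exp(-zeta*w*t), with invented numbers; it is not the study's lattice-dynamics model, just an illustration of why stronger damping kills a vibration sooner:

```python
# Shock-absorber analogy (generic damped-oscillator math, not the
# study's lattice model). An underdamped oscillator's amplitude decays
# as exp(-zeta * w * t): the larger the damping ratio zeta, the faster
# the vibration dies out.
import math

def amplitude_envelope(zeta: float, w: float, t: float) -> float:
    """Decay envelope of an underdamped oscillator at time t."""
    return math.exp(-zeta * w * t)

t = 5.0
print(amplitude_envelope(0.01, 1.0, t))  # near 1: "bad shocks", keeps bouncing
print(amplitude_envelope(0.5, 1.0, t))   # much smaller: "good shocks", damped out
```

In this picture, the "floppy" anharmonic modes of tin sulfide behave like the heavily damped case: their vibrations decay quickly, which is exactly what impedes heat transport.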

"Heat waves travel through atomic vibrations in a material," said Delaire. "So when the atomic vibrations in tin sulfide become floppy, they don't transmit vibrations very quickly and they also don't vibrate for very long. That's the root cause of its ability to stop heat from traveling within it."

With these results in hand, Delaire and Yang then sought to confirm and understand them computationally. Using supercomputers at Lawrence Berkeley National Laboratory, Yang was able to reproduce the same anharmonic effects at high temperatures. Besides confirming what they saw in the experiments, Delaire says these updated models will allow researchers to better search for new thermoelectric materials to use in tomorrow's technologies.

"Researchers in the field have not been accounting for strong temperature dependences on heat propagation velocities, and this modeling shows just how important that variable can be," said Delaire. "Adopting these results and other theoretical advances will make it easier for materials scientists to predict other good thermoelectric materials."

Credit: 
Duke University

Scientists predicted new hard and superhard ternary compounds

image: Ternary phase diagram of the W-Mo-B system at 0 K

Image: 
A. Kvashnin et al./Chemistry of Materials

Scientists from the Skolkovo Institute of Science and Technology (Skoltech), Institute of Solid State Chemistry and Mechanochemistry (ISSC SB RAS), Pirogov Medical University and Yerevan State University have predicted new hard and superhard ternary compounds in the tungsten-molybdenum-boron system using computational methods. Their research was published in the journal Chemistry of Materials.

According to Alexander Kvashnin, a senior research scientist at Skoltech and a co-author of the paper, the study is a natural follow-on to lengthy research into binary systems. In pursuit of new materials, the scientists had to create a more complex system by adding a third element, which resulted in strongly altered properties and new compounds. These changes were the focus of interest for the scientists.

The team predicted the structure of potentially superhard ternary compounds in the W-Mo-B system using the USPEX evolutionary algorithm developed by Artem Oganov, a Skoltech professor and a co-author of the paper, and his students.

"We planned to predict a series of ternary compounds that would display better mechanical properties, such as hardness and fracture resistance, as compared to binary compounds. We did predict several ternary compounds which turned out to be high-entropy alloys. The mixing of tungsten and molybdenum atoms produced compounds that were disordered and, therefore, had varying stability depending on temperature," explains Alexander Kvashnin.

Carbides - four- or five-component compounds - are typically classified as high-entropy compounds. Scientists believe that their study is the first step towards finding such compounds among boride systems.

"Obvious prospects of this research may translate into new hard materials outperforming their existing counterparts and withstanding higher temperatures or pressures. Companies such as Gazpromneft may use those materials for drilling or other purposes," adds Christian Tantardini, one of the authors of the paper and an employee of ISSC and Skoltech.

The scientists intend to pursue their research effort. They are eager to find out what happens to even more complex compounds in response to temperature and pressure changes.

Credit: 
Skolkovo Institute of Science and Technology (Skoltech)

Painting with light: Novel nanopillars precisely control intensity of transmitted light

image: Illustration depicts a faithful reproduction of Johannes Vermeer's "Girl With a Pearl Earring" using millions of nanopillars that control both the color and intensity of incident light.

Image: 
T. Xu/Nanjing University

By shining white light on a glass slide stippled with millions of tiny titanium dioxide pillars, researchers at the National Institute of Standards and Technology (NIST) and their collaborators have reproduced with astonishing fidelity the luminous hues and subtle shadings of "Girl With a Pearl Earring," Dutch artist Johannes Vermeer's masterpiece. The approach has potential applications in improving optical communications and making currency harder to counterfeit.

For example, by adding or dropping a particular color, or wavelength, of light traveling in an optical fiber, scientists can control the amount of information carried by the fiber. By altering the intensity, researchers can maintain the brightness of the light signal as it travels long distances in the fiber. The approach might also be used to "paint" paper money with small but intricate color details that a counterfeiter would have great difficulty forging.

Other scientists have previously used tiny pillars, or nanopillars, of varying sizes to trap and emit specific colors when illuminated with white light. The width of the nanopillars, which are about 600 nanometers in height, or less than one-hundredth the diameter of a human hair, determines the specific color of light that a pillar traps and emits. For a demanding test of such a technique, researchers examined how well the nanopillars reproduced the colors of a familiar painting, such as the Vermeer.

Although several teams of researchers had successfully arranged millions of nanopillars whose sizes were tailored to transmit red, green or blue light to create a specific palette of output colors, the scientists had no way to control the intensity of those colors. The intensity, or brightness, of colors determines an image's light and shadow -- its chiaroscuro -- and enhances the ability to convey impressions of perspective and depth, a signature feature of Vermeer's work.

Now, by fabricating nanopillars that not only trap and emit specific colors of light but also change its polarization by varying degrees, the NIST researchers and their collaborators from Nanjing University in China have for the first time demonstrated a way to control both color and intensity. The researchers, who include Amit Agrawal and Wenqi Zhu of NIST and the University of Maryland in College Park, and Henri Lezec of NIST, describe their findings in the September 20 issue of the journal Optica, posted online today.

In their new work, the NIST team fabricated on a glass slide nanopillars of titanium dioxide that had an elliptical cross section rather than a circular one. Circular objects have a single uniform diameter, but elliptical objects have a long axis and a short axis.

The researchers designed the nanopillars so that at different locations their long axis was more aligned or less aligned with the polarization of the incoming white light. (Polarized light is light whose electric field vibrates in a particular direction as it journeys across space.) If the nanopillar's long axis was exactly aligned with the direction of polarization of the incoming light, the polarization of the transmitted light was unaffected. But if the long axis was rotated by some angle -- for instance 20 degrees -- relative to the direction of polarization of the incoming light, the nanopillar rotated the polarization of the incident light by twice that angle -- in this case, 40 degrees.

At each location on the glass slide, the orientation of a nanopillar rotated the polarization of the red, green or blue light it transmitted by a specific amount.

By itself, the rotation imparted by each nanopillar would not alter the intensity of the transmitted light. But in tandem with a special polarizing filter placed on the back of the glass slide, the rotation became the means of controlling that intensity.

The filter was oriented so that it prevented any light that had retained its original polarization from passing through. (Sunglasses work in much the same way: The lenses act as vertically polarized filters, reducing the intensity of horizontally polarized glare.) That would be the case for any place on the glass slide where a nanopillar had left unaltered the polarization of the incident light. Such a region would project as a dark spot on a distant screen.

In places where a nanopillar had rotated the polarization of the incident white light, the filter permitted a certain amount of the red, green or blue light to pass. The amount depended on the rotation angle; the greater the angle, the greater the intensity of the transmitted light. In this way, the team, for the first time, controlled both color and brightness.
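Combining the two ingredients gives a simple closed form. If an idealized pillar rotates the polarization by twice its orientation angle and the analyzer sits 90 degrees from the incoming polarization, Malus's law predicts a transmitted fraction of sin^2(2*theta): zero for an unrotated pillar, maximal at 45 degrees. A numerical sketch of that simplified model (ours, not the paper's design code):

```python
import numpy as np

def transmitted_fraction(pillar_angle_deg):
    """Fraction of light passing the crossed analyzer for a pillar
    oriented at pillar_angle_deg. The pillar (modeled as a half-wave
    plate) rotates the polarization by twice its angle; Malus's law
    then gives cos^2 of the angle remaining between the light and the
    analyzer axis, which is set 90 degrees from the input polarization."""
    rotation = 2.0 * np.deg2rad(pillar_angle_deg)
    return np.sin(rotation) ** 2  # equals cos^2(90 degrees - rotation)

for a in (0, 10, 20, 45):
    print(a, transmitted_fraction(a))
# 0 degrees  -> 0.0 (unrotated light is blocked: a dark spot)
# 45 degrees -> 1.0 (maximum brightness)
```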

Once the NIST researchers had demonstrated the basic design, they created a digital copy of a miniature version of the Vermeer painting, about 1 millimeter long. They then used the digital information to guide the fabrication of a matrix of millions of nanopillars. The researchers represented the color and intensity of each picture element, or pixel, of the Vermeer by a group of five nanopillars -- one red, two green and two blue -- oriented at specific angles to the incoming light. Examining the millimeter-size image that the team had created by shining white light through the nanopillars, the researchers found that they reproduced "Girl with a Pearl Earring" with extreme clarity, even capturing the texture of oil paint on canvas.
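To set a pixel's brightness, the design step must invert that relation: each pillar's orientation is chosen from the brightness its color channel should have. A minimal sketch of that inversion under the same half-wave-plate model, with illustrative brightness values (the function name and the specific numbers are ours; only the five-pillar red/green/green/blue/blue layout comes from the article):

```python
import numpy as np

def angle_for_brightness(target):
    """Pillar orientation (degrees) producing a target brightness
    fraction in [0, 1], inverting I = sin^2(2 * theta) from the
    half-wave-plate-plus-crossed-analyzer model."""
    return np.rad2deg(np.arcsin(np.sqrt(target))) / 2.0

# One hypothetical pixel: per-channel brightness targets (illustrative).
pixel = {"red": 0.80, "green": 0.55, "blue": 0.25}
angles = {color: angle_for_brightness(b) for color, b in pixel.items()}
print(angles)  # orientations between 0 and 45 degrees
```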

"The quality of the reproduction, capturing the subtle color gradations and shadow details, is simply remarkable," said NIST researcher and study co-author Agrawal. "This work quite elegantly bridges the fields of art and nanotechnology."

To construct the nanopillars, Agrawal and his colleagues first deposited an ultrathin polymer layer, just a few hundred nanometers thick, on glass. Using an electron beam like a miniature drill, they then excavated an array of millions of tiny holes of varying dimensions and orientations in the polymer.

Then, using a technique known as atomic layer deposition, they backfilled these holes with titanium dioxide. Finally, the team etched away all of the polymer surrounding the holes, leaving behind millions of tiny pillars of titanium dioxide. The dimension and orientation of each nanopillar represented, respectively, the hue and brightness of the final millimeter-size image.

The nanopillar technique can easily be adapted to transmit specific colors of light, with particular intensities, to communicate information through an optical fiber, or to imprint a valuable item with a miniature, multihued identification mark that would be hard to replicate.

Credit: 
National Institute of Standards and Technology (NIST)