Tech

Surprisingly strong and deformable silicon

image: The production process for strong, deformable silicon pillars (left). The pillars are first etched through a resist, then oxidized and finally cleaned. On the right, the end result can be seen (electron microscope image).

Image: 
ETH Zurich

Since the invention of the MOSFET transistor sixty years ago, the chemical element silicon on which it is based has become an integral part of modern life. It ushered in the computer age, and by now the MOSFET has become the most produced device in history. Silicon is readily available, cheap, and has ideal electrical properties, but it also has one important drawback: it is very brittle and therefore breaks easily. This can become a problem when trying to make micro-electro-mechanical systems (MEMS) from silicon, such as the acceleration sensors in modern smartphones.

At ETH in Zurich, a team led by Jeff Wheeler, Senior Scientist at the Laboratory for Nanometallurgy, together with colleagues at the Laboratory for Mechanics of Materials and Nanostructures at Empa, has shown that, under certain conditions, silicon can be much stronger and more deformable than was previously thought. Their results have recently been published in the scientific journal Nature Communications.

Ten-year effort

"This is the result of a ten-year effort", says Wheeler, who worked as a researcher at Empa prior to his career at ETH. To understand how tiny silicon structures can deform, he took a closer look, within the framework of an SNF project, at a widely used production method: the focused ion beam. Such a beam of charged particles can mill desired shapes into a silicon wafer very effectively, but in doing so it leaves behind distinct traces in the form of surface damage and defects, which cause the material to break more easily.

Lithography with final cleaning

Wheeler and his collaborators had the idea of trying a particular type of lithography as an alternative to the ion beam method. "First, we produce the desired structures - tiny pillars in our case - by etching away the unmasked areas of the silicon surface using a gas plasma", explains Ming Chen, a former PhD student in Wheeler's group. In a further step, the surfaces of the pillars, some of which are narrower than a hundred nanometres, are first oxidized and then cleaned by completely removing the oxide layer with a strong acid.

Chen then studied the strength and plastic deformability of silicon pillars of different widths with an electron microscope and compared the two production methods. To that end, he pressed a tiny diamond punch into the pillars and studied their deformation behaviour in the electron microscope.

Striking results

The results were striking: the pillars that had been milled with an ion beam collapsed at a width of less than half a micrometre. By contrast, the pillars produced by lithography only suffered brittle fractures at widths above four micrometres, while thinner pillars were able to withstand the strain much better. "These lithographic silicon pillars can deform at sizes ten times greater than what we've seen in ion-beam-machined silicon with the same crystal orientation, with double the strength!", Wheeler summarizes the results of his experiments.

The strength of the lithographically produced pillars even reached values that one would expect only in theory, for ideal crystals. What makes the difference here, says Wheeler, is the absolute purity of the pillars' surfaces, which is achieved by the final cleaning step. This results in a much smaller number of surface defects from which a fracture could originate. With the assistance of Alla Sologubenko, a researcher at the microscopy centre ScopeM at ETH, this additional deformability also allowed the team to observe a striking change in deformation mechanisms at smaller sizes. This revealed new details of how silicon can deform.

Applications in smartphones

The results obtained by ETH researchers could have an immediate impact on the fabrication of silicon MEMS, Wheeler says: "In this way, the gyroscopes used in smartphones, which detect rotations of the device, could be made even smaller and more robust." That shouldn't be too difficult to realize, given that industry is already using the combination of etching and cleaning Wheeler and his colleagues investigated. The method could also be applied to other materials having crystal structures similar to that of silicon, the researchers believe.

Moreover, more deformable silicon could also be used to further improve the electrical properties of the material for certain applications. By applying a large strain to the semiconductor, the mobility of its electrons can be increased, which can lead, for instance, to shorter switching times. So far, nanowires had to be produced to achieve this, but now it could be done directly with structures integrated into a semiconductor chip.

Credit: 
ETH Zurich

Scientists present new method for remote sensing of atmospheric dynamics

image: Graduate student Sergei Zenevich, a co-author of the study, is setting up a heterodyne spectrometer for observations on the roof of the Applied Mathematics Building of the Moscow Institute of Physics and Technology

Image: 
Alexander Rodin/MIPT

Physicists from the Moscow Institute of Physics and Technology have developed a new method for the remote measurement of wind speed. It may complement the widely employed lidar and radar sensing techniques. The paper came out in Atmospheric Measurement Techniques.

Wind speed measurements are essential for many applications. For example, assimilation of these data is required for fine-tuning climatological and meteorological models, including those used for weather forecasting. Despite the progress made in remote sensing over recent decades, measuring the movement of air masses is still a challenge. Most of the data are collected by means of traditional contact methods: via sensors installed on weather stations or sounding balloons. Lidar or sonar anemometers are commonly used for local measurements at distances of several hundred meters or less. Weather radars can help at distances of up to tens of kilometers. However, the latter are normally ineffective outside the troposphere -- the Earth's closest atmospheric layer, which is 10 to 18 kilometers thick. Satellite-based direct measurements of the movement of air masses are rare; only occasional experiments have been carried out.

"Information on atmospheric dynamics is still fairly hard to obtain through direct observations. As of today, the most reliable way to remotely measure wind speeds is using Doppler radars. This technique involves sounding the environment with a powerful source of radiation and therefore takes considerable resources, including power, equipment mass, size, and cost. Our instrument offers an advantage in terms of these parameters: It's compact, inexpensive, and involves commercial components available in the telecom market," commented the study's lead author Alexander Rodin, who heads the Applied Infrared Spectroscopy Lab at MIPT.

The instrument is based on the principle of heterodyne detection, which is at work in many radio engineering applications. However, it should be noted that the instrument operates in the optical, or more precisely the near infrared, range -- at a wavelength of 1.65 micrometers. The operating principle is based on combining the received signal (in this case, solar radiation that has passed through the atmosphere) with a reference source (local oscillator), namely a tunable diode laser. Since the laws of electromagnetic wave propagation are the same for all spectral ranges, the principle of heterodyning is equally applicable to both radio signals and infrared radiation.

However, heterodyning faces certain difficulties if applied to the optical range. For instance, highly accurate matching of wave fronts is required, as displacement by even a fraction of a wavelength is unacceptable. The MIPT team employed a simple solution, applying a single-mode optical fiber.

A further challenge is the need for extremely precise frequency control of the local oscillator, with an error of no more than 1 MHz, a tiny quantity compared to the optical radiation frequency. To address this, the team had to employ a tricky approach and delve deep into the processes of diode laser emission. These efforts have resulted in a new instrument -- an experimental laser heterodyne spectroradiometer -- characterized by an unprecedented spectral resolution in the near infrared range. It measures the infrared atmospheric absorption spectrum with an ultra-high spectral resolution, making it possible to retrieve wind speeds with an accuracy of 3 to 5 meters per second.
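
The magnitude of the frequency shifts involved can be sketched with simple arithmetic (an illustration, not the instrument's actual processing): a line-of-sight wind component v Doppler-shifts an absorption line observed at wavelength λ by approximately Δf = v/λ, so at 1.65 micrometers a few meters per second already corresponds to a few megahertz, which is why megahertz-level control of the local oscillator matters.

```python
# Back-of-the-envelope Doppler arithmetic (illustrative only, not the MIPT
# retrieval code): a line-of-sight wind component v shifts a spectral line
# observed at wavelength lam by delta_f = v / lam (from delta_f = f * v / c).

WAVELENGTH = 1.65e-6  # m, the near-infrared operating wavelength from the article

def doppler_shift_hz(wind_speed_m_s: float, wavelength_m: float = WAVELENGTH) -> float:
    """Frequency shift (Hz) produced by a line-of-sight wind component."""
    return wind_speed_m_s / wavelength_m

for v in (1.0, 5.0, 50.0):
    print(f"{v:5.1f} m/s -> {doppler_shift_hz(v) / 1e6:6.2f} MHz")
# 5 m/s -- roughly the retrieval accuracy quoted above -- shifts the line by
# about 3 MHz, comparable in scale to the 1 MHz oscillator stability requirement.
```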

"Building an instrument, even with record characteristics, is only half of the story," Rodin commented. "To retrieve wind speed at various altitudes up to the stratosphere using the measured spectra, you need a special algorithm that solves the inverse problem."

"We decided not to use machine learning but to implement a classical approach based on Tikhonov regularization. Although this method has been known for more than half a century, it is widely used all over the world, and its capabilities are far from exhausted," the scientist insisted.
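
As a generic sketch of the classical approach Rodin mentions (illustrative only; the team's actual retrieval algorithm and atmospheric forward model are not described in detail here), Tikhonov regularization stabilises an ill-posed inverse problem by adding a penalty on the solution norm, turning an unstable fit into a well-conditioned linear solve:

```python
# Generic Tikhonov-regularised least squares (a sketch, not the MIPT code):
# minimise ||A x - b||^2 + alpha * ||x||^2, whose closed-form solution is
# x = (A^T A + alpha * I)^(-1) A^T b.
import numpy as np

def tikhonov_solve(A: np.ndarray, b: np.ndarray, alpha: float) -> np.ndarray:
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Synthetic demonstration: recover a known profile from noisy measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))                # toy forward model
x_true = np.ones(20)                             # "true" profile
b = A @ x_true + 0.05 * rng.standard_normal(40)  # noisy observations
x_hat = tikhonov_solve(A, b, alpha=0.1)
print(round(float(np.linalg.norm(x_hat - x_true)), 3))  # small recovery error
```

The regularisation weight alpha trades fidelity to the data against stability; choosing it well is part of what makes real retrievals difficult.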

The calculations will enable retrieval of vertical wind profiles from the surface up to about 50 kilometers. Because the spectroradiometer is relatively simple and affordable, extensive networks for atmospheric monitoring could be built around it in the future.

The Applied Infrared Spectroscopy Lab at MIPT is planning an observational campaign to measure the stratospheric polar vortex as well as greenhouse gas concentrations in the Russian Arctic with the newly developed instrument. In addition, in cooperation with the Space Research Institute of the Russian Academy of Sciences, the lab is developing an instrument based on the same principle for studying the atmosphere of Venus. The instrument is intended to be installed onboard India's Venus orbiter in the framework of international cooperation.

Credit: 
Moscow Institute of Physics and Technology

Beavers are diverse forest landscapers

image: Diving beetles, frogs and waterbirds thrive in beaver-induced flood sites.

Image: 
Sonja Kivinen

Beavers are ecosystem engineers that cut down trees to build dams, eventually causing floods. Beaver-induced floods make forest landscapes and habitats increasingly diverse, but very little is known about the long-term effects of beavers on European landscapes. Researchers at the University of Eastern Finland and the University of Helsinki examined the history and occurrence of beaver-induced floods and patch dynamics in southern Finland. They used a unique dataset of field observations from 1970 to 2018.

Floods caused by beavers over the course of history form a network of different habitats that remain interconnected even for long periods of time.

"Beavers can help to restore wetland ecosystems and entire boreal forests, and they also help in conserving the biodiversity of these environments," Researcher Sonja Kivinen from the University of Eastern Finland says.

The European beaver was hunted to extinction in 19th-century Finland. Nowadays, the study area is home to the American beaver, which was introduced there in the 1950s. The American beaver builds dams similar to those of its European counterpart.

"The spread of the beaver in our study area has created a diverse and constantly changing mosaic of beaver ponds and beaver meadows of different ages," Kivinen says.

In 49 years, the number of beaver-induced flood sites grew 11-fold

The researchers observed that the number of beaver-induced flood sites grew more than 11-fold over the study period. In addition to creating new flood sites, beavers also often reuse old sites to cause new floods. The duration of an individual flood and the frequency of floods can vary greatly between different flood sites, resulting in an abundance of habitat patches with different environmental conditions.

"Thanks to beaver activity, there is a unique richness of wetlands in the forest landscape: flowages dominated by bushes, beaver meadows, and deadwood that can be used by various other species," University Lecturer Petri Nummi from the University of Helsinki says.

Indeed, beaver-induced disturbances diversify the forest landscape more predictably than, for example, forest fires or storms.

Credit: 
University of Eastern Finland

Self-healing bone cement

image: Prof. Dr Frank A. Mueller, Chair of Colloids, Surfaces and Interfaces of Friedrich Schiller University Jena (Germany).

Image: 
Anne Guenther

Our body is able to treat many injuries and wounds all by itself. Self-healing powers repair skin abrasions and enable bones to grow back together. However, doctors often have to lend a helping hand to repair bones after a fracture or due to a defect. Increasingly, bone replacement materials are being used, which partially or completely restore the form and function of the bone at the site of the damage. To ensure that such implants do not have to be replaced or repaired through extensive surgery in the event of damage, they should themselves possess self-healing capabilities. Materials scientists at Friedrich Schiller University Jena have now developed a bone replacement material that minimises the extent of damage to it and at the same time repairs itself. They report on their research in the journal "Scientific Reports".

Minimally invasive use of calcium phosphate cement

The experts from Jena, who collaborated with colleagues from the University of Würzburg as part of the German Research Foundation's priority research programme "Self-healing Materials", concentrated on what is called calcium phosphate cement - a bone substitute that is already widely used in medicine. The material not only stimulates bone formation and the ingrowth of blood vessels; it can also be introduced into the body as a paste in a minimally invasive procedure. There, its malleability allows it to bind closely to the bone structure.

"Due to its high degree of brittleness, however, cracks form in the material when it is subjected to excessive load. These cracks can quickly widen, destabilise the implant and ultimately destroy it - similar to concrete on buildings," explains Prof. Frank A. Müller from the University of Jena. "For this reason, calcium phosphate cement has so far mainly been used on bones that do not play a load-bearing role in the skeleton, for example in the mouth and jaw area."

Bridging and refilling cracks

The material scientists in Jena have now developed a calcium phosphate cement in which any cracks do not develop into catastrophic damage. Instead, the material itself seals them. The reason for this is carbon fibres that have been added to the material.

"Firstly, these fibres significantly increase the damage tolerance of the cement, because they bridge cracks as they form and thus prevent them from opening further," Müller explains. "Secondly, we have chemically activated the surface of the fibres. This means that as soon as the exposed fibres encounter body fluid, which collects in the openings created by the cracks, a mineralisation process is initiated. The resulting apatite - a fundamental building block of bone tissue - then closes the crack again."

The Jena scientists replicated this process in their experiments by deliberately damaging the calcium phosphate cement and healing it in simulated body fluid. This intrinsic self-healing ability - and the greater load-bearing capacity associated with fibre reinforcement - could considerably expand the areas in which bone implants made of calcium phosphate cement can be used, possibly including load-bearing areas of the skeleton in the future.

Credit: 
Friedrich-Schiller-Universitaet Jena

COVID-19 antibody tests: How reliable are they?

With stay-at-home orders expiring around the world, many hope that COVID-19 antibody testing will help businesses and institutions reopen safely. Determining whether people have been infected with SARS-CoV-2 is a key tool in responding to the pandemic, but it is not a magic bullet. A feature article in Chemical & Engineering News, the weekly newsmagazine of the American Chemical Society, details the steps manufacturers are taking to ensure antibody tests are accurate and available. 

The U.S. Food and Drug Administration has issued several emergency use authorizations to companies for antibody tests, allowing them to be used during the pandemic. These blood-based tests measure the presence or absence of antibodies that the body produces when fighting infection with SARS-CoV-2, and the tests are distinct from the nasal swabs that detect current infection. After a person recovers from COVID-19, antibodies are thought to remain in their system for weeks, writes Senior Editor Megha Satyanarayana. Tests vary in their mechanisms for detecting antibodies, but all separate antibody-containing plasma from red blood cells and then determine whether the antibodies "stick" to specific SARS-CoV-2 proteins. That binding action is detected with fluorescent or other readout methods.

Manufacturers have been working around the clock to produce tests that are fast, automated and can be used with equipment that hospitals and clinical labs already have. While some of the early tests have had questionable accuracy, much progress has been made in recent months, and some manufacturers are reporting sensitivities and specificities near 100%. More reliable tests mean a better understanding of community spread, but they cannot provide information about the possibility of reinfection, a key question for those looking for guidance on reopening. However, experts believe that humans can produce large amounts of SARS-CoV-2 antibodies, and the virus is slow to mutate, which bodes well for long-term protection. While companies are working to maximize their production, experts caution that requiring proof of immunity could pose an ethical dilemma that might lead to further disenfranchisement of an already stressed workforce. 
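
What "near 100%" means in practice depends on how common antibodies are among those tested. A minimal, illustrative calculation (with made-up numbers, not figures from the article) shows how sensitivity, specificity and prevalence combine into the positive predictive value, the chance that a positive result is a true positive:

```python
# Illustrative Bayes arithmetic (hypothetical numbers, not from the article).
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Probability that a positive antibody test reflects a true past infection."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Even a test with 98% sensitivity and 98% specificity yields a coin-flip
# PPV when only 2% of the tested population actually has antibodies.
print(round(ppv(0.98, 0.98, 0.02), 3))  # 0.5
```

This is why accuracy claims for antibody tests are best read alongside an estimate of community spread.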

Credit: 
American Chemical Society

Extinct camelids reveal insights about North America's ancient savannas

image: Skull of Camelops, one of the now extinct North American camelids. Used with permission from the American Museum of Natural History.

Image: 
Christine Janis

A new study looking at extinct camelids - ancestors of today's camels and llamas - tells the story of North America's ancient savannas and highlights how past climatic and environmental conditions influenced the composition of mammalian faunas.

Although savanna habitats (treed grasslands) are only found in the tropics today, around 18 million years ago, during the Miocene epoch, savanna ecosystems, similar to those of modern Africa, existed in the mid latitudes of North America. At their peak - around 12 million years ago - they were comparable in their mammalian diversity to that of the Serengeti today.

The study, published in Frontiers in Earth Science, is the work of palaeobiologists at the University of Bristol and the University of Helsinki. It provides the first quantitative characterisation of the ecomorphology of a group of large herbivorous ungulates (i.e. hoofed mammals) known as artiodactyls, which includes camels and antelope, from ancient North American savannas and how they compare with their counterparts from the present-day African savannas, such as the Serengeti.

Lead author of the research, Nuria Melisa Morales García from the University of Bristol, said: "The North American savannas housed a vast diversity of camelids. In fact, camelids actually originated and first diversified in North America where they lived for more than 40 million years and were incredibly successful and widespread."

The researchers measured the skulls, jaws and limb bones of dozens of extinct North American artiodactyls, including camelids, and compared them with those living today in the Serengeti savanna of East Africa. The researchers recorded data on body size and on aspects of the anatomy of the animals that are linked with their ecology.

"The Serengeti mammals are very well known to research: we know how they live, how they eat and we have all their measurements. By using what we know about them, we can make solid inferences on how the extinct artiodactyls of North America were behaving," said Professor Christine Janis, from the University of Bristol's School of Earth Sciences and supervising author of the study.

The analysis showed that while there was considerable overlap between the ecologies of extinct and modern species, the majority of extinct camelids were most similar to the modern common eland, an arid-adapted antelope with a diet of grass and leaves. This reveals important information about the ecosystem they inhabited and suggests the North American savannas were drier than modern African savannas (a notion supported by other research).

"We also studied how these faunas were affected by the climatic changes of the Neogene: as temperatures dropped and conditions became more arid, these faunas became more depauperate - lacking in number and diversity. Camels still dominated in these faunas, but the diversity of all ungulates took a big hit. Our study shows how ungulate faunas responded to a particular scenario of climate change which, now more than ever, is extremely relevant in understanding what is to come," said Morales-García.

Credit: 
University of Bristol

People make irrational trust decisions precisely: "shouting" and spelling mistakes add together to make online health information appear doubly less trustworthy

Online health information is deemed doubly less trustworthy if the text includes both "shouting" and spelling errors together, according to a new study at Brighton and Sussex Medical School (BSMS).

As the world desperately seeks answers to its questions about the coronavirus, this timely study shows how vital it is for anyone giving valid health advice online to understand how readers judge the backdrop and atmosphere surrounding the presented information as well as the words themselves.

Dr Harry J Witchel, an expert in body language at BSMS and lead author of the study, said, "This is all about trust, which is vital at the moment. If you're reading something online and you instinctively don't believe what it's saying, then you won't follow the advice. If the advice is genuine and important, then that's a real problem, particularly at present when people are dying because others aren't following important guidance.

"We've known for some time that people profoundly alter their judgments about what they hear based on contextual cues rather than just the content of what is said. But this research looks at how 'shouting' - using capital letters - and typographic errors both reduce the credibility of what is being read, and is the first to show that the effects of both these errors add together quite precisely -- as though readers were keeping score in their minds of all these little things.

"On the back of this research, my advice to any government or medical professional giving online advice on COVID-19 would be - research your audience!"

Published in the Journal of Medical Internet Research, the study asked 301 healthy participants to read information on a health forum about multiple sclerosis, ranking it for trustworthiness.

They were asked to rate various paragraphs online in terms of how much they trusted the paragraph; however, the volunteers were not told that some of the paragraphs had typographic spelling errors, a few words of "shouting" text (all caps), or a combination of both types of errors.

The results showed that spelling mistakes alone made the copy appear less trustworthy by 9%, "shouting" made it less trustworthy by 6%, and a combination of these errors made it less trustworthy by 14%, showing an additive effect.
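
The additive claim can be checked directly from the reported figures (a simple illustrative reading, not the study's statistical analysis):

```python
# The study's reported trust penalties, treated as an additive model
# (illustrative arithmetic only).
spelling_penalty = 0.09   # trust reduction from spelling mistakes alone
shouting_penalty = 0.06   # trust reduction from all-caps "shouting" alone
combined_observed = 0.14  # observed reduction when both are present

predicted = spelling_penalty + shouting_penalty
print(round(predicted, 2), combined_observed)  # 0.15 predicted vs 0.14 observed
```

The predicted and observed combined penalties differ by only about one percentage point, consistent with readers "keeping score" of each error independently.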

Credit: 
University of Sussex

What can maritime shipping learn from brain network science?

image: Representation of the global liner shipping maritime network and its structural core. The color of the nodes corresponds to the ports belonging to different modular communities.

Image: 
© BIOTEC/TU Dresden

Around 80 per cent of global trade by volume is transported by sea, and thus the connectivity network of the maritime transportation system is fundamental to the world economy and to the functioning of society. To better exploit new international shipping routes, the current ones need to be analysed: What are the principles behind their network organisation? What mechanisms determine this complex system's association with international trade? There is, however, another complex system that, similarly to maritime transportation systems, links the navigability of its network structure and organisation to its efficient performance in the environment. This complex system is the brain. The motivation for this comparative and trans-disciplinary research came from an exchange during an international network science conference, followed by three years of collaborative work on the topic.

"Many complex systems share basic rules of self-organisation and economical functionality. When I examined the maritime network structure presented by our Chinese colleagues at the conference, I advanced the hypothesis that its structure displays a trade-off between high transportation efficiency and low wiring cost, similar to the one we know is present in brain networks. We combined our knowledge in network science, maritime science and data processing, which led to new insights into the maritime network structural organisation complexity and its relevance to international trade", explains Dr. Cannistraci, research group leader for biomedical cybernetics at BIOTEC, TU Dresden. "An important result of this study is the development of new computational network measures for the investigation of modular connectivity and structural core organisation within complex networks in general, which here we applied to maritime science. In future projects, I plan to use these newly developed methods in my research at the BIOTEC, where I focus on computational and network systems in biomedicine. They might turn out to be particularly useful for analysing brain network organisation and developing markers for brain diseases, such as depression and Alzheimer's disease."
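
The efficiency/cost trade-off described above can be made concrete with toy networks (an illustrative sketch using a standard network-science measure, not one of the new measures developed in the study): global efficiency is the mean inverse shortest-path length over node pairs, and edge count stands in for wiring cost. A hub-and-spoke layout matches a ring's efficiency while using fewer links:

```python
# Toy efficiency-vs-cost comparison (illustrative only).
from collections import deque

def bfs_distances(adj, src):
    """Hop distances from src to every reachable node."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def global_efficiency(adj):
    """Mean of 1/d(i, j) over all ordered node pairs."""
    n = len(adj)
    total = 0.0
    for u in adj:
        dist = bfs_distances(adj, u)
        total += sum(1.0 / d for v, d in dist.items() if v != u)
    return total / (n * (n - 1))

def edge_count(adj):
    return sum(len(neighbours) for neighbours in adj.values()) // 2

# Six "ports" wired as a ring versus as a hub-and-spoke network.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
hub = {0: [1, 2, 3, 4, 5], **{i: [0] for i in range(1, 6)}}

for name, net in (("ring", ring), ("hub", hub)):
    print(name, edge_count(net), round(global_efficiency(net), 3))
# Both reach efficiency 0.667, but the hub does it with 5 edges to the ring's 6.
```

Real maritime and brain networks balance the same kind of trade-off at far larger scale, with hubs buying navigability at the cost of concentrated load.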

Credit: 
Technische Universität Dresden

Population ecology: Origins of genetic variability in seals

A new study led by researchers at Ludwig-Maximilians-Universitaet (LMU) in Munich shows that fluctuations in population sizes in the past have had a significant effect on contemporary seal populations, and estimates the risk of genetic impoverishment in the species investigated.

In the course of Earth's history, evolution has given rise to an enormous range of biological diversity, which in turn enabled the emergence of complex, species-rich ecosystems. The availability of adequate levels of genetic variation is a basic prerequisite for evolution. Higher levels of genetic variability therefore increase the probability that any given population will be able to adapt to new environmental conditions and remain evolutionarily flexible. Scientists led by LMU evolutionary biologist Jochen Wolf have examined the genetic variability of multiple seal species and show that a large part of today's variation is due to historical fluctuations in population sizes. In addition, the authors use the results of their genomic analyses to derive a parameter that allows them to assess the risk that genetic impoverishment and inbreeding pose to seal populations today. The new study appears in the journal Nature Ecology & Evolution.

Genetic variation is the product of random mutations, which are passed down from generation to generation. However, mutations can also be lost, owing to the effects of 'genetic bottlenecks', for instance. Such bottlenecks can occur when a large fraction of the population is lost. "It is generally assumed that populations that are made up of many individuals are likely to exhibit high levels of genetic variability," says Wolf. "We have now tested this assumption for 17 species of seals, by analyzing the genetic differences between 458 animals from 36 populations."

Since the genetic variation found in present-day populations can tell us a great deal about the genetic make-up of their ancestors, the authors of the study were able to deduce from their data how different populations have changed with time. "Genetic data are like a microscope that allows us to peer into the past," says Wolf. "The greater the differences between the genomic sequences, the farther back in time their last common ancestor lived. Our analyses enable us to look back thousands and even millions of years, and we can see that many populations must have gone through very narrow genetic bottlenecks - in other words, were drastically reduced in size - while others experienced significant expansions."

The researchers use the 'effective population size' as a measure of the extent of genetic variation within a population. This parameter is defined as the number of individuals that, under theoretically ideal conditions, would be expected to exhibit the same level of genetic variance as the real population of interest. The effective population size is related to, but much smaller than, the actual size of the real population, because the parameter includes the effects of factors such as reproductive behavior. Male seals in some species compete aggressively for females. That implies that the less dominant males may have no chance to reproduce, which in turn reduces the range of genetic variation in the following generation. "We assessed the impact of such effects, but our analyses indicate that the amounts of genetic variation in modern seals have been influenced mainly by historical fluctuations in population sizes, which are probably related to changes in the climate," says Wolf.
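
Why past bottlenecks leave such a lasting imprint follows from a standard population-genetics result (textbook arithmetic, not a calculation from the study): over many generations, the long-term effective population size is the harmonic mean of the per-generation sizes, so a single crash depresses it far more than a boom raises it.

```python
# Harmonic-mean effective size across generations (standard textbook formula;
# the generation sizes below are hypothetical).
def harmonic_mean_ne(sizes):
    return len(sizes) / sum(1.0 / n for n in sizes)

stable = [10_000, 10_000, 10_000, 10_000]
bottleneck = [10_000, 10_000, 100, 10_000]  # one crash generation

print(round(harmonic_mean_ne(stable)))      # 10000
print(round(harmonic_mean_ne(bottleneck)))  # 388 -- dominated by the crash
```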

The ratio of the effective to the actual population size is often used to infer whether or not a given population possesses enough genetic variability to survive in the longer term. A very low quotient serves as a warning signal, since populations with low levels of variation are especially susceptible to inbreeding effects which, among other things, increase the risk of disease. "Most genetic studies undertaken in the context of conservation assess the level of genetic variability only across a few generations," says Wolf. "Our investigation, on the other hand, extends much further back in time. So we were able to take fluctuations in population sizes into account, and could calculate the population sizes we would expect to find today, given the observed genetic variability."
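
The warning indicator described above can be sketched in a few lines (hypothetical numbers and an arbitrary illustrative threshold, not the study's actual procedure or cut-off):

```python
# Sketch of an Ne/Nc screening indicator (hypothetical data and threshold).
def ne_nc_ratio(effective_size: float, census_size: float) -> float:
    return effective_size / census_size

RISK_THRESHOLD = 0.01  # arbitrary illustrative cut-off

populations = {
    "population A": (8_000, 100_000),  # (Ne from genetics, census count)
    "population B": (150, 50_000),
}
for name, (ne, nc) in populations.items():
    ratio = ne_nc_ratio(ne, nc)
    flag = "warning" if ratio < RISK_THRESHOLD else "ok"
    print(f"{name}: Ne/Nc = {ratio:.4f} ({flag})")
```

As the Saimaa and Galapagos examples show, though, the raw ratio can mislead in both directions, which is why the study compares expected and actual sizes directly.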

The expected population sizes were then compared with the actual sizes by means of a complex statistical procedure, which reveals whether the extant population is larger or smaller than the expected value. "This then tells us if a population is at risk because its current size is much too small to sustain that particular species in the longer term," says Wolf. In this context, the absolute number of individuals can be misleading. For instance, only 400 Saimaa ringed seals survive in the wild, and the species is regarded as endangered. "From a genetic point of view, however, despite their small number, we do not expect them to run into problems in the near future, as the animals are highly variable," says Wolf. The indications are that they settled in their present habitat only a short time ago - in evolutionary terms - and they retain the full range of variation that characterized their ancestors. The situation in the Galapagos is quite different. There too, seal and sea lion populations are small, but their levels of genetic variability are also low - a factor which is not reflected in the value of the conventional ratio of effective to actual population size. The study shows that comparative genomic analyses of animal populations constitute an important tool for identifying vulnerable populations so that protective measures can be taken.

Credit: 
Ludwig-Maximilians-Universität München

New procedure 'rewires' the heart to prevent recurrent fainting spells

A procedure conducted for the first time in the United States at University of Chicago Medicine has provided much-needed relief for a patient who suffered from recurrent fainting spells.

Called cardioneural ablation, the procedure essentially rewired the heart to treat the recurring sudden drops in heart rate and blood pressure that had been causing the 52-year-old woman to faint at least once every two months for most of her life.

The ablation was performed by Roderick Tung, MD, an internationally known expert on advanced therapies for heart rhythm disorders. The procedure had previously been performed in Europe, South America and Asia, but never in the U.S. Tung and his team, however, wanted to help give the patient the relief she could not find with other therapies.

Since having the ablation a year and a half ago, the woman has not fainted, even in situations where she would normally faint, such as having her blood drawn.

"We have been following the data from other countries very carefully, and it's really exciting," said Tung, Director of Cardiac Electrophysiology at UChicago Medicine. "Now we have shown for the first time in the United States that this could be a viable therapy for patients who haven't responded to other treatments."

A case report about the patient and the procedure was published June 10 in JACC Case Reports.

The woman initially came to Tung because she had suffered from what is called vasovagal syncope for most of her life. The condition causes people to faint when certain triggers, like the sight of blood, cause their heart rate and blood pressure to drop suddenly and reduce blood flow to the brain.

The woman had tried several therapies to treat the condition, including medications and a dual-chamber pacemaker, but nothing seemed to work. She had read about cardioneural ablation therapy and approached Tung to see if he and his team would consider the procedure.

Tung is an expert on cardiac ablation, a minimally invasive procedure in which a catheter is threaded into the heart and heat is used to destroy tissue to restore correct heart rhythms. Physicians in other countries had shown that using the same technique to target ganglionated plexi (GP) - clusters of neurons in the heart - could provide relief to those suffering from vasovagal syncope.

Tung agreed to the procedure on a compassionate-need basis, as he had followed the evolving field through international publications. "We wanted to see if this was feasible as a treatment in desperate situations," he said.

There is no agreement on just how to use ablation to target GP, however; the neurons are located within the heart in a complex network. Tung and his team used high-frequency stimulation in the left upper chamber of the patient's heart to find areas that had the most nerves that slowed down the heart's rhythm. They then used cardiac ablation techniques to target three of those areas to remove the response. When they stimulated those areas after the procedure, they found that they no longer slowed the heart's rhythm.

"We are rewiring the heart to get rid of the excessive autonomic tone that slows heart rhythms and lowers blood pressure, culminating in simple fainting," Tung said. "There is a yin and yang of autonomic tone in the heart, and too much of the slowing that counterbalances adrenaline responses leads to simple faints. To date, there are no established therapies for this frustrating condition, except behavioral modification."

A month after the procedure, the patient returned to undergo a tilt-table test, in which a person lies on a table and is then tilted up to stimulate the reflex that can cause fainting. The patient had undergone the test before the procedure and had fainted, but after the procedure, she did not faint, and when the test was performed again a year later, she still didn't faint. In fact, she hasn't had any fainting episodes since the procedure was performed.

Tung cautions that the procedure is not for everyone, and that much more research is needed before it is considered for standard clinical practice. He hopes to perform a controlled study in the future using patients who have not had success with other therapies.

"We want to make sure this procedure is safe and feasible, and we definitely do not recommend this for everyone who has had recurrent faints," he said. "It is a very specific physiologic response that we are looking for, but we are quite encouraged with this result. It may signal a paradigm shift in the way we think about rewiring the balance of nerves that regulate the heart."

Credit: 
University of Chicago Medical Center

Acoustics put a fresh spin on electron transitions

ITHACA, N.Y. - Electrons are very much at the mercy of magnetic fields, which scientists can manipulate to control the electrons and their angular momentum - i.e. their "spin."

A Cornell team led by Greg Fuchs, assistant professor of applied and engineering physics in the College of Engineering, in 2013 invented a new way to exert this control by using acoustic waves generated by mechanical resonators. That approach enabled the team to control electron spin transitions (also known as spin resonance) that otherwise wouldn't be possible through conventional magnetic behavior.

The finding was a boon for anyone looking to build quantum sensors of the sort used in mobile navigation devices. However, such devices still required a magnetic control field - and therefore a bulky magnetic antenna - to drive certain spin transitions.

Now, Fuchs's group has shown that these transitions can be driven solely by acoustics. This eliminates the need for the magnetic antenna, enabling engineers to build smaller, more power-efficient acoustic sensors that can be packed more tightly on a single device.

The team's paper, "Acoustically Driving the Single Quantum Spin Transition of Diamond Nitrogen-Vacancy Centers," published May 27 in Physical Review Applied.

"You can use a magnetic field to drive these spin transitions, but a magnetic field is actually a very extended, big object," Fuchs said. "In contrast, acoustic waves can be very confined. So if you're thinking about controlling different regions of spins inside your chip, locally and independently, then doing it with acoustic waves is a sensible approach."

In order to drive the electron spin transitions, Fuchs and Huiyao Chen '20, the paper's lead author, used nitrogen-vacancy (NV) centers, which are defects in the crystal lattice of a diamond. The acoustic resonators are microelectromechanical systems (MEMS) devices equipped with a transducer. When voltage is applied, the device vibrates, sending acoustic waves of 2 to 3 gigahertz into the crystal. These frequencies cause strain and stress in the defect, which results in the electron spin resonance.

One complication: This process also excites the magnetic field, so the researchers have never been entirely sure of the effect of the mechanical vibrations versus the effect of the magnetic oscillations. So Fuchs and Chen set out to painstakingly measure the coupling between the acoustic waves and the spin transition, and compare it to the calculations proposed by theoretical physicists.

"We were able to separately establish the magnetic part and the acoustic part, and thereby measure that unknown coefficient that determines how strongly the single quantum transition couples to acoustic waves," Fuchs said. "The answer was, to our surprise and delight, that it's an order of magnitude larger than predicted. That means that you can indeed design fully acoustic spin resonance devices that would make excellent magnetic field sensors, for instance, but you don't need a magnetic control field to run them."

Fuchs is working with Cornell's Center for Technology Licensing to patent the discovery, which could have important applications in navigation technology.

"There's a significant effort nationwide to make highly stable magnetic field sensors with diamond NV centers," Fuchs said. "People are already building these devices based on conventional magnetic resonance using magnetic antennas. I think our discovery is going to have tremendous benefit in terms of how compact you can make it and the ability to make independent sensors that are closely spaced."

Credit: 
Cornell University

Three research groups, two kinds of electronic properties, one material

image: It consists of six Dirac cones (located on the dashed circle) that represent the crystalline topological states, coexisting with a 1D linear spectrum (the X in the middle) that represents the helical metallic channel at the step edge.

Image: 
© MPI CPfS

This is the story of a unique material - made of a single compound, it conducts electrons in different ways on its different surfaces and doesn't conduct at all in its middle. It is also the story of three research groups - two at the Weizmann Institute of Science and one in Germany - and of the unique bond that has formed between them.

The material belongs to a group of materials, discovered a decade and a half ago, known as topological insulators. These materials are conducting on their surfaces and insulating in their interior, or "bulk." But the two properties are inseparable: Cut the material, and the new surface will be conducting while the bulk remains insulating.

Some five years ago, Dr. Nurit Avraham was starting out as a staff scientist in the new group of Dr. Haim Beidenkopf of the Institute's Condensed Matter Physics Department. Around that time, she and Beidenkopf met Prof. Binghai Yan during his first scientific visit to the Weizmann Institute. Back then, Yan was working as a junior group leader in the group of Prof. Claudia Felser, a materials scientist who was developing new kinds of topological materials in her lab at the Max Planck Institute for Chemical Physics of Solids in Dresden. Beidenkopf and his group specialize in classifying and measuring these materials on the scale of single atoms and the paths of single electrons, while Yan was turning to theory - predicting how these materials should behave and working out the mathematical models that explain their unusual behavior.

Avraham and Beidenkopf were interested in uncovering the properties of a special type of topological insulator in which the chemical structure is organized in layers. How would the layers affect the way that electrons were conducted over the surface of the material? Theoretically, stacking layers of a 2D topological insulator was expected to form a 3D topological insulator in which some of the surfaces are conducting and some are insulating. Yan suggested they work with a new material he had predicted, which was later developed in Felser's lab. Soon the Weizmann and Max Planck groups started collaborating.

Avraham led the project, obtaining samples of the material from Felser's lab, undertaking the measurements, and working with Yan to see whether the theory's predictions would be borne out experimentally. As the collaboration deepened, Beidenkopf and Avraham got the Faculty of Physics to invite Yan to the Institute again, and this visit eventually led Yan to decide to leave Germany and move his family to Rehovot to take up a position in the Institute's Condensed Matter Physics Department. "That decision was a turning point that would set me on my present career path," says Yan.

Over the following years, Beidenkopf, Avraham, Yan and Felser would collaborate on multiple research projects, exploring the properties of several different classes of topological materials. But understanding this particular material - a compound of bismuth, tellurium and iodine - would turn out to be a long-term project. To begin with, Yan analyzed the band structure of the material - in other words, the states electrons are "allowed" to inhabit. When the bands cross in the bulk - so-called "band inversion" - they prevent electrons from moving around inside, but enable them to move on the surface. This "projection" of a state arising in the bulk of a material onto the surface is what gives topological materials their special properties.

Avraham and Beidenkopf worked with samples that had been cleaved, exposing fresh surfaces of the layered structure. They used a scanning tunneling microscope - STM - in their lab to track the electron density in the different parts of the material. The theory predicted that the surface measurements would reveal a material that behaves as a weak topological insulator - metallic on the edges and insulating on the top and bottom surfaces. Weak topological insulators were a class of topological materials that had been predicted but not yet proven experimentally, so the group was hoping to uncover the characteristic properties on the edge surfaces. The researchers did, indeed, find that the material acted as a weak topological insulator on its cleaved sides. But on the tops and bottoms of their samples, the group found evidence of a strong topological insulator, rather than the weak insulator that had been predicted.

Could this one material not only be insulating and conducting at the same time, but conduct in two different ways? As the researchers continued to experiment, testing the material with different methods and confirming their original results, they and Yan continued to puzzle over the strange findings. At one point, says Avraham, they even measured a new batch of samples that had been grown independently by Junior Prof. Anna Isaeva and Dr. Alexander Zeugner at the Technische Universität Dresden, just to be sure the results were general and not an accidental property of one particular batch of samples.

Part of their eventual breakthrough, says Yan, came from a theoretical research paper published by another physics group that conjectured how such a dual material might function. Topological materials are sometimes classified according to their symmetry - a property of the atomic structure of the material. The scientists looked for places on the surfaces where any such symmetry would be broken due to flaws or irregularities which, by scattering electrons, would affect the properties in that spot and highlight the type of symmetry "protecting" each topological state.

Finally, theory and experiment came together to show, in an article published in Nature Materials, that the material is, indeed, two different kinds of topological insulator in one. The exposed layers of the cleaved side surfaces create "step-edges" that channel the electrons into certain paths. While the sides are protected by both time-reversal and translational symmetry, the tops and bottoms are protected by crystalline mirror symmetry, giving rise to a metal-like state in which the electrons can move.

While this two-in-one combination made it challenging to classify the material topologically - one of the main goals of such measurements - the researchers believe that other new topological materials could turn out to have such dual properties. That opens the possibility of engineering materials to have several desired electrical properties all in one.

"Technically, the work was challenging, but the story, itself, turned out to be simple," says Yan.

"It's also the story of a great friendship and what happens when you can have such close scientific collaboration," says Avraham.

"And it all started with a question about a particular kind of material," adds Beidenkopf.

Credit: 
Max Planck Institute for Chemical Physics of Solids

Bedrock type under forests greatly affects tree growth, species, carbon storage

image: The forest inventory plots in the Ridge and Valley physiographic province in Pennsylvania used in this study are shown. Inventory plots were restricted to land owned and managed by the Pennsylvania Department of Conservation and Natural Resources Bureau of Forestry and the Pennsylvania Game Commission, underlain by shale and sandstone bedrock.

Image: 
Margot Kaye Research Group, Penn State

A forest's ability to store carbon depends significantly on the bedrock beneath it, according to Penn State researchers who studied forest productivity, composition and associated physical characteristics of rocks in the Appalachian Ridge and Valley Region of Pennsylvania.

The results have implications for forest management, researchers suggest, because forests growing on shale bedrock store 25% more live, aboveground carbon and grow faster, taking up about 55% more carbon each year than forests growing on sandstone bedrock.

The findings demonstrate that forests underlain by shale in this region provide more ecosystem services such as carbon uptake and biodiversity, explained researcher Margot Kaye, associate professor of forest ecology in the College of Agricultural Sciences. Also, shale forests make up a smaller portion of the landscape and should be high-priority candidates for management or conservation.

"As forests grow and respond to warming, shifts in precipitation and invasive species, managers will benefit from incorporating lithological influences and considerations on forest composition and productivity," she said. "For example, conserving forests growing on shale with higher species diversity will likely lead to forests that are resilient to stressors and can grow more vigorously."

Forest managers -- now realizing the disparity in productivity -- may target forests growing over shale for conservation and carbon sequestration, Kaye contends. In contrast, they may decide that forests growing over sandstone are better suited for wildlife habitat management or recreation.

To reach their conclusions, researchers analyzed forest inventory data from 565 plots on state forest and game lands managed by the Pennsylvania Department of Conservation and Natural Resources and the state Game Commission in the Appalachian Ridge and Valley Region. They used a suite of GIS-derived landscape metrics, including measures of climate, topography and soil physical properties, to identify drivers of live forest carbon dynamics in relation to bedrock.

Those forest plots contained more than 23,000 trees, ranging from 20 to 200 years old, with most being 81 to 120 years old, according to the most recent available forest inventory data. In the study dataset, 381 plots were on sandstone bedrock and 184 were on shale -- a similar ratio to the amount of Pennsylvania public land on each bedrock type in the Ridge and Valley Region. There are 812,964 acres of forest on sandstone and 262,025 acres of forest on shale in the region.

"That is an eye-opening number," said lead researcher Warren Reed, a doctoral student in ecology.

While forests underlain by both shale and sandstone bedrock were oak dominated, the tree communities are quite different, Reed pointed out. Northern red oak is more dominant on shale bedrock, and chestnut oak dominates on sandstone. Most species in the forest tend to be more productive on shale, and the diversity of tree species is higher in sites on shale bedrock.

Forests grow faster over shale bedrock than sandstone bedrock because of soil characteristics that generally make water more available to trees, Reed hypothesized. Over millions of years, bedrock breaks down, becomes parent material and soils develop. Because of the composition of the rock types, shales break down into soils with finer texture than sandstone, which is coarser.

Forests above shale bedrock growing in finer soils typically have better access to water during the growing season.

"We see this across the landscape, so forest productivity is indirectly related to bedrock," Reed said. "Oaks growing on sandstone are more sensitive to annual climate and water availability -- or put differently, oak growth on sandstone is more limited by water than on shale."

The findings of the research, recently published in Forest Ecology and Management, are exciting, Reed noted, because the information about underlying bedrock type has been readily available but previously not used to understand forest growth. Maps showing the locations of bedrock types have existed for decades. But the magnitude of the forest differences due to bedrock is quite surprising, he said.

The concept of geologic influences on forest growth will be especially valuable in Pennsylvania, Reed said, because it is a major producer of hardwood lumber, and the state has so much forest growing on its portion of the Appalachian Ridge and Valley Region. The Ridge and Valley is a major portion of the forested Appalachian Mountains, so these rules should apply from southern New York to northern Georgia within that landscape.

"Sequestering carbon in forests is one of the many nature-based solutions we have to combat global climate change," he said. "I believe this is an ecosystem service that will continue to gain traction and eventually greater market value."

Credit: 
Penn State

Government health, safety regulations backfire with conservatives, study shows

Health and safety risks from product consumption, including obesity, vaping, drug misuse and texting while driving, as well as circumstances surrounding the coronavirus pandemic, pose significant problems in the United States and around the world.

Public policy makers often impose regulations in an attempt to steer consumers toward safer and healthier choices. For instance, to help slow the coronavirus pandemic, federal agencies restricted travel, retail and workplace operations. However, a new study from the University of Notre Dame shows government-imposed restrictions can backfire, depending on political ideology.

"When Consumption Regulations Backfire: The Role of Political Ideology" is forthcoming in the Journal of Marketing Research from Vamsi Kanuri, assistant professor of marketing at Notre Dame's Mendoza College of Business, along with Caglar Irmak from the University of Miami and Mitchel Murdock from Utah Valley University.

The research found that conservatives -- but not liberals -- increased their use of mobile phones in vehicles after the National Highway Traffic Safety Administration enacted a law prohibiting the activity. It also showed that after consumers were exposed to government regulations, whether new laws or warning labels designed by the Food and Drug Administration, conservatives were more likely to purchase unhealthy foods and to view smoking e-cigarettes more favorably.

"We did not find these same effects when a non-government source is used or when the message from the government is framed as a notification rather than a warning," Kanuri said. "We attribute our findings to a heightened feeling of threat to freedom among conservatives when they are faced with government-imposed regulations."

The team conducted four studies to show that reactance to restrictions on freedom, rather than political associations with specific regulations, drives the effect. Through a pilot study, they identified which government regulations are perceived to be supported by liberals versus conservatives. The team then generalized their findings in Studies 1 and 2 to regulations supported by conservatives (mobile phone usage); in Study 3 to those supported by liberals (eating unhealthy food); and in Study 4 to those not perceived to be supported by any political ideology (electronic cigarette smoking).

"Specifically, in the first study, we demonstrate via a natural experiment that conservatives are more likely to reject a law restricting mobile phone use while driving," Kanuri said. "In Study 2, we replicate the results from the natural experiment with a controlled experiment, demonstrating the mediating role of perceived threat to freedom based on beliefs about future implications and ruling out an alternative explanation based on party leadership. In Study 3, we investigate the moderating role of the source to show that intent to purchase an unhealthy food increases among more conservative individuals when they view a nutritional label with a government source versus a company source."

"This study also demonstrates the mediating role of the sense of threat to freedom, a hallmark of reactance," Kanuri explained. "In the final study, we determine that the FDA can get conservative consumers to view e-cigarette usage more negatively and nudge them to quit by simply using a notification rather than a warning message."

By demonstrating that using a less forceful message may increase conservative consumers' compliance with government regulations, the study shows how agencies can better communicate their messages to increase the effectiveness of regulations that promote consumer well-being.

Credit: 
University of Notre Dame

After a century of searching, scientists find new liquid phase

Researchers at the University of Colorado Boulder's Soft Materials Research Center (SMRC) have discovered an elusive phase of matter, first proposed more than 100 years ago and sought after ever since.

The team describes the discovery of what scientists call a "ferroelectric nematic" phase of liquid crystal in a study published today in the Proceedings of the National Academy of Sciences. The discovery opens a door to a new universe of materials, said co-author Matt Glaser, a professor in the Department of Physics.

Nematic liquid crystals have been a hot topic in materials research since the 1970s. These materials exhibit a curious mix of fluid- and solid-like behaviors, which allow them to control light. Engineers have used them extensively to make the liquid crystal displays (LCDs) in many laptops, TVs and cellphones.

Think of a nematic liquid crystal as a handful of pins dropped on a table. The pins in this case are rod-shaped molecules that are "polar"--with heads (the blunt ends) that carry a positive charge and tails (the pointy ends) that are negatively charged. In a traditional nematic liquid crystal, half of the pins point left and the other half point right, with the direction chosen at random.

A ferroelectric nematic liquid crystal phase, however, is much more disciplined. In such a liquid crystal, patches or "domains" form in the sample in which the molecules all point in the same direction, either right or left. In physics parlance, these materials have polar ordering.
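The difference between the random and the polar-ordered states can be made concrete with a short sketch. This is an illustrative toy model, not from the study: molecules are reduced to ±1 directions along the nematic axis, and the polar order parameter is simply the absolute mean direction.

```python
# Illustrative toy model (not from the study): in a conventional nematic,
# molecular heads point left (-1) or right (+1) at random; in a
# ferroelectric nematic domain, they all point the same way. The polar
# order parameter is the absolute value of the mean direction.

import random

def polar_order(directions: list[int]) -> float:
    """Polar order parameter: 1.0 for perfect alignment, near 0 for random."""
    return abs(sum(directions) / len(directions))

random.seed(0)

# Conventional nematic: directions chosen at random -> order near 0
nematic = [random.choice([-1, 1]) for _ in range(100_000)]

# Ferroelectric nematic domain: all heads point the same way -> order = 1
ferroelectric = [1] * 100_000

print(f"conventional nematic: p = {polar_order(nematic):.3f}")
print(f"ferroelectric domain: p = {polar_order(ferroelectric):.3f}")
```

For the random sample the order parameter hovers near zero, while the perfectly aligned domain gives exactly 1 - the "polar ordering" that distinguishes the ferroelectric nematic phase.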

Noel Clark, a professor of physics and director of the SMRC, said that his team's discovery of one such liquid crystal could open up a wealth of technological innovations--from new types of display screens to reimagined computer memory.

"There are 40,000 research papers on nematics, and in almost any one of them you see interesting new possibilities if the nematic had been ferroelectric," Clark said.

Under the microscope

The discovery is years in the making.

Nobel Laureates Peter Debye and Max Born first suggested in the 1910s that, if you designed a liquid crystal correctly, its molecules could spontaneously fall into a polar ordered state. Not long after that, researchers began to discover solid crystals that did something similar: Their molecules pointed in uniform directions. They could also be reversed, flipping from right to left or vice versa under an applied electric field. These solid crystals were called "ferroelectrics" because of their similarities to magnets. (Ferrum is Latin for "iron").

In the decades since, however, scientists struggled to find a liquid crystal phase that behaved in the same way. That is, until Clark and his colleagues began examining RM734, an organic molecule created by a group of British scientists several years ago.

That same British group, plus a second team of Slovenian scientists, reported that RM734 exhibited a conventional nematic liquid crystal phase at higher temperatures. At lower temperatures, another unusual phase appeared.

When Clark's team tried to observe that strange phase under the microscope, they noticed something new. Under a weak electric field, a palette of striking colors developed toward the edges of the cell containing the liquid crystal.

"It was like connecting a light bulb to voltage to test it but finding the socket and hookup wires glowing much more brightly instead," Clark said.

Stunning results

So, what was happening?

The researchers ran more tests and discovered that this phase of RM734 was 100 to 1,000 times more responsive to electric fields than the usual nematic liquid crystals. This suggested that the molecules that make up the liquid crystal demonstrated strong polar order.

"When the molecules are all pointing to the left, and they all see a field that says, 'go right,' the response is dramatic," Clark said.

The team also discovered that distinct domains seemed to form spontaneously in the liquid crystal when it cooled from a higher temperature. There were, in other words, patches within their sample in which the molecules seemed to be aligned.

"That confirmed that this phase was, indeed, a ferroelectric nematic fluid," Clark said.

That alignment was also more uniform than the team was expecting.

"Entropy reigns in a fluid," said Joe MacLennan, a study coauthor and a professor of physics at CU Boulder. "Everything is wiggling around, so we expected a lot of disorder."

When the researchers examined how well aligned the molecules were inside a single domain, "we were stunned by the result," MacLennan said. The molecules were nearly all pointing in the same direction.

The team's next goal is to discover how RM734 achieves this rare feat. Glaser and SMRC researcher Dmitry Bedrov of the University of Utah are currently using computer simulations to tackle this question.

"This work suggests that there are other ferroelectric fluids hiding in plain sight," Clark said. "It is exciting that right now techniques like artificial intelligence are emerging that will enable an efficient search for them."

Credit: 
University of Colorado at Boulder