Tech

Breakthrough in circuit design makes electronics more resistant to damage and defects

image: Researchers used nonlinear resonators to mold a circuit array whose function proved to be inherently robust against defects that would normally interrupt signal transmission.

Image: 
Advanced Science Research Center, GC/CUNY

NEW YORK (March 9, 2018) -- People are growing increasingly dependent on their mobile phones, tablets and other portable devices that help them navigate daily life. But these gadgets are prone to failure, often caused by small defects in their complex electronics, which can result from regular use. Now, a paper in today's Nature Electronics details an innovation from researchers at the Advanced Science Research Center (ASRC) at The Graduate Center of The City University of New York that provides robust protection against circuitry damage that affects signal transmission.

The breakthrough was made in the lab of Andrea Alù, director of the ASRC's Photonics Initiative. Alù and his colleagues from The City College of New York, the University of Texas at Austin and Tel Aviv University were inspired by the seminal work of three British researchers who won the 2016 Nobel Prize in Physics for showing that particular properties of matter (such as electrical conductivity) can be preserved in certain materials despite continuous changes in the matter's form or shape. This concept is associated with topology--a branch of mathematics that studies the properties of space that are preserved under continuous deformations.

"In the past few years there has been a strong interest in translating this concept of matter topology from material science to light propagation," said Alù. "We achieved two goals with this project: First, we showed that we can use the science of topology to facilitate robust electromagnetic-wave propagation in electronics and circuit components. Second, we showed that the inherent robustness associated with these topological phenomena can be self-induced by the signal traveling in the circuit, and that we can achieve this robustness using suitably tailored nonlinearities in circuit arrays."

To achieve their goals, the team used nonlinear resonators to mold the band diagram of the circuit array. The array was designed so that a change in signal intensity could induce a change in the band diagram's topology. For low signal intensities, the electronic circuit was designed to support a trivial topology, and therefore provide no protection from defects. In this case, as defects were introduced into the array, the signal transmission and the functionality of the circuit were negatively affected.

As the voltage was increased beyond a specific threshold, however, the band diagram's topology was automatically modified, and the signal transmission was not impeded by arbitrary defects introduced across the circuit array. This provided direct evidence of a topological transition in the circuitry that translated into a self-induced robustness against defects and disorder.
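
The flavor of this protection can be captured in a toy lattice model. The sketch below is a generic Su-Schrieffer-Heeger (SSH) dimer chain, not the authors' nonlinear resonator circuit: the hopping strengths t1 and t2 and the disorder level are illustrative assumptions, but the calculation shows the defining effect, namely that midgap edge states survive coupling disorder only in the topological phase.

```python
# Minimal sketch: edge-state robustness in a Su-Schrieffer-Heeger (SSH)
# chain -- a generic topological toy model, NOT the authors' circuit.
# Couplings t1, t2 and the disorder strength are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def ssh_spectrum(t1, t2, n_cells=40, disorder=0.1):
    """Eigenvalues of an open SSH chain with random coupling disorder."""
    n = 2 * n_cells
    H = np.zeros((n, n))
    for i in range(n - 1):
        t = t1 if i % 2 == 0 else t2      # alternating intra-/inter-cell hopping
        t += disorder * rng.normal()      # defect-like coupling fluctuations
        H[i, i + 1] = H[i + 1, i] = t
    return np.linalg.eigvalsh(H)

for label, (t1, t2) in {"trivial": (1.0, 0.5), "topological": (0.5, 1.0)}.items():
    energies = ssh_spectrum(t1, t2)
    n_midgap = int(np.sum(np.abs(energies) < 0.2))  # states inside the band gap
    print(f"{label:11s}: {n_midgap} midgap edge state(s) despite disorder")
```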

"As soon as we applied the higher-voltage signal, the system reconfigured itself, inducing a topology that propagated across the entire chain of resonators allowing the signal to transmit without any problem," said A. Khanikaev, professor at The City College of New York and co-author in the study. "Because the system is nonlinear, it's able to undergo an unusual transition that makes signal transmission robust even when there are defects or damage to the circuitry."

"These ideas open up exciting opportunities for inherently robust electronics and show how complex concepts in mathematics, like the one of topology, can have real-life impact on common electronic devices," said Yakir Hadad, lead author and former postdoc in Alù's group, currently a professor at Tel-Aviv University, Israel. "Similar ideas can be applied to nonlinear optical circuits and extended to two and three-dimensional nonlinear metamaterials."

Credit: 
Advanced Science Research Center, GC/CUNY

Three NASA satellites recreate solar eruption in 3-D

video: Using data from three different satellites, scientists have developed new models that recreate, in 3-D, CMEs and shocks, separately. This movie illustrates the recreation of a CME and shock that erupted from the Sun on March 7, 2011. The pink lines show the CME structure and the yellow lines show the structure of the shock - a side effect of the CME that can spark space weather events around Earth. Download in HD: https://svs.gsfc.nasa.gov/12890#24859

Image: 
NASA's Goddard Space Flight Center/GMU/APL/Joy Ng

The more solar observatories, the merrier: Scientists have developed new models to see how shocks associated with coronal mass ejections, or CMEs, propagate from the Sun -- an effort made possible only by combining data from three NASA satellites to produce a much more robust mapping of a CME than any one could do alone.

Much the way ships form bow waves as they move through water, CMEs set off interplanetary shocks when they erupt from the Sun at extreme speeds, propelling a wave of high-energy particles. These particles can spark space weather events around Earth, endangering spacecraft and astronauts.

Understanding a shock's structure -- particularly how it develops and accelerates -- is key to predicting how it might disrupt near-Earth space. But without a vast array of sensors scattered through space, these properties are impossible to measure directly. Instead, scientists rely upon models that use satellite observations of the CME to simulate the ensuing shock's behavior.

The scientists -- Ryun-Young Kwon, a solar physicist at George Mason University in Fairfax, Virginia, and Johns Hopkins University Applied Physics Laboratory, or APL, in Laurel, Maryland, and APL astrophysicist Angelos Vourlidas -- pulled observations of two different eruptions from three spacecraft: ESA/NASA's Solar and Heliospheric Observatory, or SOHO, and NASA's twin Solar Terrestrial Relations Observatory, or STEREO, satellites. One CME erupted in March 2011 and the second, in February 2014.

The scientists fit the CME data to their models -- one called the "croissant" model for the shape of nascent shocks, and the other the "ellipsoid" model for the shape of expanding shocks -- to uncover the 3-D structure and trajectory of each CME and shock.
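
The geometric fitting step can be illustrated with a much simpler stand-in. The sketch below recovers the semi-axes of an axis-aligned ellipsoid shell from scattered, noisy 3-D points by linear least squares; the published croissant and ellipsoid models are more general parameterizations, and all numbers here are made up.

```python
# Minimal sketch: recovering an ellipsoid shell from scattered 3-D points,
# in the spirit of the "ellipsoid" shock model. Axis-aligned for simplicity;
# the published model is more general, and these numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)

a, b, c = 4.0, 3.0, 2.5                          # true semi-axes (arbitrary units)
theta = rng.uniform(0, np.pi, 500)
phi = rng.uniform(0, 2 * np.pi, 500)
points = np.column_stack([a * np.sin(theta) * np.cos(phi),
                          b * np.sin(theta) * np.sin(phi),
                          c * np.cos(theta)])
points += 0.05 * rng.normal(size=points.shape)   # measurement noise

# Fit u = (1/a^2, 1/b^2, 1/c^2) in the linear model (x^2, y^2, z^2) . u = 1.
u, *_ = np.linalg.lstsq(points**2, np.ones(len(points)), rcond=None)
print("recovered semi-axes:", np.sqrt(1.0 / u))  # ~ [4.0, 3.0, 2.5]
```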

Each spacecraft's observations alone weren't sufficient to model the shocks. But with three sets of eyes on the eruption, each of them spaced nearly evenly around the Sun, the scientists could use their models to recreate a 3-D view. Their work confirmed long-held theoretical predictions of a strong shock near the CME nose and a weaker shock at the sides.

In time, shocks travel away from the Sun, and thanks to the 3-D information, the scientists could reconstruct their journey through space. The modeling helps scientists deduce important pieces of information for space weather forecasting -- in this case, for the first time, the density of the plasma around the shock, in addition to the speed and strength of the energized particles. All of these factors are key to assessing the danger CMEs present to astronauts and spacecraft. Their results are summarized in a paper published Feb. 13, 2018, in the Journal of Space Weather and Space Climate.

Credit: 
NASA/Goddard Space Flight Center

Across the metal-molecule interface: Observing fluctuations on the single-molecule scale

image: A schematic illustration of a single-molecule junction, where EF is the Fermi level. The metal-molecule interfaces are indicated; the golden balls represent gold electrodes.

Image: 
JACS

Scientists at Tokyo Institute of Technology (Tokyo Tech) have developed a technique for analyzing structural and electronic fluctuations on the single-molecule scale across the metal-molecule interface in an organic electronic device. This technique provides information that cannot be obtained using the conventional method, and it has important implications for devices such as organic solar cells.

The organic electronics field is gaining prominence in both academia and industry as devices such as organic light-emitting diodes and solar cells have multiple advantages over conventional inorganic devices, including much lower potential production costs and broader substrate compatibility. These devices incorporate organic molecules and metal components, and one of the major challenges in this field is understanding the charge transport behaviors across the metal-molecule interface. Recently, break junction techniques were developed, wherein the electric current across a single-molecule junction is measured thousands of times. The measurement results are then analyzed statistically to determine the most probable electrical conductance.
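
The statistical step works roughly as in the sketch below: collect one conductance value per break-junction cycle, histogram the values (conventionally on a log scale, in units of the conductance quantum G0), and read off the most probable conductance. The data here are synthetic stand-ins; real traces need additional preprocessing.

```python
# Minimal sketch of break-junction statistics: histogram thousands of
# single-cycle conductance values and read off the most probable one.
# The samples are synthetic; a real experiment yields full traces.
import numpy as np

rng = np.random.default_rng(2)
G0 = 7.748e-5                                # conductance quantum, siemens

# Pretend each cycle yields one conductance (in units of G0), scattered
# log-normally around a most probable value of ~0.01 G0.
samples = rng.lognormal(mean=np.log(0.01), sigma=0.4, size=5000)

counts, edges = np.histogram(np.log10(samples), bins=100)
k = np.argmax(counts)
peak = 10 ** (0.5 * (edges[k] + edges[k + 1]))   # center of the tallest bin
print(f"most probable conductance ~ {peak:.3g} G0 = {peak * G0:.3g} S")
```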

The structural and electronic characteristics of a metal-molecule interface strongly influence the charge transport properties of the single-molecule junction. Further, the metal-molecule interface structures and transport properties fluctuate on the single-molecule scale. Unfortunately, the standard analysis technique of conductance measurement cannot elucidate this behavior sufficiently. Scientists at Tokyo Tech have recently developed a comprehensive method for analyzing these fluctuations. Their technique involves combining two methods: current-voltage measurement through break junction experiments and first-principles simulation. It is worth noting that the developed technique provides a correlated statistical description of the molecular orbital-energy level and the electronic coupling degree across a metal-molecule interface, unlike the standard analysis methods typically employed in this field.

The developed analysis method was applied to various single-molecule junctions, i.e., those of 1,4-butanediamine (DAB), pyrazine (PY), 4,4'-bipyridine (BPY), and fullerene (C60), sandwiched by gold electrodes, and the different molecular-dependent electronic and structural fluctuations were demonstrated. The junctions were stretched by up to 10 nm until breaking during the experiments and simulations in order to identify any structural variations; it was found that the electronic coupling between the electrode and molecule decreases with increased stretching. Further, total energy calculations performed as functions of the stretching distance revealed metastable structures in the structural models.

The developed method provides characteristic information about the simple, low-dimensional, and ultra-small charge transport across the metal-molecule interface, which is relevant to the switching functionality and potential manipulation of transport properties. This novel technique and the information it provides have significant implications for future transport property manipulation in electronic devices featuring organic molecules, such as solar cells and light-emitting diodes.

Credit: 
Tokyo Institute of Technology

Prosthetic limbs represented like hands in brain

The human brain can take advantage of brain resources originally devoted to the hand to represent a prosthetic limb, a new UCL-led study concludes.

Among people with only one hand, the brain area that enables us to recognise hands can also recognise a prosthetic hand, particularly among those who use a prosthesis regularly, according to the new Brain paper.

The study provides the first account of how artificial limbs are represented in the brains of amputees.

"While the use of a prosthesis can be very beneficial to people with one hand, most people with one hand prefer not to use one regularly, so understanding how they can be more user-friendly could be very valuable," said the study's lead author, Dr Tamar Makin (UCL Institute of Cognitive Neuroscience).

"If we can convince a person's brain that the artificial limb is the person's real limb, we could make prostheses more comfortable and easier to use."

The study included 32 people with one hand - half of whom were born with one hand and half had lost a hand due to amputation - alongside 24 people with two hands, used as a control group, most of whom were family or friends of the people with one hand. The participants were shown images of prosthetic hands (including photos of their own prostheses) as well as real limbs. A functional magnetic resonance imaging (fMRI) scan was used to assess the participants' neural responses.

Within the visual cortex of the brain is an area that enables people to recognise hands. This area displayed a stronger response to images of prostheses among the one-handed participants, compared to the controls, particularly among those who used a prosthesis most frequently in their daily lives. This part of the brain also responded to images of prostheses that are functional but do not look like a hand, such as a hook prosthesis.

The researchers also investigated the connectivity between the visual hand-selective area and the area of the sensorimotor cortex which would be expected to control the missing hand.

They found there was better connectivity between these two brain areas in those people who used their prostheses regularly.

"Our findings suggest that the key determinant of whether the brain responds similarly to a prosthetic hand as it does to a real hand, is prosthetic use. As many of our study participants lost their hand in adulthood, we find that our brains can adapt at any age, which goes against common theories that brain plasticity depends on development early in life," said the study's first author, Fiona van den Heiligenberg (UCL Institute of Cognitive Neuroscience).

The researchers say their findings offer hope, as they have not found clear neural barriers to representing a prosthesis as a body part.

"Logically I know my prosthesis is not my missing hand - it's a tool, it's a new sensation and I accepted that. The more I use my prosthesis, the more I feel like it becomes a part of me," said Clare Norton, a study participant who has had one hand amputated.

"We think the ultimate barrier is simply how much you use the prosthesis," said Dr Makin.

"To me this is natural, having one hand is how it's always been. The prosthesis is part of me, I don't regard it as an addition - I consider it a hand," said another study participant, John Miller, who was born with only one hand and regularly uses his prosthesis.

The researchers say their findings could provide new insights to guide rehabilitation strategies as well as prosthesis design, and potentially guide other types of augmentation technology as well.

Credit: 
University College London

Nanostructures made of previously impossible material

image: This is an image of Michael Seifner (l.) and Sven Barth (r.).

Image: 
TU Wien

When you bake a cake, you can combine the ingredients in almost any proportions, and they will still always be able to mix together. This is a little more complicated in materials chemistry.

Often, the aim is to change the physical properties of a material by adding a certain proportion of an additional element; however, it isn't always possible to incorporate the desired quantity into the crystal structure of the material. At TU Wien, a new method has been developed with which previously unattainable mixtures of germanium and desired foreign atoms can be achieved. This results in new materials with significantly altered properties.

More tin or gallium in the germanium crystal

"Incorporating foreign atoms into a crystal in a targeted manner to improve its properties is actually a standard method," says Sven Barth from the Institute of Materials Chemistry at TU Wien. Our modern electronics are based on semiconductors with certain additives. Silicon crystals into which foreign atoms such as phosphorus or boron are incorporated are one example of this.

It was already known that the semiconductor germanium should fundamentally change its properties and behave like a metal if a sufficient amount of tin were mixed in; in practice, however, this had never been achieved.

One could of course attempt to simply melt the two elements, thoroughly mix them together in liquid form and then let them solidify, as has been done for thousands of years in order to produce simple metal alloys. "But in our case, this simple thermodynamic method fails, because the added atoms do not efficiently blend into the lattice system of the crystal," explains Sven Barth. "The higher the temperature, the more the atoms move inside the material. This can result in these foreign atoms precipitating out of the crystal after they have been successfully incorporated, leaving behind a very low concentration of these atoms within the crystal."

Sven Barth's team have therefore developed a new approach that links particularly rapid crystal growth to very low process temperatures. In the process, the correct quantity of the foreign atoms is continuously incorporated as the crystal grows.

The crystals grow in the form of nano-scale threads or rods, at considerably lower temperatures than before, in the range of just 140-230°C. "As a result, the incorporated atoms are less mobile, the diffusion processes are slow, and most atoms stay where you want them to be," explains Barth.

Using this method, it has been possible to incorporate up to 28% tin and 3.5% gallium into germanium. This is considerably more than was previously possible by means of the conventional thermodynamic combination of these materials - by a factor of 30 to 50.

Lasers, LEDs, electronic components

This opens up new possibilities for microelectronics: "Germanium can be effectively combined with existing silicon technology, and also the addition of tin and/or gallium in such high concentrations offers extremely interesting potential applications in terms of optoelectronics," says Sven Barth. The materials would be used for infrared lasers, for photodetectors or for innovative LEDs in the infrared range, for example, since the physical properties of germanium are significantly changed by these additives.

Credit: 
Vienna University of Technology

Poor rural population had best diet and health in mid-Victorian years

Poor, rural societies retaining a more traditional lifestyle where high-quality foods were obtained locally enjoyed the best diet and health in mid-Victorian Britain. A new study, published in JRSM Open, examined the impact of regional diets on the health of the poor during mid-19th century Britain and compared it with mortality data over the same period.

The peasant-style culture of the rural poor in more isolated regions provided abundant locally produced cheap foodstuffs such as potatoes, vegetables, whole grains, milk and fish. These regions also showed the lowest mortality rates, with fewer deaths from pulmonary tuberculosis, a pattern typically associated with better nutrition.

The study's author, Dr Peter Greaves, of the Leicester Cancer Research Centre, said: "The fact that these better fed regions of Britain also showed lower mortality rates is entirely consistent with recent studies that have shown a decreased risk of death following improvement towards a higher Mediterranean dietary standard."

Dr Greaves explained: "The rural diet was often better for the poor in more isolated areas because of payment in kind, notably in grain, potatoes, meat, milk or small patches of land to grow vegetables or to keep animals."

"Unfortunately, these societies were in the process of disappearing under the pressure of urbanisation, commercial farming and migration. Such changes in Victorian society were forerunners of the dietary delocalisation that has occurred across the world, which has often led to a deterioration of diversity of locally produced food and reduced the quality of diet for poor rural populations."

Dr Greaves added: "Conversely, in much of rapidly urbanising Britain in the mid-19th century, improvements in living conditions, better transport links and access to a greater variety of imported foods eventually led to improved life expectancy for many of the urban poor."

Credit: 
SAGE

Women regret sex less when they take the initiative

In general, women regret short-term sexual encounters like one-night stands more than men do. But various factors determine whether and how much they regret them.

"The factor that clearly distinguishes women from men is the extent to which they themselves take the initiative," says Mons Bendixen, an associate professor in the Department of Psychology at the Norwegian University of Science and Technology (NTNU).

Initiative is the clearest gender-differentiating factor for regret after casual sex, although other conditions also affect how much an individual regrets the encounter. In contrast to women, sexual regret for men is not affected by whether they take the initiative.

"Women who take the initiative see the man as an attractive sexual partner," says Professor Leif Edvard Ottesen Kennair, also at NTNU's Department of Psychology.

Less reason to feel regret

Bendixen and Kennair have collaborated with PhD candidate Joy P. Wyckoff and Professor David M. Buss at the University of Texas at Austin and with Buss's former PhD candidate Kelly Asao, now a lecturer at the Institute of Social Neuroscience in Melbourne.

"Women who initiate sex are likely to have at least two distinguishing qualities," says Professor David Buss. "First, they are likely to have a healthy sexual psychology, being maximally comfortable with their own sexuality. Second, women who initiate have maximum choice of precisely who they want to have sex with. Consequently, they have less reason to feel regret, since they've made their own choice."

"Regret is a highly unpleasant emotion and our findings suggest that having control over their decision to engage in sex buffered women from experiencing regret. These results are another reminder of the importance of women's ability to make autonomous decisions regarding their sexual behaviors," says Wyckoff.

Quality matters

Men regret casual sex much less overall than women do, although it does happen. But several individual factors play an important role in women's regret.

"Women feel less regret if the partner was skilled and they felt sexually satisfied," says Kennair. Bendixen points out that these effects are not as strong in men.

"Women have less regret if the sex was good. For men, this also plays a less important role. The underlying causes are biological," Bendixen said.

The higher-investing sex faces larger repercussions from mating decisions than the lower-investing sex. Women have a higher minimum obligatory parental investment than men (e.g., nine months of internal gestation), so women's regret should be more closely tied to the quality of their sex partner than men's.

"For women, sexual skill might be a cue to high male quality," says Kelly Asao. In short, women may profit more from high quality in their sexual partners than men do.

Bendixen and Kennair, in collaboration with David Buss and his research team in Texas, have been looking at what people think of their own and others' sexuality for the last several years, and whether they regret having had casual sex and why.

This study adds several factors that can explain responses to casual sex. This time the researchers also asked study participants if they took the initiative for the sex act, if they felt pressured to have sex and whether the partner was skilled or sexually competent. Participants were also asked if they experienced disgust.

Disgust

Women also feel disgust more often than men after a short-term sexual encounter. This is a key factor in whether or not they feel regret.

"The feeling of disgust or revulsion is the single factor that best explained why women and men regretted the last time they had casual sex when we controlled for all other factors," says Bendixen.

People may feel disgust because they feel moral regret, but also if the act is unhygienic or if the sex itself was perceived as gross. The impact of disgust was strong for both sexes and among both the Norwegian and the American student participants.

"Sexual disgust is an important adaptive emotion," says Buss. "It functions to help people avoid, now or in the future, potential sex partners who are either low in mate value or who carry some risk of sexually transmitted infections."

Same in the United States and Norway

The basic data consists of responses from 547 Norwegian and 216 American students. The nationality and possible cultural aspects of the responses seem to play a lesser role, if any.

A larger proportion of Norwegian participants had casual sex than the Americans, but the patterns are the same, and the responses differed little in their reasons for regret and to what degree women and men feel any regret at all.

"It's interesting that - despite clear gender and cultural differences in the levels of concern, pressure, disgust, how good the sex was, the partner's sexual competence and initiative - clear similarities existed between the groups in how these factors affected the degree of sexual regret," says Bendixen.

"With the exception of initiative-taking, it seems that the mechanisms for sexual regret are only minimally affected by whether you're a woman or a man, or whether you're a Norwegian or an American student," says Kenner.

A significant aspect of the latest findings is that the researchers obtained the same results as in previous studies.

Psychology is among the fields of study that have been criticized for not obtaining results that can be repeated in later studies. But Kennair and Bendixen have now done this.

"By studying the same phenomenon that's based on clear theory, in several rounds, from different angles, and especially in different cultures, we can develop a theory-based cumulative science. The findings are simply more credible when we find out the same thing over several rounds," Kennair says.

Credit: 
Norwegian University of Science and Technology

Thirdhand smoke found to increase lung cancer risk in mice

image: Thirdhand smoke contains the chemicals in secondhand smoke from a cigarette that are deposited on indoor surfaces. Some of these chemicals interact with molecules from the air to create a toxic mix that includes potentially cancer-causing compounds. These compounds induce double-stranded breaks (DSBs) in DNA, which, if not repaired correctly, could lead to tumorigenesis in mice. In this study, the researchers have shown for the first time that thirdhand smoke exposure induces lung cancer in A/J mice in early life.

Image: 
Antoine Snijders, Jian-Hua Mao, and Bo Hang/Berkeley Lab

Researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) identified thirdhand smoke, the toxic residues that linger on indoor surfaces and in dust long after a cigarette has been extinguished, as a health hazard nearly 10 years ago. Now a new study has found that it also increases lung cancer risk in mice.

A team led by Antoine Snijders, Jian-Hua Mao, and Bo Hang of Berkeley Lab first reported in 2017 that brief exposure to thirdhand smoke is associated with low body weight and immune changes in juvenile mice. In a follow-up study published recently in Clinical Science, the researchers and their team have determined that early thirdhand smoke exposure is also associated with increased incidence and severity of lung cancer in mice.

Field studies in the U.S. and China have confirmed that the presence of thirdhand smoke in indoor environments is widespread, and traditional cleaning methods are not effective at removing it. Because exposure to thirdhand smoke can occur via inhalation, ingestion, or uptake through the skin, young children who crawl and put objects in their mouths are more likely to come in contact with contaminated surfaces, and are therefore the most vulnerable to thirdhand smoke's harmful effects.

In the Berkeley Lab researchers' new study, an experimental cohort of 24 A/J mice (a strain susceptible to spontaneous lung cancer development) was housed with scraps of fabric impregnated with thirdhand smoke from the age of 4 weeks to 7 weeks. The dose the mice received was estimated to be about 77 micrograms per kilogram of body weight per day - comparable to the ingestion exposure of a human toddler living in a home with smokers. Forty weeks after the last exposure, these mice were found to have an increased incidence of lung cancer (adenocarcinoma), larger tumors, and a greater number of tumors, compared to 19 control mice.

Their work also sheds light on what happens on both a molecular and cellular level. If thirdhand smoke toxins damage DNA within cells and the damage is not repaired properly, it can give rise to mutations, which may lead to the cell becoming cancerous. To further investigate how thirdhand smoke exposure promotes tumor formation, the team performed in vitro studies using cultured human lung cancer cells.

These studies indicated that thirdhand smoke exposure induced DNA double-strand breaks and increased cell proliferation and colony formation. In addition, RNA sequencing analysis revealed that thirdhand smoke exposure caused endoplasmic reticulum stress and activated p53 (tumor suppressor) signaling. The physiological, cellular, and molecular data indicate that early exposure to thirdhand smoke is associated with increased lung cancer risk.

Credit: 
DOE/Lawrence Berkeley National Laboratory

New 3-D measurements improve understanding of geomagnetic storm hazards

image: It is during geomagnetic storms that beautiful aurora borealis, or northern lights, are visible at high latitudes. However, geomagnetic storms can also cause risks to the power grid.

Image: 
Joshua Strang, US Air Force

Measurements of the three-dimensional structure of the earth, as opposed to the one-dimensional models typically used, can help scientists more accurately determine which areas of the United States are most vulnerable to blackouts during hazardous geomagnetic storms.

Space weather events such as geomagnetic storms can disturb the earth's magnetic field, interfering with electric power grids, radio communication, GPS systems, satellite operations, oil and gas drilling and air travel. Scientists use models of the earth's structure and measurements of Earth's magnetic field taken at USGS observatories (https://geomag.usgs.gov/monitoring/observatories/) to determine which sections of the electrical grid might lose power during a geomagnetic storm.

In a new U.S. Geological Survey study, scientists calculated voltages along power lines in the mid-Atlantic region of the U.S. using 3D data of the earth. These data, taken at Earth's surface, reflect the complex structure of the earth below the measurement sites and were collected during the National Science Foundation EarthScope USArray project. The scientists found that for many locations, the voltages they calculated were significantly different from those based on previous 1D calculations, with the 3D data producing the most precise results.
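
The physics behind such calculations can be sketched simply: at each frequency, the horizontal geoelectric field E follows from the measured magnetic field H through a surface impedance tensor Z (E = ZH), and the voltage induced on a transmission line is E integrated along the line's path. The tensor, field, and line geometry below are illustrative stand-ins, not USGS values; a layered 1D earth would force Zxx = Zyy = 0 and Zxy = -Zyx, a symmetry that real 3D structure breaks.

```python
# Minimal sketch of the magnetotelluric step behind the study: geoelectric
# field from a surface impedance tensor, then line voltage by integration.
# Tensor, field, and geometry are illustrative, not USGS data; units are
# chosen so E comes out in V/km.
import numpy as np

# 2x2 complex impedance tensor at one frequency. A 1-D layered earth would
# have Zxx = Zyy = 0 and Zxy = -Zyx; 3-D structure breaks that symmetry.
Z = np.array([[  2.0 + 1.0j,  15.0 + 9.0j],
              [-12.0 - 7.0j,  -3.0 - 1.5j]])
H = np.array([1.0 + 0.0j, 0.4 - 0.2j])        # horizontal magnetic field

E = Z @ H                                     # geoelectric field (Ex, Ey), V/km
line = np.array([120.0, 40.0])                # power line: 120 km east, 40 km north
voltage = E.real @ line                       # induced voltage along the line
print(f"E = {E} V/km  ->  line voltage ~ {voltage:.0f} V")
```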

"Using the most accurate data available to determine vulnerable areas of the power grid can help maintain life-saving communications and protect national security during severe geomagnetic storms," said Greg Lucas, a USGS scientist and the lead author of the study. "Our study suggests that 3D data of the earth should be used whenever they are available."

Electric currents from a March 1989 geomagnetic storm caused a blackout in Quebec and numerous glitches in the U.S. power grid. In past studies, scientists using simple 1D models of the earth would have found that 16 high-voltage electrical transmission lines were disturbed in the mid-Atlantic region during the storm, resulting in the blackout. However, by using realistic 3D data to calculate the 1989 scenario, the new study found that there might have actually been 62 vulnerable lines.

"This discrepancy between 1D- and 3D-based calculations of the 1989 storm demonstrates the importance of realistic data, rather than relying on previous 1D models, to determine the impact that a geomagnetic storm has on power grids," Lucas said.

Credit: 
U.S. Geological Survey

Got the message? Your brainwaves will tell

The new technique was developed by Professor Tom Francart and his colleagues from the Department of Neurosciences at KU Leuven, Belgium, in collaboration with the University of Maryland. It will allow for a more accurate diagnosis of patients who cannot actively participate in a speech understanding test because they're too young, for instance, or because they're in a coma. In the longer term, the method also holds potential for the development of smart hearing devices.

A common complaint from people with a hearing aid is that they can hear speech but they can't make out its meaning. Indeed, being able to hear speech and actually understanding what's being said are two different things.

The tests to determine whether you can hear soft sounds are well established. Just think of the test used by audiologists whereby you have to indicate whether you hear beep sounds. An alternative option makes use of EEG, which is often used to test newborns and whereby click sounds are presented through small caps over the ears. Electrodes on the head then measure whether any brainwaves develop in response to these sounds. The great advantage of EEG is that it is objective and that the person undergoing the test doesn't have to do anything. "This means that the test works regardless of the listener's state of mind," says co-author Jonathan Simon from the University of Maryland. "We don't want a test that would fail just because someone stopped paying attention."

But to test speech understanding, the options are much more limited, explains lead author Tom Francart from KU Leuven: "Today, there's only one way to test speech understanding. First, you hear a word or sentence. You then have to repeat it so that the audiologist can check whether you have understood it. This test obviously requires the patient's active cooperation."

Therefore, the scientists set out to find an EEG-based method that can measure hearing as well as speech understanding completely automatically.

"And we've succeeded," says Tom Francart. "Our technique uses 64 electrodes to measure someone's brainwaves while they listen to a sentence. We combine all these measurements and filter out the irrelevant information. If you move your arm, for instance, that creates brainwaves as well. So we filter out the brainwaves that aren't linked to the speech sound as much as possible. We compare the remaining signal with the original sentence. That doesn't just tell us whether you've heard something but also whether you have understood it."

The way this happens is quite similar to comparing two sound files on your computer: when you open the sound files, you sometimes see two figures with sound waves. Tom Francart: "Now, imagine comparing the original sound file of the sentence you've just heard and a different sound file derived from your brainwaves. If there is sufficient similarity between these two files, it means that you have properly understood the message."
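
A hedged sketch of that comparison step is below: reconstruct the stimulus envelope from multichannel EEG with a regularized linear "backward model", then correlate the reconstruction with the true envelope. This is a generic stimulus-reconstruction recipe on synthetic data, not the KU Leuven pipeline; apart from the 64-channel count, every number is an assumption.

```python
# Minimal sketch: envelope reconstruction from 64-channel "EEG" via ridge
# regression, then correlation with the true envelope. Synthetic data --
# a generic backward model, not the published method.
import numpy as np

rng = np.random.default_rng(3)
n_samples, n_channels = 5000, 64

# Smooth stand-in for a speech envelope, mixed into every channel plus noise.
envelope = np.convolve(rng.normal(size=n_samples), np.ones(50) / 50, mode="same")
mixing = rng.normal(size=n_channels)
eeg = np.outer(envelope, mixing) + rng.normal(size=(n_samples, n_channels))

train, test = slice(0, 4000), slice(4000, None)
X, y = eeg[train], envelope[train]
ridge = 100.0                                  # regularization strength
w = np.linalg.solve(X.T @ X + ridge * np.eye(n_channels), X.T @ y)

r = np.corrcoef(eeg[test] @ w, envelope[test])[0, 1]
print(f"reconstruction correlation r = {r:.2f}")  # higher -> better tracking
```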

This new technique makes it possible to objectively and automatically determine whether someone understands what's being said. This is particularly useful in the case of patients who cannot respond, including patients in a coma.

The findings can also help to develop 'smart' hearing aids and cochlear implants, Francart continues. "Existing devices only ensure that you can hear sounds. But with built-in recording electrodes, the device would be able to measure how well you have understood the message and whether it needs to adjust its settings - depending on the amount of background noise, for instance."

Credit: 
KU Leuven

Researchers rescue embryos from brain defects by re-engineering cellular voltage patterns

image: Normal frog embryo brain pattern development (left); frog embryo brain pattern when exposed to nicotine (center); frog embryo brain pattern rescued by HCN2 in the presence of nicotine (right)

Image: 
Mike Levin (Tufts University Professor of Biology and director of Allen Discovery Center), Vaibhav Pai (Tufts University Research Scientist)

MEDFORD/SOMERVILLE, Mass. (March 8, 2018) -- Tufts University biologists have demonstrated for the first time that electrical patterns in the developing embryo can be predicted, mapped, and manipulated to prevent defects caused by harmful substances such as nicotine. The research, published today in Nature Communications, suggests that targeting bioelectric states may be a new treatment modality for regenerative repair in brain development and disease, and that computational methods can be used to find effective repair strategies.

In a developing embryo, the outer membrane of each cell contains protein channels that transport negative and positive ions, generating voltage gradients across the cell membrane. Groups of cells create patterns of membrane voltage that precede and control the expression of genes and the morphological changes occurring over the course of development.

"Studies focusing on gene expression, growth factors, and molecular pathways have provided us with a better but still incomplete understanding of how cells arrange themselves into complex organ systems in a growing embryo," said Professor Michael Levin, Ph.D., corresponding author of the study and director of the Allen Discovery Center at Tufts University. "We are now beginning to see how electrical patterns in the embryo are guiding large scale patterns of tissues, organs, and limbs. If we can decode this electrical communication between cells, then we might be able to use it to normalize development or support regeneration in the treatment of disease or injury."

To help decipher that code, Levin and lead author Vaibhav Pai, Ph.D., a Research Scientist II at the Allen Discovery Center at Tufts, explored whether it was possible to use a computational model to predict the bioelectrical patterns that occur in normal and nicotine-exposed embryos, and then use the model to identify reagents that might restore the normal pattern even in the presence of the teratogen (an agent that induces birth defects). In humans, nicotine has been linked to prenatal morbidity, sudden infant death, attention deficit hyperactivity disorder (ADHD), and other deficits in cognitive function, learning, and memory.

Earlier studies suggested that these defects may be a result of nicotine depolarizing cells in the embryo by inducing acetylcholine receptors to pump in positively charged sodium and potassium ions. Levin and Pai hypothesized that disruption of the normal bioelectric prepattern that drives brain patterning could be the underlying cause for these defects, and that restoring this pattern might rescue those defects.

Pai worked with Alexis Pietak, Ph.D., a primary investigator at the Allen Discovery Center at Tufts, who developed a powerful computational simulation platform - called the BioElectric Tissue Simulation Engine (BETSE) - to create a dynamic map of voltage signatures in a developing frog embryo. The simulation engine (available for free download) was built on a biologically realistic model of ion concentrations and fluxes and parameters of ion channel behaviors derived from molecular studies. The BETSE model accurately replicated the distinct pattern of membrane voltage from the normal embryonic brain development, and also explained the "flattened" (erased) electrical pattern observed to result from nicotine exposure.
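
BETSE's biophysics is far richer, but the basic link from ion concentrations and channel permeabilities to a membrane voltage can be illustrated with the textbook Goldman-Hodgkin-Katz equation. In the sketch below (textbook concentrations, not BETSE code), raising the relative Na+ permeability, as nicotine's activation of acetylcholine receptors would, depolarizes the cell and flattens the hyperpolarized state that normal patterning relies on.

```python
# Minimal sketch, not BETSE: the Goldman-Hodgkin-Katz (GHK) equation maps
# ion concentrations and relative permeabilities to a membrane voltage.
# Concentrations are textbook values (mM); permeabilities are relative to K+.
import numpy as np

RT_F = 26.7e-3   # RT/F at body temperature, volts

def ghk_vm(pK, pNa, pCl, K_out=5.0, K_in=140.0, Na_out=145.0, Na_in=10.0,
           Cl_out=110.0, Cl_in=10.0):
    """GHK membrane voltage (volts) for K+, Na+ and Cl-."""
    num = pK * K_out + pNa * Na_out + pCl * Cl_in   # Cl- terms flipped: anion
    den = pK * K_in + pNa * Na_in + pCl * Cl_out
    return RT_F * np.log(num / den)

print(f"resting Vmem            : {1e3 * ghk_vm(1.0, 0.05, 0.45):+.1f} mV")  # ~ -65 mV
print(f"'nicotine-like' Vmem    : {1e3 * ghk_vm(1.0, 1.00, 0.45):+.1f} mV")  # depolarized
```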

The Tufts researchers were then able to use BETSE to explore the effect of various reagents on the embryo's voltage map. One reagent in particular, the hyperpolarization-activated cyclic nucleotide-gated channel (HCN2), when added to the cells in the model, selectively enhanced hyperpolarization (large internal negative charge) in areas where it was diminished by nicotine. The effect -- akin to dialing up the contrast in a photo editor -- effectively restored the normal electrical pattern.

Remarkably, expressing HCN2 in live embryos rescued them from the effects of nicotine, restoring a normal bioelectric pattern, brain morphology, markers of gene expression, and near-normal learning capacity in the grown tadpole.

"This is an important step providing a realistic model that bridges the molecular, cellular, bio-electrical, and anatomical scales of the developing embryo. Adding the bioelectrical component was critical to making a search for therapeutic strategies more tractable," said Levin, the Vannevar Bush professor of biology in the School of Arts & Sciences at Tufts.

Credit: 
Tufts University

Researchers develop optical tools to detect metabolic changes linked to disease

image: Optical readouts of HL-1 cardiomyocytes in response to chemical uncoupling by CCCP. Redox ratio maps for control (left) and CCCP-exposed (right) cardiomyocytes.

Image: 
Irene Georgakoudi, Tufts University

MEDFORD/SOMERVILLE, Mass. (March 7, 2018) -- Metabolic changes in cells can occur at the earliest stages of disease. In most cases, knowledge of those signals is limited, since we usually detect disease only after it has done significant damage. Now, a team led by engineers at Tufts University School of Engineering has opened a window into the cell by developing an optical tool that can read metabolism at subcellular resolution, without having to perturb cells with contrast agents, or destroy them to conduct assays. As reported today in Science Advances, the researchers were able to use the method to identify specific metabolic signatures that could arise in diabetes, cancer, cardiovascular and neurodegenerative diseases.

The method is based on the fluorescence of two important coenzymes (biomolecules that work in concert with enzymes) when excited by a laser beam. The coenzymes - nicotinamide adenine dinucleotide (NADH) and flavin adenine dinucleotide (FAD) - are involved in a large number of metabolic pathways in every cell. To find out the specific metabolic pathways affected by disease or stress, the Tufts scientists looked at three parameters: the ratio of FAD to NADH, the fluorescence "fade" of NADH, and the organization of the mitochondria (the energy-producing "batteries" of the cell) as revealed by the spatial distribution of NADH within a cell.

The first parameter - the relative amounts of FAD to NADH - can reveal how well the cell is consuming oxygen, metabolizing sugars, or producing or breaking down fat molecules. The second parameter - the fluorescence "fade" of NADH - reveals details about the local environment of the NADH. The third parameter - the spatial distribution of NADH in the cells - shows how the mitochondria split and fuse in response to cellular growth and stress.
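
The first parameter reduces to simple pixel arithmetic once the two fluorescence channels are acquired. The sketch below computes the normalized form FAD/(FAD + NADH), one common convention for the optical redox ratio, on synthetic stand-in images; the paper's exact definition and preprocessing may differ.

```python
# Minimal sketch: a pixel-wise optical redox ratio from two fluorescence
# channels, using the common normalized convention FAD / (FAD + NADH).
# Images are synthetic Poisson stand-ins for microscope data.
import numpy as np

rng = np.random.default_rng(4)
nadh = rng.poisson(lam=120, size=(256, 256)).astype(float)   # NADH channel
fad = rng.poisson(lam=80, size=(256, 256)).astype(float)     # FAD channel

eps = 1e-9                                  # guard against empty pixels
redox = fad / (fad + nadh + eps)            # 0 -> more reduced, 1 -> more oxidized
print(f"mean redox ratio: {redox.mean():.2f} +/- {redox.std():.2f}")
```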

"Taken together, these three parameters begin to provide more specific, and unique metabolic signatures of cellular health or dysfunction," said Irene Georgakoudi, Ph.D., corresponding author of the study and a professor of biomedical engineering in the School of Engineering at Tufts. "The power of this method is the ability to get the information on live cells, without the use of contrast agents or attached labels that could interfere with results."

Other methods exist for non-invasively tracking the metabolic signatures of disease, such as the PET scan, which is often used in research. But while PET scans provide low resolution information with excellent depth penetration into living tissues, the optical method introduced by the Tufts researchers detects metabolic activity at the resolution of single cells, although mostly near the surface.

That is not necessarily a limitation. Many diseases, including cancer, can be detected at the surface of tissues, while many pre-clinical studies are performed with animal models and engineered three-dimensional tissues that can benefit from being monitored non-destructively. The method developed by Georgakoudi and colleagues may prove to be a powerful research tool for understanding the metabolic signatures of these models.

Credit: 
Tufts University

Houston Methodist researcher makes bold move by releasing nanotech 'recipe'

image: Ennio Tasciotti, Ph.D., and his team are sharing their 'recipe' for a new, more affordable way to make nanoparticles. This will empower any laboratory in the world to easily create similar nanoparticles and could lead to a whole new, faster way of delivering biotherapeutic drugs.

Image: 
Houston Methodist

HOUSTON - (March 7, 2018) - In a rare move, a Houston Methodist researcher is sharing his recipe for a new, more affordable way to make nanoparticles. This will empower any laboratory in the world to easily create similar nanoparticles and could lead to a whole new, faster way of delivering biotherapeutic drugs.

"We're the only lab in the world doing this," said Ennio Tasciotti, Ph.D., director of the Center for Biomimetic Medicine at the Houston Methodist Research Institute and corresponding author on a paper coming out March 7 in Advanced Materials. "There are several questions about how our system works, and I can't answer all of them. By giving away the so-called 'recipe' to make biomimetic nanoparticles, a lot of other labs will be able to enter this field and may provide additional solutions and applications that are beyond the reach of only one laboratory. You could say it's the democratization of nanotechnology."

In the article, Tasciotti and his colleagues show how to standardize nanoparticle production to guarantee stability and reproducibility, while increasing yield. Eliminating the need for multi-million-dollar facilities, Tasciotti and his team demonstrate this using a readily available and relatively affordable piece of benchtop equipment to manufacture nanoparticles in a controlled, adjustable and low-cost manner.

"Nanoparticles are generally made through cryptic protocols, and it's very often impossible to consistently or affordably reproduce them," Tasciotti said. "You usually need special, custom-made equipment or procedures that are available to only a few laboratories. We provide step-by-step instructions so that now everybody can do it."

For decades nanoparticles have been made out of bioinert, or inorganic, substances that don't interact with the body. In more recent years, nanoparticles were made to be bioactive, meaning they could respond to the environment. Now, Tasciotti is pushing the field forward by creating biomimetic nanoparticles that resemble cell composition and work in synergy with the laws that govern the physiology of the body.

"The body is so smart in the ways it defends itself. The immune system will eventually recognize nanoparticles no matter how well you make them," Tasciotti said. "In my lab, we make nanoparticles out of the cell membrane of the very same immune cells that patrol the blood stream. When we put these biomimetic, or bioinspired, nanoparticles back in the body, the immune cells do not recognize them as something different, as they're made of their same building blocks, so there is no adverse response."

Despite the complexity of this new class of particles, Tasciotti says the way his team puts them together is incredibly simple, which is why he decided to publish this paper.

"While our lab will remain fully devoted to this line of research, if somebody else develops some solutions using our protocols that are useful in clinical care, it's still a good outcome," he said. "After all, the ultimate reason why we are in translational science is for the benefit of the patients."

Credit: 
Houston Methodist

Diverse tropical forests grow fast despite widespread phosphorus limitation

image: In Panama's lowland tropical forest, tree species growing on low phosphorus soils grew faster, on average, than species growing on high phosphorus soils.

Image: 
STRI Archives

Accepted ecological theory says that poor soils limit the productivity of tropical forests, but adding nutrients as fertilizer rarely increases tree growth, suggesting that productivity is not limited by nutrients after all. Researchers at the Smithsonian Tropical Research Institute (STRI) resolved this apparent contradiction, showing that phosphorus limits the growth of individual tree species but not entire forest communities. Their results, published online in Nature, March 8, have sweeping implications for understanding forest growth and change.

Vast areas of the tropics occur on old landscapes where rock-derived nutrients have been leached away by years of heavy rainfall. Phosphorus is particularly scarce, because the iron oxides that give tropical soils their characteristic red color bind to the phosphorus, making it unavailable to plants. However, the addition of fertilizer to diverse forests in Africa, Southeast Asia and the Americas has not increased tree growth. The only place where fertilization resulted in increased tree growth was in Hawaii, where the forest is dominated by a single tree species.

An alternative way to study nutrient limitation is by comparing the growth rates of trees in forests that naturally differ in soil nutrient availability: the tiny but highly biodiverse tropical country of Panama provides a perfect setting for this. The complex geology of central Panama means that natural levels of plant-available phosphorus in the soil vary more than 300-fold--similar to the range of phosphate availability in tropical soils around the world. And because the soils in Panama also vary in moisture and other nutrients such as nitrogen, calcium and potassium, researchers can study the effects of these variables on growth at the same time.

To examine the effect of phosphorus on tree growth, researchers measured 19,000 individual trees of 541 different species in a series of long-term forest monitoring plots that are part of the Forest Global Earth Observatory (Smithsonian ForestGEO) network managed by the Center for Tropical Forest Science at STRI. On average, growth rates of individual tree species increased in soils with higher levels of plant-available phosphorus, consistent with ecological theory. Surprisingly, however, tree species that occurred on low phosphorus soils grew faster, on average, than species growing on high phosphorus soils. And in a final twist, variation in the tree species present across plots meant that community-wide growth rates did not change according to the level of soil phosphorus.

"Finding that species adapted to low phosphorus soils are growing so fast was a real surprise," said Ben Turner, STRI staff scientist, who led the study. "We still don't understand why this occurs, nor why high phosphorus species are not growing faster than they are. Perhaps trees invest extra phosphorus in reproduction rather than growth, for example, because seeds, fruits and pollen are rich in phosphorus. For now, these results help us to understand how soil fertility influences tree growth in tropical forests, and demonstrate once again the power of tropical diversity to surprise us."

"This study highlights our limited understanding of how plants cope with phosphorus-poor soils, a significant challenge to farmers through much of the tropics," said Jim Dalling, STRI research associate and professor and head of the Department of Plant Biology at the University of Illinois Urbana-Champaign. "Comparing how plants adapted to high versus low phosphorus availability acquire and use this critical nutrient could suggest new approaches for increasing food production without relying on costly fertilizers."

Credit: 
Smithsonian Tropical Research Institute

New approach to measuring stickiness could aid micro-device design

image: With slight modifications, an atomic force microscope could be used to measure adhesion in micro-materials.

Image: 
Kesari Lab/Brown University

PROVIDENCE, R.I. [Brown University] -- Brown University engineers have devised a new method of measuring the stickiness of micro-scale surfaces. The technique, described in Proceedings of the Royal Society A, could be useful in designing and building micro-electro-mechanical systems (MEMS), devices with microscopic moving parts.

At the scale of bridges or buildings, the most important force that engineered structures need to deal with is gravity. But at the scale of MEMS -- devices like the tiny accelerometers used in smartphones and Fitbits -- the relative importance of gravity decreases, and adhesive forces become more important.

"The main thing that matters at the microscale is what sticks to what," said Haneesh Kesari, an assistant professor in Brown's School of Engineering and coauthor of the new research. "If you have parts of your device sticking together that shouldn't be, it's not going to work. So in order to design MEMS devices, it helps to have a good way of measuring adhesion in the materials we use."

That's what Kesari and two Brown graduate students, Wenqiang Fang and Joyce Mok, looked to accomplish with this new research. Specifically, they wanted to measure a quantity known as "work of adhesion," which roughly translates into the amount of energy required to separate a unit area of two adhered surfaces.

The key theoretical insight developed in the new study is that thermal vibrations of a microbeam can be used to calculate work of adhesion. That insight suggests a method in which a slightly modified atomic force microscopy (AFM) system can be used to probe adhesive properties.

Standard AFM works a bit like a record player. A cantilever with a sharp needle moves across a target material. A laser shone on the cantilever measures the tiny undulations it makes as it moves along the material's contours. Those undulations can then be used to map out the material's surface properties.

Adapting the method to measure adhesion would require simply removing the metal tip from the cantilever, leaving a flat microbeam. That beam can then be lowered onto a target material, where it will adhere. When the cantilever is raised slightly, some portion of the beam will become unstuck, while the rest remains stuck. The unstuck portion of the beam will vibrate ever so slightly. The authors found a way to use the extent of that vibration, which can be measured by an AFM laser, to calculate the length of the unstuck portion, which can in turn be used to calculate the target material's work of adhesion.
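
To get a feel for the final conversion, the sketch below turns an inferred unstuck length into a work of adhesion using the classical beam-theory relation G = 3Et^3h^2/(8s^4) for a cantilever adhered across a gap (a Mastrangelo-Hsu-style result; the paper's vibration-based analysis is more refined). Every number is illustrative.

```python
# Minimal sketch: unstuck length -> work of adhesion via the classical
# cantilever relation G = 3*E*t**3*h**2 / (8*s**4). This textbook formula
# stands in for the paper's more refined vibration-based analysis, and all
# values below are illustrative assumptions.
E = 170e9     # Young's modulus (silicon), Pa
t = 2e-6      # beam thickness, m
h = 1e-6      # gap between the raised beam and the substrate, m
s = 100e-6    # unstuck ("crack") length inferred from the vibration, m

G = 3 * E * t**3 * h**2 / (8 * s**4)   # work of adhesion, J/m^2
print(f"work of adhesion ~ {G * 1e3:.1f} mJ/m^2")   # ~5 mJ/m^2 here
```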

Fang says the technique could be useful in assessing new material coatings or surface textures aimed at alleviating the failure of MEMS devices through sticking.

"Once you have a robust technique for measuring the material's work of adhesion, then you have a systematic way of evaluating these methods to get the level of adhesion needed for a particular application," Fang said. "The main advantage to this method is that you don't need to change a standard AFM setup very much in order to do this."

The approach is also much simpler than other techniques, according to Mok.

"Previous methods based on interferometry are labor intensive and may require many data points to be taken," she said. "Our theoretical framework would give a value for the work of adhesion from a single measurement."

Having demonstrated the technique numerically, Kesari says the next step is to build the system and start collecting some experimental data. He's hopeful that such a system will aid in pushing the MEMS field forward.

"We have MEMS accelerometers and gyroscopes, but I don't think the field has quite lived up to its promise yet," Kesari said. "Part of the reason for that is that people haven't completely understood adhesion at the small scale. We think that a more robust way of measuring adhesion is the first step towards gaining such an understanding."

Credit: 
Brown University