Virtual time-lapse photos can capture ultrafast phenomena

video: The new image-processing method can capture extremely rapid phenomena using any type of camera.

Image: 
© 2019 Jamani Caillet

Many phenomena occurring in nature and industry happen very quickly: a tear running through a piece of fabric, a rubber ball bouncing off a hard floor, a drop of water wetting a dry surface and a piece of scotch tape peeling off, for example. Capturing images of these phenomena would help scientists better understand them, but conventional cameras aren't fast enough - and high-speed cameras are prohibitively expensive.

But scientists at EPFL's Engineering Mechanics of Soft Interfaces Laboratory, working in association with Harvard University researchers in the SMR lab, have developed a new imaging method called Virtual Frame Technique (VFT) that can generate thousands of images of these phenomena as they occur step by step, using a photo taken from any kind of device, including a smartphone. What's more, VFT has been shown to perform better than high-speed cameras.

Working with a conventional photo

The method starts by analyzing a conventional photo. "If you use a regular camera to take a picture of a drop of water hitting a dry surface, the water's movement will cause the picture to be blurry. But these blurred areas are precisely where the phenomenon is taking place, both spatially and temporally. That's what our technique uses to piece together the underlying phenomenon," says John Kolinski, a professor at EPFL's School of Engineering. In other words, VFT works by deconstructing the blurry parts of pictures.

A method for binary phenomena

The first step is to shine light on the phenomenon just as the conventional picture is taken, so that the blurry parts can be exploited. "This initial illumination step must be done correctly so that the blurry parts of the picture contain the right information and can be used. At this point the object must have a quantifiable instantaneous state of either completely blocking the light or completely letting it through," says Kolinski. The next step is to employ advanced image-processing methods that, together with this specific illumination scheme, improve the conventional picture's temporal resolution and turn it into binary images - that is, images containing only black or white pixels.

This method offers an advantage because many natural phenomena are binary; for example, a piece of fabric is either torn or it isn't, a surface is either wet or dry. That means only two greyscale values are necessary to depict them - no need for the 15,000+ intensity values available with conventional cameras. By sacrificing the ability to resolve intensity, the scientists were able to use the camera sensor's bit depth, or the amount of information the sensor can obtain, to increase the frame rate while retaining full spatial resolution. Temporal resolution can be improved even further by adjusting the timing of a light pulse.
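
As a rough illustration of the idea - a minimal sketch rather than the authors' actual pipeline - assume a single exposure of a binary, monotonically advancing phenomenon (say, a dry surface becoming wet) in which each pixel switches once from bright to dark during the exposure. The recorded intensity of a pixel then encodes how long it stayed bright, so thresholding the normalized image at successive levels recovers a sequence of "virtual frames". All names and parameters below are illustrative.

```python
import numpy as np

def virtual_frames(exposure, n_frames=256):
    """Sketch of the Virtual Frame Technique (VFT) thresholding idea.

    Assumes `exposure` is a single long-exposure image of a binary
    phenomenon in which each pixel switches once from bright to dark,
    so its recorded intensity encodes *when* it switched during the
    exposure. Each threshold then yields one binary "virtual frame".
    """
    img = exposure.astype(float)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)  # normalize to [0, 1]
    thresholds = np.linspace(0.0, 1.0, n_frames + 2)[1:-1]     # one level per frame
    return [(img > t).astype(np.uint8) for t in thresholds]

# Usage: frames = virtual_frames(blurred_photo, n_frames=1000)
# The number of distinguishable frames is bounded by the sensor's bit
# depth, which is why VFT trades intensity resolution for frame rate.
```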

Time-lapse photos over extremely short periods

VFT therefore breaks down a conventional photo of an object in rapid motion into thousands of images that show every step of the process. "It's like taking time-lapse photos of a nearly instantaneous phenomenon," says Kolinski. The scientists tested their technique on pictures taken by all kinds of devices, from smartphones to sophisticated professional cameras, and found that it consistently resulted in a faster frame rate. While careful illumination is required, the method is quite general, and has been used to record a rich variety of phenomena, from droplet impacts to fracture mechanics.

Credit: 
Ecole Polytechnique Fédérale de Lausanne

OU neuroscientists find brain pathway supporting an intersection of taste and pain

image: OU neuroscientists used molecular biology and physiology techniques to understand how taste and thermal pathways might be converging with pain.

Image: 
University of Oklahoma

In a new study that originally was designed to look at the intersection of taste and food temperature, University of Oklahoma neuroscientists have found a pathway in the brain where taste and pain intersect. The study is the first to show that taste and pain signals come together in the brain and use the same circuitry. OU neuroscientists received a five-year, $1.6 million National Institutes of Health grant to study this concept.

"We originally aimed to look at how sense of taste works with thermal sensation in this study to better understand how taste is connected to food preferences, health and well-being. Taste is also closely tied to emotion and understanding how the brain processes different tastes is significant on several levels," said Christian Lemon, principal investigator on the grant and associate professor in the OU Department of Biology, OU College of Arts and Sciences. "What we found was a surprise because temperature signals were converging with taste near the mid-brain, but so were neural messages for taste and pain."

Lemon and Jinrong Li, an OU research associate, used molecular biology and physiology techniques to understand how taste and thermal pathways might be converging with pain. What the OU researchers learned from this study is that the neural circuitry carrying signals for aversive tastes also carries a response to pain. This intersection may support a protective function and opens the possibility that taste messages could change how pain signals are transmitted in the brain, but more research is needed.

The sense of taste is a complicated sensory and nutrient detector that has many implications for how the nervous system guides food preference behaviors and, potentially, response to pain. Now that the circuitry has been identified, OU researchers will explore manipulation of the circuitry to test its influence on behaviors associated with taste and pain. Ultimately, understanding taste is critical to defining its role in human disorders associated with eating behaviors, such as obesity, diabetes, and other conditions and diseases.

Credit: 
University of Oklahoma

Gene behind long-recognized mitochondrial disease has highly varied effects

Philadelphia, March 12, 2019--For more than two decades, mutations in a gene located in the DNA of mitochondria have been classified as causing a mitochondrial disease and linked to a particular set of symptoms. However, according to new findings from researchers at Children's Hospital of Philadelphia (CHOP), mutations in this gene, which encodes an essential part of the mitochondrial motor known as ATP synthase that generates cellular energy, are much more variable than previously thought. This prompts the need to develop more precise clinical tests that can better determine the course of treatment for patients affected by these mitochondrial disorders. The study was published online on February 14 in the journal Human Mutation.

Mitochondria are structures found within human and animal cells that are responsible for energy production. Mitochondria contain 37 genes encoded in their own DNA (mtDNA) that are separate from the DNA found inside the nucleus of the cell. Variations in more than 350 different genes located across both nuclear and mitochondrial DNA are responsible for causing mitochondrial diseases, which can typically cause more than 16 different symptoms in each patient and affect multiple organs.

Mutations in the mtDNA-encoded ATP synthase membrane subunit 6 gene (MT-ATP6) are found in between 10 and 20 percent of cases of Leigh syndrome, a progressive brain disorder long recognized as a form of mitochondrial disease, and another recognizable condition known as neuropathy, ataxia, and retinitis pigmentosa (NARP) syndrome.

"We went into this study wanting to look at the more than 200 reported cases of mitochondrial disease with a MT-ATP6 mutation to better understand the clinical presentation of its many variants," said study leader Rebecca Ganetzky, MD, an attending physician in the Mitochondrial Medicine Frontier Program at CHOP, and an assistant professor of Pediatrics in the Perelman School of Medicine at the University of Pennsylvania. "Patients with an MT-ATP6 mutation not only vary significantly in what symptoms they develop, but there has also been extensive variability in biochemistry analyses of their cells and tissues, making it difficult to apply any sort of universal diagnostic or treatment strategy for these patients."

Ganetzky and her colleagues reviewed all 218 published cases of MT-ATP6 mitochondrial disease to date to assess their variants and compare those findings with clinical and biochemical features of the disease. The authors also presented a new clinical case series of 14 additional patients with MT-ATP6 variants of uncertain significance or relation to their medical problems.

What the researchers ultimately found was that, despite all cases involving mutations in the same gene, this is a particularly heterogeneous disease in terms of the sequence variations and clinical symptoms that may occur. The study identified a total of 34 variants within the MT-ATP6 gene, and surprisingly no single biochemical feature was shared by all individuals carrying these variants.

"This study provides an important point of reference for patients in whom MT-ATP6 variants are discovered in diagnostic testing, as we now recognize just how variable this disease may be," Ganetzky said. "We need to develop better ways to test for this disease, since the classical clinical syndromic presentations of NARP and Leigh syndrome are not sufficient to capture the problems present in all of these patients."

Ganetzky said that future studies are needed to systematically evaluate the functional significance for all of the MT-ATP6 variants. The authors recommend a multi-pronged approach to assessing biochemical diversity, including development of a common community resource of all gene variants along with their biochemical and clinical features. Additionally, a project supported by the National Institutes of Health is under way led by CHOP Mitochondrial Medicine Frontier Program executive director, Marni J. Falk, MD, to expertly curate MT-ATP6 variants that cause Leigh syndrome. CHOP also offers and continues to investigate a variety of advanced testing techniques for mitochondrial disease, including those that will help better understand mitochondrial energy production effects in patients with MT-ATP6 variants.

Credit: 
Children's Hospital of Philadelphia

Your body is your internet -- and now it can't be hacked

video: Your body is your internet, and it should be protected from remote hacks, just like your computer. While this hasn't happened in real life yet, researchers have been demonstrating for at least a decade that it's possible.

Image: 
Purdue University/Erin Easterling

WEST LAFAYETTE, Ind. -- Someone could hack into your pacemaker or insulin pump and potentially kill you, just by intercepting and analyzing wireless signals. This hasn't happened in real life yet, but researchers have been demonstrating for at least a decade that it's possible.

Before the first crime happens, Purdue University engineers have tightened security on the "internet of body." Now, the network you didn't know you had is only accessible by you and your devices, thanks to technology that keeps communication signals within the body itself.

The work appears in the journal Scientific Reports. Study authors include Shreyas Sen, an assistant professor of electrical and computer engineering at Purdue, and his students, Debayan Das, Shovan Maity and Baibhab Chatterjee.

"We're connecting more and more devices to the human body network, from smart watches and fitness trackers to head-mounted virtual reality displays," said Sen, who specializes in sensing and communication systems.

"The challenge has not only been keeping this communication within the body so that no one can intercept it, but also getting higher bandwidth and less battery consumption," he said.

Body fluids carry electrical signals very well. So far, so-called "body area networks" have used Bluetooth technology to send signals on and around the body. These electromagnetic waves can be picked up within at least a 10-meter radius of a person.

Sen's team has demonstrated a way for human body communication to occur more securely - not going beyond a centimeter off the skin and using 100 times less energy than traditional Bluetooth communication.

This is possible through a device that couples signals in the electro-quasistatic range, which is much lower on the electromagnetic spectrum. Sen's group is working with government and industry to incorporate this device into a dust-sized integrated circuit.

A YouTube video is available at https://youtu.be/NHqfT1vIe6E.

Through a prototype watch, a person can receive a signal from anywhere on the body, from the ears all the way down to the toes. The thickness of your skin or hair also doesn't really make a difference in how well you carry the signal, Sen says.

The idea would be to create a way for doctors to reprogram medical devices without invasive surgery. The technology would also help streamline the advent of closed-loop bioelectronic medicine - in which wearable or implantable medical devices function as drugs, but without the side effects - and high-speed brain imaging for neuroscience applications.

"We show for the first time a physical understanding of the security properties of human body communication to enable a covert body area network, so that no one can snoop important information," Sen said.

Credit: 
Purdue University

Mechanized cane measures patients' rehabilitation process without them noticing it

video: Mechanized cane designed at the UMA measures patients' rehabilitation process without them noticing it.

It monitors a person's time of use and weight-bearing while walking, in a nonintrusive way for users and at low cost.

The module is under an open license, available on the Internet to anyone interested in it.

Image: 
University of Malaga

Robot-aided rehabilitation represents a step forward for patients with walking difficulties. However, its high price, together with some adaptation and transfer problems, continues to limit its use at present.

In this regard, the Embedded Systems Engineering Group of the University of Malaga, specializing in the design of physical devices to aid users, such as a smart wheelchair, has developed a mechanized cane that can measure patients' rehabilitation process without any impact on them.

This is a low-cost device, adapted to users and accessible to anyone, since the researchers have uploaded its designs, algorithms and electronic diagrams to the Internet under an open license.

This cane monitors users' weight-bearing while walking, providing individualized information on their progress, by means of two embedded pressure sensors placed at two different depths in the tip of a regular cane so that they don't affect cane ergonomics. Likewise, so as to simplify its use, it includes a wireless charger, and data can be collected by a mobile phone via Bluetooth.
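
The article does not detail the cane's firmware, but a minimal sketch of the kind of processing it describes - combining the two pressure readings from the cane tip into a weight-bearing estimate and logging it for collection over Bluetooth - might look like the following. The sensor names, the linear calibration and the threshold are all hypothetical, purely for illustration.

```python
from dataclasses import dataclass
import time

# Hypothetical calibration mapping the two raw pressure readings
# (sensors at two depths in the cane tip) to a load in newtons.
GAIN_SHALLOW = 0.8
GAIN_DEEP = 1.2
MIN_LOAD_N = 5.0  # readings below this are treated as "cane not in use"

@dataclass
class CaneSample:
    timestamp: float     # seconds since epoch
    load_newtons: float  # estimated load transmitted through the cane

def estimate_load(raw_shallow: int, raw_deep: int) -> float:
    """Combine both sensor readings into one load estimate (illustrative linear model)."""
    return max(0.0, GAIN_SHALLOW * raw_shallow + GAIN_DEEP * raw_deep)

def log_reading(raw_shallow: int, raw_deep: int, log: list) -> None:
    """Append a sample only while the user is actually leaning on the cane."""
    load = estimate_load(raw_shallow, raw_deep)
    if load >= MIN_LOAD_N:
        log.append(CaneSample(time.time(), load))

# A phone app pulling `log` over Bluetooth could then report time of use
# and average weight-bearing per walking session.
```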

"We seek a minimal interaction with patients, to avoid any cognitive load and prevent any impact on their daily routine", explains Cristina Urdiales, Head of the Department of Electronic Technology.

Furthermore, promoting direct transfer to the industry is another objective of this project, shared under an open license and published in the scientific journal Sensors. "We have managed to keep the cost of manufacturing the cane below €100", says Joaquín Ballesteros, the designer, who adds that, thus far, it has been downloaded by more than 150 researchers, from the USA and France mainly.

Experts explain that, in contrast to smart bracelets or watches already on the market which are based on statistics for healthy users, this device developed at the UMA, in collaboration with ESS-H Profile at Mälardalens University (Sweden), adds real measurement of step parameters.

But apart from its medical applications, which enable specialists to control each patient, the cane is also aimed at promoting active ageing. In fact, it is already being tested in the so-called Active Participation Centers.

The next stage of the project is to add intelligence by developing a neural network capable of processing and interpreting the data collected by the cane in order to predict more complex indicators - a step forward in the functionality of walking aids.

Credit: 
University of Malaga

Criteria for the reduction of environmental impact applied in the Roman Theatre of Itálica

image: This is a live performance in the Roman Theatre of Italica (Sevilla), Spain.

Image: 
Universidad de Sevilla

In the majority of studies carried out until now, Life-Cycle Assessment (LCA) has been used as a method for the final evaluation of an already finished design. This article proposes a new approach that uses LCA as an evaluation tool during design, making environmental-impact reduction criteria part of the decision-making process in projects so that they affect the final outcome.

However, very few studies apply Life-Cycle Assessment to heritage interventions. Taking into account the value of heritage buildings is necessary for planning intervention proposals with minimal environmental impact. On this matter, researchers from the Seville Higher School of Architecture have published a study in which they apply the LCA methodology to the Roman Theatre of Itálica. Specifically, they have developed tools that link LCA and BIM software so that environmental-impact reduction criteria can be integrated into projects from the earliest design stage.

"Since 2011, we have been carrying out an intervention in the Roman Theatre of Itálica that allows it to be used as a space for contemporary dance and theatre, so that it can hold the International Dance Festival that is organised by Seville's provincial authority (diputación provincial). The elements that have been designed, and that are some 14 metres above the stage to support the electro-acoustic equipment, are totally removable and allow the theatre to recover its original appearances when there are no performances. For the design of the final solution LCA tools were used, which, as indicated in the article, allowed the final configuration to be adjusted", explains the University of Seville teacher, Juan Carlos Gómez de Cózar.

"Basically, the strategy of being able to reverse the intervention means that the piece of heritage can be returned to its original configuration if necessary. On the other hand, the use of LCA tools allows for the minimisation of environmental impact that a specific solution produces as opposed to other design options, thus making it possible to carry out heritage interventions that give the piece of heritage value and use and extend its useful life", adds a member of the research group TEP-130 'Architecture, heritage and sustainability: acoustics, illumination, optics and energy".

Life-cycle assessment

From the environmental point of view, life-cycle assessment applied to buildings makes it possible to calculate the impact of a building over its whole life cycle. To achieve this, every phase of this building is studied (production, construction, use, demolition and end of life) and its impact is measured in different categories, such as, for example, GWP (global warming potential). In this way, it is possible to take into account the complete impact that a building produces, both when it is being built and demolished (including both the materials and the processes that are necessary for the work) and when it is in use (if the design is not correct, the building will use a lot of energy for heating, air conditioning, lighting, etc.).
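
As a schematic illustration of this kind of accounting - not the BIM-linked tool developed by the Seville team - the total impact in a category such as GWP can be tallied as the sum of the impacts of each life-cycle phase, and competing design options compared on that total. All figures below are placeholders.

```python
# Schematic life-cycle assessment tally: the total impact in one category
# (here GWP, in kg CO2-equivalent) is the sum over the life-cycle phases
# named in the text. All numbers are placeholders, not project data.

phases_kg_co2e = {
    "production":   12_000.0,  # manufacturing of materials
    "construction":  1_500.0,  # on-site processes and transport
    "use":           4_000.0,  # energy for lighting, equipment, etc.
    "demolition":      800.0,  # dismantling / removal
    "end_of_life":     600.0,  # disposal or recycling of materials
}

def gwp_total(phases: dict) -> float:
    """Global warming potential of the whole life cycle."""
    return sum(phases.values())

def preferred_design(design_a: dict, design_b: dict) -> str:
    """Pick the design option with the lower life-cycle GWP."""
    return "A" if gwp_total(design_a) <= gwp_total(design_b) else "B"

print(f"Total GWP: {gwp_total(phases_kg_co2e):.0f} kg CO2e")
```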

Credit: 
University of Seville

Researchers turn liquid metal into a plasma

image: Erupting plasma loops are seen above the surface of the sun. Plasma is the most abundant form of matter in the universe, and Rochester scientists are finding new ways to observe and create plasmas.

Image: 
NASA/SDO photo

Most laypersons are familiar with the three states of matter as solids, liquids, and gases. But there are other forms that exist. Plasmas, for example, are the most abundant form of matter in the universe, found throughout our solar system in the sun and other planetary bodies. Scientists are still working to understand the fundamentals of this state of matter, which is proving to be ever more significant, not only in explaining how the universe works but in harnessing material for alternative forms of energy.

For the first time, researchers at the University of Rochester's Laboratory for Laser Energetics (LLE) have found a way to turn a liquid metal into a plasma and to observe the temperature where a liquid under high-density conditions crosses over to a plasma state. Their observations, published in Physical Review Letters, have implications for better understanding stars and planets and could aid in the realization of controlled nuclear fusion--a promising alternative energy source whose realization has eluded scientists for decades. The research is supported by the US Department of Energy and the National Nuclear Security Administration.

WHAT IS A PLASMA?

Plasmas consist of a hot soup of free moving electrons and ions--atoms that have lost their electrons--that easily conducts electricity. Although plasmas are not common naturally on Earth, they comprise most of the matter in the observable universe, such as the surface of the sun. Scientists are able to generate artificial plasmas here on Earth, typically by heating a gas to thousands of degrees Fahrenheit, which strips the atoms of their electrons. On a smaller scale, this is the same process that allows plasma TVs and neon signs to "glow": electricity excites the atoms of a neon gas, causing neon to enter a plasma state and emit photons of light.

FROM A LIQUID TO A PLASMA

As Mohamed Zaghoo, a research associate at the LLE, and his colleagues observed, however, there is another way to create a plasma: under high density conditions, heating a liquid metal to very high temperatures will also produce a dense plasma. "The transition to the latter has not been observed scientifically before and is precisely what we did," Zaghoo says.

One of the unique aspects of this observation is that liquid metals at high densities exhibit quantum properties; however, if they are allowed to cross over to the plasma state at high densities, they will exhibit classical properties. In the 1920s, Enrico Fermi and Paul Dirac, two of the founders of quantum mechanics, introduced the statistical formulation that describes the behavior of matter made out of electrons, neutrons, and protons--normal matter that makes up the objects of Earth. Fermi and Dirac hypothesized that at certain conditions--extremely high densities or extremely low temperatures--electrons or protons have to assume certain quantum properties that are not described by classical physics. A plasma, however, does not follow this paradigm.

In order to observe a liquid metal crossing over to a plasma, the LLE researchers started off with the liquid metal deuterium, which displayed the classical properties of a liquid. To increase the density of the deuterium, they cooled it to 21 kelvin (-422 degrees Fahrenheit). The researchers then used the LLE's OMEGA lasers to set off a strong shockwave through the ultracool liquid deuterium. The shockwave compressed the deuterium to pressures up to five million times greater than atmospheric pressure, while also increasing its temperature to almost 180,000 degrees Fahrenheit. The sample started off completely transparent, but as the pressure rose, it transformed into a shiny metal with high optical reflectivity.

"By monitoring the reflectance of the sample as a function of its temperature, we were able to observe the precise conditions where this simple lustrous liquid metal transformed into a dense plasma," Zaghoo says.

UNDERSTANDING MATTER AT EXTREME CONDITIONS

The researchers observed that the liquid metal initially exhibited the quantum properties of electrons that would be expected at extreme temperatures and densities. However, "at about 90,000 degrees Fahrenheit, the reflectance of the metallic deuterium started rising with a slope that is expected if the electrons in the system are no longer quantum but classical," Zaghoo says. "This means that the metal had become a plasma."

That is, the LLE researchers started off with a simple liquid. Increasing the density to extreme conditions made the liquid enter a state where it exhibited quantum properties. Raising the temperature even further made it turn into a plasma, at which point it exhibited classical properties, yet was still under high-density conditions, says Suxing Hu, a senior scientist at LLE and a co-author on the study. "What is remarkable is that the conditions at which this crossover between quantum and classical occurs is different from what most people expected based on plasma textbooks. Furthermore, this behavior could be universal to all other metals."

Understanding these fundamentals of liquids and plasmas allows researchers to develop new models to describe how materials at high densities conduct electricity and heat, and can help explain matter in the extremes of the solar system, as well as help in attaining fusion energy, Zaghoo says. "This work is not just a laboratory curiosity. Plasmas comprise the vast interiors of astrophysical bodies like brown dwarfs and also represent the states of matter needed to achieve thermonuclear fusion. These models are essential in our understanding of how to better design experiments to achieve fusion."

Credit: 
University of Rochester

Research connects dots among ocean dynamics, drought and forests

image: University of Wyoming undergraduate students work with Bryan Shuman (in small boat), a UW professor of geology and geophysics, to collect sediment core samples from the bottom of small lakes in the northeast United States as part of a study of ancient droughts. Pictured, from left, are Nicolas Mores (wearing hat), Ryan Davis, John Calder (a Ph.D. student) and Sara Burrell.

Image: 
Marc Serravezza

In a time of drastic change, humans look for predictability. A recent study led by a University of Wyoming researcher found that even in dramatically changing climates, mechanisms can be found that predict how those changes will play out. The last ice age ended about 11,000 years ago and, since then, climates have continuously changed, triggering constant shifts in the landscape.

This study found predictable, traceable connections between changes in how the Atlantic Ocean flowed and operated with centuries-long droughts and changes in forest makeup. Connections like these provide a useful framework for anticipating how climate change will continue to shape the way weather and ecosystems look in the future.

"Our study found that, over the past 8,000 years, shifts in the Gulf Stream in the Atlantic led to severe drought in North America," says Bryan Shuman, a professor in UW's Department of Geology and Geophysics, who headed up the research that came to these conclusions. "The mechanics of this connection remain today, and the potential for changes in the ocean to lead to severe droughts highlights a serious risk for the U.S."

"However, the predictability -- the strong ability to forecast drought and its impacts -- is good news," Shuman adds. "The study focused on an area of the Atlantic Ocean that is experiencing rapid changes today. We can use that predictability to anticipate similar changes in the future and prepare for them to the best of our ability."

Shuman was lead author of the paper, titled "Predictable Hydrological and Ecological Responses to Holocene North Atlantic Variability," that was published today (March 11) in the Proceedings of the National Academy of Sciences (PNAS). The journal is one of the world's most prestigious multidisciplinary scientific serials, with coverage spanning the biological, physical and social sciences.

Other contributors were from the University of Wisconsin-Madison, Emerson College and Harvard University. Jeremiah Marsicek, a postdoctoral researcher at the University of Wisconsin-Madison and a former Ph.D. student of Shuman's, was a co-author of the paper. Marsicek graduated from UW in 2017.

The paper focuses on the role of changing ocean circulation in creating droughts in the northeastern United States. Researchers looked at the combined evidence of changing water levels of lakes and changes in makeup of eastern forests to explore the timing and potential triggers for these changes.

Lake sediment cores, which track the history of a lake for thousands of years, show that the region has become progressively wetter over the past 11,000 years, but that noticeable droughts interrupted the trend for centuries at a time. Researchers then searched for major changes happening at the same time that could cause the droughts and connected them to major shifts in the Gulf Stream in the Atlantic.

"The paper also is important for two other reasons," Shuman explains. "One, it shows that forests can change dramatically as climate changes. What tree species grow in a certain area changes quickly as climate changes; and, two, by using and comparing multiple methods and locations, we showed that our results were not a fluke. We can get reliable estimates of how climate has changed in the past, which makes us more confident of how we can predict how climate will change in the future."

During this study, researchers examined 8,000 years of climate variations and their effects in the North Atlantic region. The currently humid Northeast was once as dry as the eastern Great Plains region, which illustrates how severely climate changes can alter water supplies, he says.

The importance of climate change arises from effects on natural resources such as water and ecosystems, the paper says. Diagnosing the predictability of these events in the past can help to anticipate future changes, while also clarifying what is known about climate in the past, according to the paper.

The project was funded by the National Science Foundation (NSF).

"Significant environmental changes are taking place on Earth. This paper shows that past changes in ecosystems as different as the Atlantic Ocean and North American forests were linked with one another in important and scientifically predictable ways," says Matthew Kane, a program director at the NSF. "The ability of science to understand these links and forecast outcomes has significant implications for agriculture, forestry and our nation's future economic prosperity."

Credit: 
University of Wyoming

Sex differences in personality traits in Asian elephants

image: Personality traits between sexes in Asian elephants differ: males are more aggressive whereas females are more sociable.

Image: 
John Jackson

Personality in humans has been a well-known concept for a long time, but during the last two decades science has shown that other animals also express personalities, for example in sociability. Differences in personality between the sexes are also a topic in behavioural biology, but beyond primates little is known about such differences in long-lived, highly social mammals, as studies tend to focus on shorter-lived species.

The researchers of the University of Turku studied a semi-captive population of timber elephants in Myanmar. Even though a previous study showed that male and female elephants do not differ in their personality structure, the new study offers evidence that there are still differences in how strongly each personality trait is expressed in males and females.

"We found that males scored higher on the Aggressiveness personality trait compared to females. Males also tended to be rated as less sociable than females, scoring lower on the Sociability personality trait than females. We found no sex difference in the Attentiveness personality trait or in variances of any of the personality traits examined", says Postdoctoral Researcher and the lead author of the study Martin Seltmann from the Department of Biology at the University of Turku.

The researchers used questionnaires to find out about the elephants' personalities. The questions were directed to mahouts, or elephant head riders, and they scored how often the elephant displayed each of the 28 different behaviours on a 4-point scale. The surveys were conducted in Myanmar in 2014-2017 on over 250 timber elephants living in their natural habitat.

"The elephants work in the timber industry, pulling logs from one place to another. This is a very unique research environment and population, enabling us to study several hundreds of elephants", says Dr Seltmann.

Sex-specific differences in personality and their link to species' life-history

Female Asian elephants live in small, strongly bonded family units in which group cohesion is of high importance. Exhibiting consistent and predictable personalities could further aid conflict resolution within the group, with different personalities adopting different social roles.

"Higher female agreeableness seems to be a common pattern in long-lived highly social mammals. These sex-specific differences in elephant Sociability could be explained by the different social lives of the sexes", says researcher Mirkka Lahdenperä who participated in the study.

Higher Aggressiveness in male elephants might be explained by their need to assess each other's dominance status by less aggressive sparring bouts or by more aggressive interactions during musth. Higher aggression is important in maximising reproductive success in male elephants as older, larger and more aggressive males are more successful in mate guarding than the less aggressive males.

"The personality trait Attentiveness includes behaviours that are generally associated with the response of the elephant towards its mahout, like attentiveness, obedience, and vigilance. Since both sexes work and live under similar conditions and similar work regulations apply to them, it is not surprising that those behaviours are not significantly influenced by sex", explains Dr Seltmann.

The study contributes to the growing literature on the existence of sex differences in animal personality. Males and females of the same species can experience different selection pressures and follow different life-history strategies, which in turn can be reflected by sex-specific personality differences.

"Personality studies sometimes focus only on male or female individuals of a species, even though behaviour and life-history can differ substantially between the sexes, leading to different selection pressures on male and female personality", concludes Dr Seltmann.

Credit: 
University of Turku

Sensing shakes

image: A map of Japan showing locations for the epicenter of the 2011 Tohoku earthquake (✩), Kamioka (K), Matsushiro (M) and the seismic survey instruments used (△ and ●).

Image: 
©2019 Kimura Masaya

Every year earthquakes worldwide claim hundreds or even thousands of lives. Forewarning allows people to head for safety and a matter of seconds could spell the difference between life and death. UTokyo researchers demonstrate a new earthquake detection method -- their technique exploits subtle telltale gravitational signals traveling ahead of the tremors. Future research could boost early warning systems.

The shock of the 2011 Tohoku earthquake in eastern Japan still resonates for many. It caused unimaginable devastation, but also generated vast amounts of seismic and other kinds of data. Years later researchers still mine this data to improve models and find novel ways to use it, which could help people in the future. A team of researchers from the University of Tokyo's Earthquake Research Institute (ERI) found something in this data which could help the field of research and might someday even save lives.

It all started when ERI Associate Professor Shingo Watada read an interesting physics paper on an unrelated topic by J. Harms from Istituto Nazionale di Fisica Nucleare in Italy. The paper suggests gravimeters -- sensors which measure the strength of local gravity -- could theoretically detect earthquakes.

"This got me thinking," said Watada. "If we have enough seismic and gravitational data from the time and place a big earthquake hit, we could learn to detect earthquakes with gravimeters as well as seismometers. This could be an important tool for future research of seismic phenomena."

The idea works like this. Earthquakes occur when a point along the edge of one of the tectonic plates that make up the earth's surface moves suddenly. This generates seismic waves which radiate from that point at 6-8 kilometers per second. These waves transmit energy through the earth and rapidly alter the density of the subsurface material they pass through. Dense material imparts a slightly greater gravitational attraction than less dense material. Because changes in gravity propagate at the speed of light, sensitive gravimeters can pick up these changes in density ahead of the seismic waves' arrival.

"This is the first time anyone has shown definitive earthquake signals with such a method. Others have investigated the idea, yet not found reliable signals," elaborated ERI postgraduate Masaya Kimura. "Our approach is unique as we examined a broader range of sensors active during the 2011 earthquake. And we used special processing methods to isolate quiet gravitational signals from the noisy data."

Japan is famously very seismically active so it's no surprise there are extensive networks of seismic instruments on land and at sea in the region. The researchers used a range of seismic data from these and also superconducting gravimeters (SGs) in Kamioka, Gifu Prefecture, and Matsushiro, Nagano Prefecture, in central Japan.

The signal analysis they performed was extremely reliable, scoring what scientists term 7-sigma significance, meaning there is only about a one-in-a-trillion chance the result is a statistical fluke. This fact greatly helps to prove the concept and will be useful in the calibration of future instruments built specifically to help detect earthquakes. Associate Professor Masaki Ando from the Department of Physics invented a novel kind of gravimeter -- the torsion bar antenna (TOBA) -- which aims to be the first of such instruments.
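
For readers who want to check that figure: the one-sided tail probability of a normal distribution at 7 standard deviations is about 1.3 x 10^-12, i.e. on the order of one in a trillion. A two-line sanity check (assuming SciPy is available):

```python
from scipy.stats import norm

# One-sided tail probability at 7 sigma: ~1.28e-12, roughly one in a trillion.
print(norm.sf(7))
```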

"SGs and seismometers are not ideal as the sensors within them move together with the instrument, which almost cancels subtle signals from earthquakes," explained ERI Associate Professor Nobuki Kame. "This is known as an Einstein's elevator, or the equivalence principle. However, the TOBA will overcome this problem. It senses changes in gravity gradient despite motion. It was originally designed to detect gravitational waves from the big bang, like earthquakes in space, but our purpose is more down-to-earth."

The team dreams of a network of TOBA sensors distributed around seismically active regions - an early warning system that could alert people 10 seconds before the first ground-shaking waves arrive from an epicenter 100 km away. Many earthquake deaths occur because people are caught off-guard inside buildings that collapse on them. Imagine the difference 10 seconds could make. This will take time, but the researchers are continually refining their models to improve the accuracy of the method for eventual use in the field.
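
A rough back-of-the-envelope check of that 10-second figure, using the wave speed quoted above (the detection and alerting delay is an assumption):

```python
# The gravity signal travels at the speed of light, so it arrives almost
# instantly; the warning window is set by how long the seismic waves take
# to cover the distance, minus detection and alerting time (assumed here).

distance_km = 100
seismic_speed_km_s = 7       # article quotes 6-8 km/s
processing_delay_s = 3       # assumed detection/alerting latency

arrival_s = distance_km / seismic_speed_km_s   # ~14 s after the quake
warning_s = arrival_s - processing_delay_s     # ~11 s of usable warning
print(f"waves arrive after ~{arrival_s:.0f} s, warning window ~{warning_s:.0f} s")
```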

Credit: 
University of Tokyo

EPFL researchers simulate the process of adhesive wear

image: Image of the computer simulation showing adhesive wear on a self-affine surface. © LSMS / EPFL

Image: 
© LSMS / EPFL

Surface wear describes the process of material loss when two surfaces come into contact with each other. It has significant economic, social and health consequences - just think of the fine particles emitted by moving vehicles. What's more, it can be observed at all levels, from the nanoscale up to the scale of tectonic faults, with the formation of gouge. There are several wear mechanisms, yet the adhesive type is most common. It takes place when two surfaces - such as two pieces of the same metal - rub against one another and adhere.

One of the parameters that influence the wear mechanism is surface roughness. A better understanding of how surface roughness changes during the wear process would improve our control over this mechanism. This could lead to significant reductions in energy consumption, greenhouse gas emissions and costs.

Researchers at EPFL's Computational Solid Mechanics Laboratory (LSMS) have taken an important step in this direction. They have digitally simulated how surface roughness changes over time, and their results are in line with experimental results. What sets their simulations apart is their duration: using a method developed at EPFL, the LSMS researchers were able to simulate these mechanisms over an extended period of time. In other words, they managed to capture the entire process - from the initial geometry to the final fractal geometry. Their findings were published on 8 March in Nature Communications.

This study is the LSMS researchers' third on adhesive wear. Their first study - published in 2016 in Nature Communications - used digital simulations to describe how the process of adhesive wear produces fine particles. In 2017, taking their simulations further, they came out with a second study, appearing this time in the Proceedings of the National Academy of Sciences, demonstrating that it was possible to predict the volume, shape and size of these particles.

Incomplete picture

Scientists are still far from fully understanding the physics underlying wear, and engineers must still carry out ad hoc experiments for each situation. What is known, however, is that worn surfaces display a characteristic fractal morphology, called self-affine, that has some fundamental properties regardless of the material and the scale. The origins of this self-affine morphology are still unknown.

Little work has been done on how surface roughness changes over time - and it has been mostly experimental. One limitation of experiments is that, because of the debris that forms, it is not easy to monitor how surface morphology changes during the rubbing process. The researchers overcame this problem through their digital simulations, which provide a constant stream of data.

Powerful digital simulations

"We used high-performance computer simulations to track the change in surface morphology in 2D materials," says Enrico Milanese, a PhD student at the LSMS. "In our simulations, we observed that contact between two surfaces always generates a wear debris particle. That particle is then forced to roll between the two surfaces, wearing them down. This led us to conclude that wear debris must be present for the surfaces to develop their characteristic self-affine roughness."

In the future, the LSMS researchers hope to explore the origins of adhesive wear by applying their simulation approach to 3D models of materials that are of interest to industry.

Credit: 
Ecole Polytechnique Fédérale de Lausanne

Canadians' consumption of fruit and vegetables drops 13 per cent in 11 years

image: Claire Tugault-Lafleur, a postdoctoral fellow in UBC's food, nutrition and health program, was lead author of the study.

Image: 
UBC Faculty of Land & Food Systems

Two surveys taken 11 years apart show a 13-per-cent decrease in the amount of fruit and vegetables being consumed by Canadians, new University of British Columbia research has found.

And while consumption of milk and dairy products also declined during the study period between 2004 and 2015, Canadians were eating more meat and alternatives in 2015 than they were a decade earlier.

"It's essential to look at what foods people are eating and whether food group intakes have changed over time to identify challenges and opportunities to promote healthier eating patterns among Canadians," said Claire Tugault-Lafleur, a postdoctoral fellow in UBC's food, nutrition and health program who was lead author of the study published today in Nutrients. "While some studies have recently reported trends in Canadians' intake of macronutrients like energy and total sugars, nobody had looked at differences in food group intakes during this period."

Researchers examined dietary data from two nationwide surveys involving more than 50,000 Canadians aged two and older. In both 2004 and 2015, respondents provided information about food and beverages they had consumed in the past 24 hours.

In 2015, Canadians reported consuming an average of 4.6 servings of total fruit and vegetables daily, down from 5.2 servings per day in 2004. The decrease was largely explained by fewer servings of vegetables (outside the dark green and orange category), potatoes, and fruit juices. While Canadians increased their intake of dark green and orange vegetables, eggs, legumes, nuts and seeds over this time, the average daily intake of other healthy dietary components like whole fruit, whole grains, fish and shellfish was stagnant. Canadians also reported fewer daily servings of fluid milk in 2015 compared to 2004.

The researchers were encouraged to find that energy from sugary beverages declined, on average, by 32 calories per day, and the decrease was more substantial among young people. For example, Canadian adolescents (aged 13 to 17) reported consuming, on average, 73 fewer daily calories from sugary beverages--a 43-per-cent decrease from 2004.
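
Those two figures imply a 2004 baseline of roughly 170 calories per day from sugary beverages for adolescents - a quick consistency check rather than a number reported in the article:

```python
# 73 fewer kcal/day described as a 43% decrease implies the 2004 baseline:
drop_kcal = 73
drop_fraction = 0.43
baseline_2004 = drop_kcal / drop_fraction   # ~170 kcal/day in 2004
level_2015 = baseline_2004 - drop_kcal      # ~97 kcal/day in 2015
print(round(baseline_2004), round(level_2015))
```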

In 2007, the Canadian government released Eating Well with Canada's Food Guide. This version of the food guide included specific recommendations regarding amounts and types of foods to consume from each of the four core food groups (fruit and vegetables, grain products, milk and alternatives, meat and alternatives). Some of the shifts uncovered by the UBC study were in line with recommendations, but many healthy food groups recommended in the 2007 guide either saw no increase (whole fruit, whole grains, fish and shellfish) or decreased (vegetables, fluid milk). This suggests that more effective efforts are needed to address barriers to healthier diets among Canadians.

"Poor diet quality is the number one contributor to the burden of chronic diseases in Canada," said Tugault-Lafleur.

Credit: 
University of British Columbia

In developing nations, national parks could save endangered species

image: Research led by Stacy Lindshield shows protected areas like national parks are effectively preserving many mammal species in Senegal.

Image: 
Purdue University/Stacy Lindshield

The West African chimpanzee population has declined by nearly 80 percent in recent decades. Habitat loss is threatening their livelihoods across the continent, and especially in Senegal, where corporate mining has started eating up land in recent years.

The geographical distribution of West African chimps overlaps almost perfectly with gold and iron ore deposits, and unfortunately for the chimps, mining is a key piece of the country's development strategy, said Stacy Lindshield, a biological anthropologist at Purdue University.

Extractive industries are already improving people's livelihoods and promoting investment and infrastructure development, and researchers are trying to find a way to protect Senegal's chimps without surrendering these benefits. Many of Earth's animal species are now dying off at accelerated rates, but chimps, as humans' closest living relatives, tend to tug at our heartstrings. Chimps are scientifically important, too - because they participate in collective activities such as hunting and food-sharing, they're often studied by social science researchers.

A new study of animal populations inside and outside a protected area in Senegal, Niokolo-Koba National Park, shows that protecting such an area from human interaction and development preserves not only chimps but many other mammal species. The findings were published in the journal Folia Primatologica.

"We saw the same number of chimpanzee species inside and outside the park, but more species of carnivores and ungulates in the protected area," Lindshield said.

Although habitat loss is the biggest threat to West African chimps, they're sometimes killed for meat. This is uncommon in Senegal, where eating chimpanzee meat is a taboo - people think chimps are too similar to humans to eat. But this isn't the case in other West African countries, where researchers might see a bigger difference in chimp populations inside and outside protected areas. National parks could be especially effective at protecting chimps in these nations.

The difference in the number of species of carnivores and hooved animals (known as ungulates) inside and outside the park was stark - their populations were 14 and 42 percent higher in the park, respectively. This is in sharp contrast with what Lindshield was hearing on the ground in Senegal: that there was nothing in the park and all the animals were gone.

"There were qualitative and quantitative differences between what people were telling me and what I was seeing in the park," she said. "Niokolo-Koba National Park is huge, and the area we study is nestled deeply in the interior where it's difficult for humans to access. As a consequence, we see a lot of animals there."

Hunting practices and human-carnivore conflict are two big reasons for ungulates thriving inside the park. These animals are frequently targeted by hunters, and some carnivore species turn to livestock as a food source when their prey species are dwindling, creating potential for conflict with humans. Because the two sites are relatively close geographically and have similar grassland, woodland and forest cover, the researchers think human activity is the root of differences between the two sites.

Lindshield's team conducted basic field surveys by walking around the two sites and recording the animals they saw. They also installed camera traps at key water sources, gallery forests and caves to record more rare and nocturnal animals.

"We're engaging in basic research, but it's crucial in an area that's rapidly developing and home to an endangered species," Lindshield said. "This provides evidence that the protected area is effective, at least where we are working, counter to what I was hearing from the public. The management of protected areas is highly complex. Myriad challenges can make management goals nearly impossible, such as funding shortfalls or lack of buy-in from local communities, but I think it's important for people to recognize that this park is not a lost cause; it's working as it's intended to at Assirik, especially for large ungulates and carnivores."

Lindshield hopes her future studies will uncover not only which species exist in each site, but population sizes of each species. This metric, known as species evenness, is a key measure of biodiversity.

Data from the unprotected area in Senegal was collected by Jill Pruetz of Texas State University. Stephanie Bogart and Papa Ibnou Ndiaye of the University of Florida, and Mallé Gueye of Niokolo-Koba National Park, also contributed to this research. Funding was provided by the National Science Foundation, National Geographic Society, Leakey Foundation, Rufford Foundation, Primate Conservation Inc., Jane Goodall Research Center at University of Southern California, Purdue and Iowa State University.

Credit: 
Purdue University

Thyroid hormone helped our ancestors survive but left us susceptible

Although most victims survive the 735,000 heart attacks that occur annually in the U.S., their heart tissue is often irreparably damaged -- unlike many other cells in the body, once injured, heart cells cannot regenerate. According to a new UC San Francisco study, the issue may date back to our earliest mammalian ancestors, which may have lost the ability to regenerate heart tissue in exchange for endothermy -- or as it's known colloquially, "warm-bloodedness" -- a Faustian evolutionary bargain that ushered in the age of mammals but left modern humans vulnerable to irreparable tissue damage after heart attack.

The Warm-Blooded Advantage

Early mammals were small, rodent-like creatures that emerged in a world dominated by cold-blooded animals. Rather than compete directly, early mammals evolved a novel strategy that enabled them to occupy new niches: endothermy. While cold-blooded animals, unable to regulate their own body temperature, were hostage to ever-changing weather conditions and relegated to temperate climates, warm-blooded mammals were able to spread to colder climes and to thrive nocturnally. But, as the new study shows, this came at a steep cost.

"Many of the lower vertebrates can regenerate body parts and organs, including the heart, but most mammals cannot. This feature was lost somewhere in the ectotherm-to-endotherm transition," said Guo Huang, PhD, investigator at UCSF's Cardiovascular Research Institute, assistant professor of physiology and senior author of the new study, published March 7 in the journal Science.

At first glance, there's no obvious connection between a mammal's ability to regulate its body temperature and its inability to repair heart damage. But the new study reveals that these seemingly disparate biological traits are inextricably linked -- by thyroid hormones.

Thyroid Hormones Halt Heart Cell Regeneration

The thyroid gland produces a pair of well-studied hormones that are known to regulate body temperature, metabolic rate and normal heart function. Because of their critical role in promoting heat generation to maintain body temperature, these hormones have been posited to be the driving force behind the evolutionary transition from cold- to warm-bloodedness.

But Huang's study revealed that these hormones are also responsible for shutting off cardiac cell division, thus preventing heart tissue from repairing itself after an injury. This discovery represents the first demonstrated connection between thyroid hormones, cardiac development and repair, and the evolution of endothermy.

"Before our study, scientists knew that thyroid hormones were important for controlling heart rate and heart contractility. But the link with heart regenerative potential had never been shown before," Huang said.

Huang's team took a multi-species approach, comparing heart cell "ploidy" -- the number of copies of each chromosome pair in a cell -- across 41 different vertebrate species. Ploidy is closely linked to a cell's ability to divide and replicate. Virtually all actively dividing animal cells are diploid, containing just one pair of each chromosome -- one copy inherited from the mother and one from the father. By contrast, polyploid cells contain multiple copies of each pair and generally can't divide.

This comparative approach revealed a clear connection between ploidy and body temperature. Cold-blooded animals -- fish, amphibians and reptiles -- had heart cells that were largely diploid and responded to cardiac injury by ramping up cell division. Warm-blooded mammals had heart cells that were overwhelmingly polyploid, and lab experiments confirmed that these cells rarely divide in response to cardiac damage.

"This led us to hypothesize that the same thyroid hormones responsible for regulating body temperature might also be responsible for the diploid-to-polyploid transition and the arrest of cardiac cell division," Huang said.

The researchers confirmed their hunch in a series of lab experiments involving mice, a warm-blooded mammal in which heart cells normally cannot regenerate, and zebrafish, a cold-blooded animal noted for its ability to completely repair its heart, even if large chunks -- up to 20 percent -- are surgically amputated.

Mammals Gain, Fish Lose Heart Healing After Thyroid Hormone Levels Altered

In the womb, mice have diploid heart cells that regularly replicate to produce new cardiac tissue. But the heart cells of newborn mice undergo rapid polyploidization and lose the ability to divide -- events that coincide with a more than 50-fold increase in circulating thyroid hormones.

Experiments showed that these events were more than mere coincidence. When the researchers injected newborn mice with a drug that blocked thyroid hormone receptors and inspected their hearts two weeks later, they found four times as many dividing diploid heart muscle cells as in mice that received no drug. Similar results were observed when they administered a different drug that impeded the production of thyroid hormones.

The researchers also produced genetically engineered mice whose heart cells lacked a functional receptor for thyroid hormone, which allowed their hearts to develop free from the influence of thyroid hormones. Unlike normal mice, these mutant mice were found to have significant numbers of actively dividing, diploid heart cells. Furthermore, when the scientists restricted blood flow to the heart -- a condition that usually causes permanent damage to cardiac tissue -- they observed a 10-fold increase in the number of dividing heart cells and 62 percent less scar tissue when compared with normal mice. Meanwhile, echocardiograms revealed an 11 percent improvement in heart function over normal mice after injury.

In stark contrast to mice and other mammals, adult zebrafish have relatively low levels of circulating thyroid hormone. This led Huang to wonder whether increasing the levels of thyroid hormone could shut off the self-repair machinery that makes zebrafish hearts uncommonly resilient.

The researchers added thyroid hormone to the water in zebrafish tanks, then surgically amputated a portion of the heart and provided the fish with ample recovery time. Normally, zebrafish would be able to completely repair this kind of damage over the course of a few weeks. But fish that were reared in a high-hormone environment experienced a 45 percent reduction in heart cell division, a significant increase in polyploid heart cells and pronounced scarring of heart tissue after injury. Just as in mammals, thyroid hormones led to impaired cardiac regeneration in fish.

"Our results demonstrate an evolutionarily conserved function for thyroid hormone in regulating heart cell proliferation and suggest that loss of regenerative potential was a trade-off that allowed mammals to become warm-blooded," Huang said. "For early mammals, endothermy was more advantageous than retention of regenerative potential. But now, with medical improvements allowing us to live much longer, this loss of cardiac regeneration becomes more problematic and is a fundamental cause of heart disease."

Credit: 
University of California - San Francisco

Unlocking the untapped potential of light in optical communications

image: The multiplexing/demultiplexing module fabricated (a,d) employs a property of light called the "optical vortex" to transmit/receive multiple signals simultaneously through a shared optical medium. The required light waves with different optical vortexes are generated using a combination of a star coupler (b) and an optical-vortex generator (c).

Image: 
The Optical Networking and Communication Conference & Exhibition 2019

Scientists at Tokyo Institute of Technology have fabricated a multiplexer/demultiplexer module based on a property of light that was not being exploited in communications systems: the optical vortex. Such devices will be crucial for improving optical networks, which are the backbone of today's Internet, so that they can meet the traffic demands of tomorrow.

In our communication-centered era, Internet traffic has been increasing rapidly. The massive amounts of data that travel through the Internet are enabled by huge backbone networks, usually involving millions of connections deployed using optical communication technology. Foreseeing that this increase in data flow will not stop anytime soon, researchers worldwide are searching for ways to further develop and improve optical communications.

One ubiquitous technique in modern electronic communications is multiplexing, which is a way to maximize the use of the available bandwidth. Multiplexing consists of packing multiple signals into a single signal that can be sent through a shared medium, such as an optical fiber. The combined signal is then demultiplexed at the receiver and each simple signal is routed to its intended destination. Multiple multiplexing approaches are used nowadays to achieve speeds of over 100 Gbit/s through optical networks.

However, we need to find a way to cram more data into optical signals without requiring more energy and at a low cost; that is, new multiplexing technologies are needed. Recent promising methods involve taking advantage of properties of light not conventionally used for communication to encode independent signals. For example, the polarization of light has already been employed and practical applications have been proposed.

On the other hand, there is another characteristic of light, called the "optical vortex", that can be exploited. This was the focus of research carried out at Tokyo Institute of Technology, led by Assistant Professor Tomohiro Amemiya. "The optical vortex carries the orbital angular momentum of light and can be used to multiplex signals by assigning each signal to a light wave of different momentum," explains Amemiya. The application of the optical vortex to signal multiplexing represents untapped territory with great potential.

Of course, to even think of encoding signals onto light waves with different optical vortexes and transmitting them, it is first necessary to design and implement the circuitry needed for both the multiplexing and demultiplexing operations. The research team therefore designed and fabricated an orbital angular momentum multiplexing/demultiplexing module.

Their device was fabricated so as to take five independent signals as input. Using a combination of two tiny circuit structures, called a star coupler and an optical-vortex generator, each of the five signals is "encoded" with a unique orbital angular momentum. The output signal consists of a combination of the five signals, and the receiver circuit only has to carry out the multiplexing operation in reverse (demultiplexing) to end up again with the five independent signals.
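
As a conceptual illustration of why distinct orbital angular momenta can carry independent channels - a numerical toy model, not a description of the team's silicon device - light waves with azimuthal phase exp(i·l·φ) for different integer l are mutually orthogonal, so signals placed on different l values can be separated at the receiver by projecting onto the matching conjugate phase:

```python
import numpy as np

# Toy model of orbital-angular-momentum (OAM) multiplexing: each channel
# rides on an azimuthal phase profile exp(i * l * phi). Different integer
# orders l are orthogonal over a full turn, so projection recovers them.

phi = np.linspace(0, 2 * np.pi, 1024, endpoint=False)
oam_orders = [-2, -1, 0, 1, 2]              # five channels, as in the module
symbols = np.random.randn(len(oam_orders))  # stand-in for five data symbols

# Multiplex: superpose all five channels into one optical field.
field = sum(s * np.exp(1j * l * phi) for s, l in zip(symbols, oam_orders))

# Demultiplex: project onto each conjugate mode (inner product over phi).
recovered = [np.real(np.mean(field * np.exp(-1j * l * phi))) for l in oam_orders]

print(np.allclose(recovered, symbols))  # True: the channels separate cleanly
```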

The fabricated module is shown in Figure 1. The curved tips of the waveguides of the optical-vortex generator were made of silicon and measured a few micrometers. The fabrication process for the optical-vortex generator had been reported in previous research, and the work now done by the team demonstrates one concrete application of this technology.

Devices and multiplexing techniques such as the ones demonstrated by the team will be crucial in the very near future. "It is certain that the demand for high capacity systems with low cost and less energy losses will further increase in the future," states Amemiya. Fortunately, more ways to improve current communications systems by exploiting the untapped properties of light will surely become available to bring us one step forward in our communication era.

Credit: 
Tokyo Institute of Technology