NASA gets an eyeful of Typhoon Fengshen

image: On Nov. 15, the MODIS instrument that flies aboard NASA's Terra satellite provided a visible image of Typhoon Fengshen in the Northwestern Pacific Ocean after its eye opened.

Image: 
NASA Worldview

NASA's Terra satellite captured an image of Typhoon Fengshen in the Northwestern Pacific Ocean after the storm strengthened from a tropical storm to a typhoon and developed an eye.

On Nov. 15, the Moderate Resolution Imaging Spectroradiometer, or MODIS, instrument that flies aboard NASA's Terra satellite provided a visible image of Fengshen. The MODIS image showed the cyclone was producing a large area of deep convection and strong thunderstorms around the visible eye, which was about 10 nautical miles in diameter. Bands of thunderstorms were wrapping into the eye.

At 5 a.m. EDT (0900 UTC), the center of Typhoon Fengshen was located near latitude 21.6 degrees north and longitude 142.3 degrees east, about 227 nautical miles south-southeast of Iwo To Island, Japan. Fengshen was tracking to the north-northwest. Maximum sustained winds were near 110 knots (127 mph/204 kph).
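The wind-speed figures above can be sanity-checked with the standard knot conversion factors (a small illustrative snippet; the constants are standard conversions, not values from the article):

```python
# Quick check of the wind-speed conversions quoted in the article
# (110 knots expressed in mph and km/h).
KT_TO_MPH = 1.15078  # statute miles per hour per knot
KT_TO_KPH = 1.852    # kilometres per hour per knot

wind_kt = 110
print(f"{wind_kt} kt = {wind_kt * KT_TO_MPH:.0f} mph = {wind_kt * KT_TO_KPH:.0f} kph")
```

Rounded to whole numbers, this reproduces the 127 mph / 204 kph figures in the text.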

The Joint Typhoon Warning Center expects slight strengthening over the day on Nov. 15 before a weakening trend begins. The storm is expected to make a loop over the next five days and head toward the northwest by Nov. 19, passing to the northeast of Iwo To.

NASA's Terra satellite is one in a fleet of NASA satellites that provide data for hurricane research.

Hurricanes are the most powerful weather events on Earth. NASA's expertise in space and scientific exploration contributes to essential services provided to the American people by other federal agencies, such as hurricane weather forecasting.

Credit: 
NASA/Goddard Space Flight Center

Seeing past the stigma

image: Erythroxylum coca in bloom.

Image: 
J. D'Auria / IPK

Since the Western world came across the South American plant genus Erythroxylum, this multifaceted genus has been associated with the production of soft drinks such as Coca-Cola, or with abuse of the purified plant substance in the form of the narcotic cocaine. However, the indigenous peoples of South America have been using different Erythroxylum species in their traditional medicine for thousands of years. A review article written by IPK scientist Dr. John D'Auria and a number of scientists from different Colombian research institutes now aims to reopen the conversation around the stigmatised plant by highlighting the numerous positive potential use cases of the controversial genus.

The plant genus Erythroxylum encompasses over 230 species and has been cultivated and used by South American communities for thousands of years. E. coca is among the oldest cultivated medicinal plant species, with evidence of its use dating back around 8,000 years. Nevertheless, relatively little is known about the biology of the different Erythroxylum species, as a comparatively modern stigma has hindered new research efforts. Biochemist Dr. John D'Auria explains: "For a long time, upon hearing 'coca', everyone equated it with either Coca-Cola or cocaine. The genus therefore carries a strongly negative connotation; cocaine in particular has caused severe social and economic damage." Despite the impeded scientific progress, Erythroxylum, in particular E. coca, is today one of the largest cash crops in South American countries such as Colombia, Peru and Bolivia, regardless of its illicit status in some regions. And despite previous political attempts, it seems unlikely that the plant will be eradicated as a crop, given its major role in South American societies and markets.

Keeping this in mind, Dr. D'Auria, based at the Leibniz Institute of Plant Genetics and Crop Plant Research (IPK) recently collaborated with a group of Colombian scientists. Together they compiled an interdisciplinary review paper on the domestication history and potential use-cases of the Erythroxylum genus which was recently published in "Molecules". Dr. D'Auria: "Some Erythroxylum compounds are dangerous and should be controlled. With the review paper, we are using a measured voice to say: here are other compounds and alternative ways people might be able to use these plants to produce medicinal and pharmaceutical compounds." Dr. D'Auria continues: "People chew coca to this day to reduce thirst-urge or alleviate muscular aches and there are various potential uses in modern medicine, for example in the area of dental health and procedures involving the upper respiratory tract." The hope is that one day these positive applications will help transform the cultivation and use of Erythroxylum without disrupting the livelihood of South American farmers.

In the meantime, within his research group "Metabolic Diversity" at the IPK in Gatersleben, Dr. D'Auria's primary focus lies on understanding how crop plants, such as wheat or alfalfa, can modify their metabolism. As Dr. D'Auria emphasises: "We are not married to any one compound or class of metabolic compounds. We are investigating variations of primary and specialised metabolism in plants and the variations achieved through their genetics and biochemistry. We aim to use the knowledge to improve crops in Germany and Europe within the context of climate change." Nevertheless, Dr. D'Auria will also continue to investigate the unique metabolic pathways of the Erythroxylum genus. Dr. D'Auria: "There are still ongoing efforts in understanding the coca plant. Our goal is to one day fully understand its specialised biosynthetic pathways so that we can create augmented, usable plants in the future."

Credit: 
Leibniz Institute of Plant Genetics and Crop Plant Research

We know we're full because a stretched intestine tells us so

We commonly think a full stomach is what tells us to stop eating, but it may be that a stretched intestine plays an even bigger role in making us feel sated, according to new laboratory research led by UC San Francisco neuroscientist Zachary Knight, PhD.

You may not believe it, especially heading into the holiday season, but your body is remarkably good at keeping your weight within an extremely narrow range in the long run, which it does by balancing how much you eat with how much energy you expend each day.

The extensive web of nerve endings lining your gut plays an important role in controlling how much you eat by monitoring the contents of the stomach and intestine and then sending signals back to the brain that boost or lower your appetite. Most scientists believe this feedback involves hormone-sensitive nerve endings in the gut that track the nutrients you consume and calculate when you've had enough, but no one has yet tracked down the exact type of neurons that convey these signals to the brain.

"Given how central eating is to our lives, it is remarkable that we still don't understand how our bodies know to stop being hungry when we eat food," said Knight, a Howard Hughes Medical Institute Investigator and associate professor in the Department of Physiology at UCSF.

One of the challenges to answering this question is that the thousands of sensory nerves involved in collecting sensory information from the stomach and intestine come in many different types, yet all of them transmit messages back to the brain via the same giant bundle, called the vagus nerve. Scientists can block or stimulate the activity of this nerve bundle and change animals' appetites, but it has been difficult to determine which vagal nerve endings in particular are responsible for the change.

To resolve this mystery, the Knight lab team, led by postdoctoral researcher Ling Bai, PhD, comprehensively mapped the molecular and anatomical identities of the vagal sensory neurons innervating the stomach and intestine. This new map, published November 14, 2019 in Cell, allowed the researchers to selectively stimulate different types of vagal neurons in mice, revealing that intestinal stretch sensors are uniquely able to stop even hungry mice from wanting to eat.

Comprehensive Map of Gut Nervous System Reveals Surprising Insights

Scientists had previously classified gut sensory neurons into three types based on the anatomy of their nerve endings: mucosal endings have nerve terminals that line the inner layer of the gut and detect hormones that reflect nutrient absorption; IGLEs (intraganglionic laminar endings) have nerve endings in the layers of muscle that surround the stomach and intestine and sense physical stretching of the gut; and IMAs (intramuscular arrays), whose function is still not known, but which may also sense stretching.

"The vagus nerve is the major neural pathway that transmits information from gut to brain, but the identities and functions of the specific neurons that are sending these signals were still poorly understood," Bai said. "We decided to use modern genetic techniques to systematically characterize the cell types that make up this pathway for the first time."

Using these techniques, Bai and colleagues discovered that mucosal endings actually come in many different varieties -- four of which the researchers studied in detail. Some of these were mainly found in the stomach and others mainly in different parts of the intestines, with each type specialized to sense a particular combination of nutrient-related hormones. Stretch-sensitive IGLEs also came in at least two different types, the researchers found, one mainly in the stomach and the other mainly in the intestine.

To learn how these different nerve types in the gut control appetite, Bai and her team used a technique called optogenetics, which involves genetically engineering specific groups of neurons in a way that allows them to be selectively stimulated by light -- in this case to test their ability to make hungry mice stop eating.

The researchers expected that stimulating the IGLE neurons that sense stomach stretch would make animals stop eating, and this is just what they found. But when they turned to stimulating the different types of hormone-sensing mucosal endings in the intestine that had been hypothesized to control appetite, they found that none of these were able to impact animals' feeding at all. Instead, to the researchers' surprise, they found that stimulating IGLE stretch receptors in the intestine proved much more powerful at eliminating the appetites of the hungry mice than even the stomach stretch receptors.

"This was quite unexpected, because the dogma in the field for decades has been that stomach stretch receptors sense the volume of food being eaten and the intestinal hormone receptors sense its energy content," Bai said.

These results raise important questions about how these stretch receptors are normally activated during feeding and how they might be manipulated to treat obesity. The findings also suggest a potential explanation for why bariatric surgery -- performed to treat extreme obesity by reducing the size of the gut -- is so mysteriously effective at promoting long-term reductions in appetite and weight.

Researchers have suspected for some time that one reason this surgery is so surprisingly effective at blocking hunger is that it causes food to pass very rapidly from the stomach into the intestine, but the mechanism has been unknown. The new findings suggest an answer: that rapidly incoming food stretches the intestine, thereby activating the vagal stretch sensors and powerfully blocking feeding.

"Identifying the mechanism by which bariatric surgery causes weight loss is one of the biggest unsolved problems in the study of metabolic disease, and so it is exciting that our work could suggest a fundamentally new mechanism for this procedure," Knight said. "At present, however, this idea is a hypothesis that still needs to be tested."

Findings Add to New Science of Hunger and Thirst

Knight, a member of the UCSF Weill Institute for Neurosciences and UCSF Kavli Institute for Fundamental Neuroscience, investigates how the brain senses the needs of the body and then generates specific behaviors to restore physiologic balance -- sometimes in surprising ways. In just the last few years, his lab has upended long-held textbook theories of hunger and thirst.

It had been thought, for example, that neurons in the brain motivate eating and drinking by reacting to the body's internal nutrient and water balance. But Knight's team, by precisely recording the activity of specific neurons in mice, found that hunger neurons turn off as soon as an animal sees or smells food, seeming to anticipate food intake. Similarly, thirst neurons turn off at the first taste of water, long before any change in the body's fluid balance. Knight's team has also identified warm-sensing neurons that control thermoregulation, including an animal's responses to heat. Most recently, his lab has turned its attention to the gut, exploring how nutrients, salt, and stretch in the stomach and intestine influence the neurons that control eating and drinking.

"We like to use unbiased approaches such as in vivo imaging to observe these systems as they naturally operate," Knight said. "This creates the opportunity for serendipity and allows us to discover the 'unknown unknowns' -- the things that we didn't know we should be looking for."

Credit: 
University of California - San Francisco

Researchers generate terahertz laser with laughing gas

Within the electromagnetic middle ground between microwaves and visible light lies terahertz radiation, and the promise of "T-ray vision."

Terahertz waves have frequencies higher than microwaves and lower than infrared and visible light. Where optical light is blocked by most materials, terahertz waves can pass straight through, similar to microwaves. If they were fashioned into lasers, terahertz waves might enable "T-ray vision," with the ability to see through clothing, book covers, and other thin materials. Such technology could produce crisp, higher-resolution images than microwaves, and be far safer than X-rays.
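The spectral position described above can be made concrete by converting frequency to wavelength with the basic relation wavelength = c / frequency (an illustrative sketch; the sample frequencies are common conventions, not values from the article):

```python
# Rough orientation: where the terahertz band sits in the spectrum.
# Free-space wavelength (m) = speed of light / frequency.
C = 3.0e8  # speed of light, m/s (rounded)

def wavelength_mm(freq_hz):
    """Return the free-space wavelength in millimetres."""
    return C / freq_hz * 1e3

# Terahertz waves (around 0.1-10 THz) fall between microwaves and infrared:
for label, freq in [("microwave (30 GHz)", 30e9),
                    ("terahertz (1 THz)", 1e12),
                    ("near-infrared (300 THz)", 300e12)]:
    print(f"{label}: {wavelength_mm(freq):.4f} mm")
```

At 1 THz the wavelength is about 0.3 mm, which is why terahertz imaging can resolve much finer detail than microwaves.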

The reason we don't see T-ray machines in, for instance, airport security lines and medical imaging facilities is that producing terahertz radiation requires very large, bulky setups or devices that produce terahertz radiation at a single frequency -- not very useful, given that a wide range of frequencies is required to penetrate various materials.

Now researchers from MIT, Harvard University, and the U.S. Army have built a compact device, the size of a shoebox, that produces a terahertz laser whose frequency they can tune over a wide range. The device is built from commercial, off-the-shelf parts and is designed to generate terahertz waves by spinning up the energy of molecules in nitrous oxide, or, as it's more commonly known, laughing gas.

Steven Johnson, professor of mathematics at MIT, says that in addition to T-ray vision, terahertz waves can be used as a form of wireless communication, carrying information at a higher bandwidth than radar, for instance, and doing so across distances that scientists can now tune using the group's device.

"By tuning the terahertz frequency, you can choose how far the waves can travel through air before they are absorbed, from meters to kilometers, which gives precise control over who can 'hear' your terahertz communications or 'see' your terahertz radar," Johnson says. "Much like changing the dial on your radio, the ability to easily tune a terahertz source is crucial to opening up new applications in wireless communications, radar, and spectroscopy."

Johnson and his colleagues have published their results in the journal Science. Co-authors include MIT postdoc Fan Wang, along with Paul Chevalier, Arman Amirzhan, Marco Piccardo, and Federico Capasso of Harvard University, and Henry Everitt of the U.S. Army Combat Capabilities Development Command Aviation and Missile Center.

Molecular breathing room

Since the 1970s, scientists have experimented with generating terahertz waves using molecular gas lasers -- setups in which a high-powered infrared laser is shot into a large tube filled with gas (typically methyl fluoride) whose molecules react by vibrating and eventually rotating. The rotating molecules can jump from one energy level to the next, the difference of which is emitted as a sort of leftover energy, in the form of a photon in the terahertz range. As more photons build up in the cavity, they produce a terahertz laser.

Improving the design of these gas lasers has been hampered by unreliable theoretical models, the researchers say. In small cavities at high gas pressures, the models predicted that, beyond a certain pressure, the molecules would be too "cramped" to spin and emit terahertz waves. Partly for this reason, terahertz gas lasers typically used meters-long cavities and large infrared lasers.

However, in the 1980s, Everitt found that he was able to produce terahertz waves in his laboratory using a gas laser that was much smaller than traditional devices, at pressures far higher than the models said was possible. This discrepancy was never fully explained, and work on terahertz gas lasers fell by the wayside in favor of other approaches.

A few years ago, Everitt mentioned this theoretical mystery to Johnson when the two were collaborating on other work as part of MIT's Institute for Soldier Nanotechnologies. Together with Everitt, Johnson and Wang took up the challenge, and ultimately formulated a new mathematical theory to describe the behavior of a gas in a molecular gas laser cavity. The theory also successfully explained how terahertz waves could be emitted, even from very small, high-pressure cavities.

Johnson says that while gas molecules can vibrate at multiple frequencies and rotational rates in response to an infrared pump, previous theories discounted many of these vibrational states and assumed instead that a handful of vibrations were what ultimately mattered in producing a terahertz wave. If a cavity were too small, previous theories suggested that molecules vibrating in response to an incoming infrared laser would collide more often with each other, releasing their energy rather than building it up further to spin and produce terahertz.

Instead, the new model tracked thousands of relevant vibrational and rotational states among millions of groups of molecules within a single cavity, using new computational tricks to make such a large problem tractable on a laptop computer. It then analyzed how those molecules would react to incoming infrared light, depending on their position and direction within the cavity.

"We found that when you include all these other vibrational states that people had been throwing out, they give you a buffer," Johnson says. "In simpler models, the molecules are rotating, but when they bang into other molecules they lose everything. Once you include all these other states, that doesn't happen anymore. These collisions can transfer energy to other vibrational states, and sort of give you more breathing room to keep rotating and keep making terahertz waves."

Laughing, dialed up

Once the team found that their new model accurately predicted what Everitt observed decades ago, they collaborated with Capasso's group at Harvard to design a new type of compact terahertz generator by combining the model with new gases and a new type of infrared laser.

For the infrared source, the researchers used a quantum cascade laser, or QCL -- a more recent type of laser that is compact and also tunable.

"You can turn a dial, and it changes the frequency of the input laser, and the hope was that we could use that to change the frequency of the terahertz coming out," Johnson says.

The researchers teamed up with Capasso, a pioneer in the development of QCLs, who provided a laser that produced a range of power that their theory predicted would work with a cavity the size of a pen (about 1/1,000 the size of a conventional cavity). The researchers then looked for a gas to spin up.

The team searched through libraries of gases to identify those that were known to rotate in a certain way in response to infrared light, eventually landing on nitrous oxide, or laughing gas, as an ideal and accessible candidate for their experiment.

They ordered laboratory-grade nitrous oxide, which they pumped into a pen-sized cavity. When they sent infrared light from the QCL into the cavity, they found they could produce a terahertz laser. As they tuned the QCL, the frequency of terahertz waves also shifted, across a wide range.

"These demonstrations confirm the universal concept of a terahertz molecular laser source which can be broadly tunable across its entire rotational states when pumped by a continuously tunable QCL," Wang says.

Since these initial experiments, the researchers have extended their mathematical model to include a variety of other gas molecules, such as carbon monoxide and ammonia, providing scientists with a menu of different terahertz generation options with different frequencies and tuning ranges, paired with a QCL matched to each gas. The group's theoretical tools also enable scientists to tailor the cavity design to different applications. They are now pushing toward more focused beams and higher powers, with commercial development on the horizon.

Johnson says scientists can refer to the group's mathematical model to design new, compact and tunable terahertz lasers, using other gases and experimental parameters.

"These gas lasers were for a long time seen as old technology, and people assumed these were huge, low-power, nontunable things, so they looked to other terahertz sources," Johnson says. "Now we're saying they can be small, tunable, and much more efficient. You could fit this in your backpack, or in your vehicle for wireless communication or high-resolution imaging. Because you don't want a cyclotron in your car."

Credit: 
Massachusetts Institute of Technology

NASA sending solar power generator developed at Ben-Gurion U to space station

image: A new solar power generator prototype developed by Ben-Gurion University of the Negev (BGU) and research teams in the United States will be deployed on the first 2020 NASA flight launch to the International Space Station.

The first prototype pictured here holds 90 miniaturized solar concentrators, and shows 12 solar cells with the concentrators in black.

Image: 
Ben-Gurion U./Jeffrey Gordon

BEER-SHEVA, ISRAEL...November 14, 2019 - A new solar power generator prototype developed by Ben-Gurion University of the Negev (BGU) and research teams in the United States will be deployed on the first 2020 NASA flight launch to the International Space Station.

According to research published in Optics Express, the compact microconcentrator photovoltaic system could provide unprecedented watts per kilogram of power, which is critical to lowering costs for private space flight.

As the total costs of a launch are decreasing, solar power systems now represent a larger fraction than ever of total system cost. Optical concentration can improve the efficiency and reduce photovoltaic power costs, but has traditionally been too bulky, massive and unreliable for space use.

Together with U.S. colleagues, Prof. (Emer.) Jeffrey Gordon of the BGU Alexandre Yersin Department of Solar Energy and Environmental Physics, Jacob Blaustein Institutes for Desert Research, developed this first-generation prototype (1.7 mm wide) that is slightly thicker than a sheet of paper (0.10 mm) and slightly larger than a U.S. quarter.

"These results lay the groundwork for future space microconcentrator photovoltaic systems and establish a realistic path to exceed 350 W/kg specific power at more than 33% power conversion efficiency by scaling down to even smaller microcells," the researchers say. "These could serve as a drop-in replacement for existing space solar cells at a substantially lower cost."

A second generation of more efficient solar cells now being fabricated at the U.S. Naval Research Labs is only 0.17 mm per side, 1.0 mm thick and will increase specific power even further. If successful, future arrays will be planned for private space initiatives, as well as space agencies pursuing new missions that require high power for electric propulsion and deep space missions, including to Jupiter and Saturn.

Credit: 
American Associates, Ben-Gurion University of the Negev

Space radar suggests North Korea set off a nuke equivalent to '17 Hiroshimas'

image: Satellites such as Sentinel-1 and ALOS-2 carry advanced synthetic aperture radars that can provide data to map changing land cover, ground deformation, ice shelves and glaciers, and can be used to help emergency response when disasters such as floods strike, and to support humanitarian relief efforts at times of crisis.

Image: 
ESA / ATG medialab

North Korea withdrew from the Treaty on the Non-Proliferation of Nuclear Weapons in 2003. It subsequently developed nuclear weapons, with five underground nuclear tests culminating in a suspected thermonuclear explosion (a hydrogen bomb) on 3 September 2017. Now a team of scientists, led by Dr K. M. Sreejith of the Space Applications Centre, Indian Space Research Organisation (ISRO), have used satellite data to augment measurements of tests on the ground. The researchers find that the most recent test shifted the ground by a few metres, and estimate it to be equivalent to 17 times the size of the bomb dropped on Hiroshima in 1945. The new work appears in a paper in Geophysical Journal International, a publication of the Royal Astronomical Society.

Conventional detection of nuclear tests relies on seismic measurements using the networks deployed to monitor earthquakes. But there are no openly available seismic data from stations near this particular test site, meaning that there are big uncertainties in pinpointing the location and size of nuclear explosions taking place there.

Dr Sreejith and his team turned to space for a solution. Using data from the ALOS-2 satellite and a technique called Synthetic Aperture Radar Interferometry (InSAR), the scientists measured the changes on the surface above the test chamber resulting from the September 2017 explosion, sited at Mount Mantap in the northeast of North Korea. InSAR uses multiple radar images to create maps of deformation over time, and allows direct study of the sub-surface processes from space.
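The core relation behind such InSAR deformation maps converts interferometric phase to line-of-sight ground displacement. A minimal sketch, assuming the commonly cited ALOS-2 L-band wavelength of roughly 23.6 cm (a figure from general references, not from this study):

```python
import math

# Sketch of the basic InSAR relation: an unwrapped interferometric
# phase difference dphi maps to line-of-sight displacement via
#   d = (lambda / (4 * pi)) * dphi
# The factor 4*pi (not 2*pi) accounts for the two-way radar path.
WAVELENGTH_M = 0.236  # approx. ALOS-2 L-band radar wavelength (assumption)

def los_displacement(phase_rad):
    """Line-of-sight displacement (m) for a given unwrapped phase (rad)."""
    return WAVELENGTH_M / (4 * math.pi) * phase_rad

# One full interferometric fringe (2*pi) corresponds to half a wavelength:
print(f"{los_displacement(2 * math.pi):.3f} m of motion per fringe")
```

Metre-scale displacements like those at Mount Mantap therefore span many fringes, which is what makes them clearly measurable from orbit.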

The new data suggest that the explosion was powerful enough to shift the surface of the mountain above the detonation point by a few metres, and the flank of the peak moved by up to half a metre. Analysing the InSAR readings in detail reveals that the explosion took place about 540 metres below the summit, about 2.5 kilometres north of the entrance of the tunnel used to access the test chamber.

Based on the deformation of the ground, the ISRO team estimate that the explosion created a cavity with a radius of 66 metres, and that it had a yield of between 245 and 271 kilotonnes, compared with the 15 kilotonnes of the 'Little Boy' bomb used in the attack on Hiroshima in 1945.
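The headline "17 Hiroshimas" figure follows directly from these numbers; a quick check:

```python
# Back-of-the-envelope check of the "17 Hiroshimas" figure from the
# yield range quoted in the article.
LITTLE_BOY_KT = 15.0                  # Hiroshima bomb yield, kilotonnes
yield_low, yield_high = 245.0, 271.0  # estimated test yield range, kilotonnes

ratio_low = yield_low / LITTLE_BOY_KT
ratio_high = yield_high / LITTLE_BOY_KT
print(f"{ratio_low:.1f} to {ratio_high:.1f} Hiroshima equivalents")
```

The range works out to roughly 16 to 18 times Little Boy, hence "about 17 Hiroshimas".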

Lead author of the study, Dr Sreejith, commented, "Satellite based radars are very powerful tools to gauge changes in earth surface, and allow us to estimate the location and yield of underground nuclear tests. In conventional seismology by contrast, the estimations are indirect and depend on the availability of seismic monitoring stations."

The present study demonstrates the value of space-borne InSAR data for measuring the characteristics of underground nuclear tests, with greater precision than conventional seismic methods. At the moment, though, nuclear explosions are rarely monitored from space due to a lack of suitable data. The team argue that currently operating satellites such as Sentinel-1 and ALOS-2, along with the NASA-ISRO Synthetic Aperture Radar (NISAR) mission, due to launch in 2022, could be used for this purpose.

Credit: 
Royal Astronomical Society

Human-machine interactions: Bots are more successful if they impersonate humans

An international research team including Iyad Rahwan, Director of the Center for Humans and Machines at the Max Planck Institute for Human Development in Berlin, sought to find out whether cooperation between humans and machines is different if the machine purports to be human. They carried out an experiment in which humans interacted with bots. In the study now published in Nature Machine Intelligence, the scientists show that bots are more successful than humans in certain human-machine interactions -- but only if they are allowed to hide their non-human identity.

The artificial voices of Siri, Alexa, or Google, and their often awkward responses, leave no room for doubt that we are not talking to a real person. The latest technological breakthroughs that combine artificial intelligence with deceptively realistic human voices now make it possible for bots to pass themselves off as humans. This has led to new ethical issues: Is bots' impersonation of humans a case of deception? Should transparency be obligatory?

Previous research has shown that humans prefer not to cooperate with intelligent bots. But if people do not even notice that they are interacting with a machine and cooperation between the two is therefore more successful, would it not make sense to maintain the deception in some cases?

In the study published in Nature Machine Intelligence, a research team from the United Arab Emirates, USA, and Germany involving Iyad Rahwan, Director of the Center for Humans and Machines at the Max Planck Institute for Human Development, asked almost 700 participants in an online cooperation game to interact with a human or an artificial partner. In the game, known as the prisoner's dilemma, players can either act selfishly to exploit the other player, or act cooperatively, to the advantage of both sides.
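The structure of the game can be sketched with a payoff table. The numbers below are the classic textbook payoffs, assumed purely for illustration; the study's actual stakes are not given here:

```python
# A minimal prisoner's dilemma round. Payoffs are the classic
# textbook values (an assumption for illustration, not the study's).
PAYOFFS = {  # (my move, partner's move) -> (my payoff, partner's payoff)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def play(move_a, move_b):
    """Return the payoff pair for one round."""
    return PAYOFFS[(move_a, move_b)]

# Mutual cooperation beats mutual defection, yet each player is
# individually tempted to defect -- that tension is the dilemma.
print(play("cooperate", "cooperate"))  # (3, 3)
print(play("defect", "cooperate"))     # (5, 0)
```

Whether a participant believed the partner was human or bot changed how willing they were to take the cooperative risk, which is what the experiment measured.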

The crucial aspect of the experiment was that the researchers gave some participants false information about their gaming partner's identity. Some participants interacting with a person were told they were playing with a bot, and vice versa. This allowed the researchers to examine whether humans are prejudiced against gaming partners they take for bots and whether it makes a difference to bots' efficiency if they admit that they are bots, or not.

The findings showed that bots impersonating humans were more successful in convincing their gaming partners to cooperate. As soon as they divulged their true identity, however, cooperation rates decreased. Translating this to a more realistic scenario could mean that help desks run by bots, for example, may be able to provide assistance more rapidly and efficiently if they are allowed to masquerade as humans. The researchers say that society will have to negotiate the distinctions between the cases of human-machine interaction that require transparency and those where efficiency is key.

Credit: 
Max-Planck-Gesellschaft

Improved fitness can mean living longer without dementia

"It is important to say that it is never too late to begin exercising. The average participant in our study was around 60 years old at baseline, and improvement in cardiorespiratory fitness was strongly linked to lower dementia risk. Those who had poor fitness in the 1980s but improved it within the next decade could expect to live two years longer without dementia," says Atefe Tari of the Cardiac Exercise Research Group (CERG) at the Norwegian University of Science and Technology (NTNU).

Tari is lead author of a new study that was recently published in Lancet Public Health, a highly ranked journal in the prestigious Lancet family.

"Persistently low fitness is an independent risk factor for dementia and death due to dementia," the authors concluded.

The higher, the better

Dementia involves a progressive decline in cognitive functions, severe enough to interfere with the ability to function independently. Alzheimer's disease is the most common form of dementia.

By 2050, an estimated 150 million people worldwide will have dementia - roughly triple the number affected today. There is no cure. Men live on average five years after being diagnosed with dementia, while women live on average seven years after diagnosis.

"As there is currently no effective drug for dementia, it is important to focus on prevention. Exercise that improves fitness appears to be one of the best medicines to prevent dementia," says Tari.

Tari's study is far from the first to show a link between good fitness and lower risk of getting dementia. What is unique, however, is that Tari and her research colleagues measured the fitness level of participants twice, ten years apart.

Thus, they have been able to evaluate how changes in fitness over time are related to dementia risk. And the results were clear.


"If you increase your cardiorespiratory fitness from poor to good, you almost halve the risk of getting dementia. You also reduce the risk of dying from or with dementia. In our study, each increase of 1 MET was associated with a 16% lower risk of getting dementia and a 10% lower risk of dementia-related death. This is an improvement that is very achievable for most people," says Tari.

A MET is a measurement unit used by researchers to quantify the rate at which a person expends energy relative to their body weight.
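As a rough illustration (our own sketch, not a calculation from the study), the per-MET associations reported above can be compounded multiplicatively to see what larger fitness gains would imply, assuming the effect scales that way:

```python
# Hypothetical illustration: compounding the per-MET hazard ratios quoted
# in the article. The 0.84 and 0.90 figures come from the press release;
# the multiplicative-compounding assumption is ours, not the study's.

HR_DEMENTIA_PER_MET = 0.84        # 16% lower dementia risk per 1-MET gain
HR_DEMENTIA_DEATH_PER_MET = 0.90  # 10% lower dementia-related death risk

def relative_risk(delta_met: float, hr_per_met: float) -> float:
    """Relative risk after improving fitness by `delta_met` METs,
    assuming the per-MET hazard ratio applies multiplicatively."""
    return hr_per_met ** delta_met

if __name__ == "__main__":
    for delta in (1, 2, 3):
        rr = relative_risk(delta, HR_DEMENTIA_PER_MET)
        print(f"+{delta} MET: dementia risk reduced by {1 - rr:.0%}")
```

Under this assumption, a one-MET gain reproduces the 16% reduction quoted in the study, and larger gains compound accordingly.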

Followed for 30 years

Between 1984 and 1986, almost 75,000 Norwegians participated in the first wave of the HUNT Survey (HUNT1). Eleven years later, HUNT2 was organized, and 33,000 of the same people participated. More than 30,000 of them answered enough questions to be included in Tari's analyses.

The researchers calculated cardiorespiratory fitness with a formula previously developed and validated by the Cardiac Exercise Research Group, called the Fitness Calculator.

Previous studies have shown that those who score poorly on this calculator have an increased risk of heart attack, atrial fibrillation, depression and non-alcoholic fatty liver disease, and also that they generally die younger than people who achieve a higher fitness level.

The new study links results from the Fitness Calculator to the risk of dementia and dementia-related deaths up to 30 years later. To investigate these associations, Tari has used data from two different databases, the Health and Memory Study in Nord-Trøndelag and the Norwegian Cause of Death Registry.

Almost half the risk

Between 1995 and 2011, 920 people with dementia were included in the Health and Memory Study in Nord-Trøndelag. A total of 320 of them had also participated in both HUNT1 and HUNT2 and provided enough information about their own health to be included in the analyses.

It turned out that poor cardiorespiratory fitness in both the 1980s and 1990s was significantly more common in this group than among otherwise comparable HUNT participants who had not been diagnosed with dementia.

In fact, the risk of developing dementia was 40% lower for those who were among the 80% with the best fitness in both the 1980s and 1990s. Furthermore, it was 48% lower if one had changed from poor to higher fitness levels between the two surveys.

All participants were followed until death or end of follow-up in the summer of 2016. Via the Norwegian Cause of Death Registry, the NTNU researchers found 814 women and men who had died from or with dementia during the period. This means that dementia was stated as the underlying, immediate or additional cause of death.

The risk was lowest for those who had good fitness at both HUNT surveys. However, also those who had changed from poor to better fitness over the years had a 28% reduced risk.

Cause or coincidence?

In observational studies, there will always be questions about cause-effect relationships. For example, one might ask what causes what: Is it bad fitness that weakens the brain, or do people with cognitive impairment find it more difficult to be physically active and increase their fitness?

"Our study made it easy to see which came first. We estimated the fitness of the participants for the first time in the 1980s, and looked for dementia cases and deaths from 1995 onwards. We have also done separate analyses where we excluded those who got dementia or died during the first few years of the follow-up period, and the results were the same," says Tari.

It's also reasonable to ask whether the association is coincidental: perhaps it is not poor fitness itself that increases the risk of dementia, but rather that people with poor fitness also tend to have several of the better-known risk factors for dementia, such as high blood pressure, a low level of education and a family history of brain diseases. Tari considers that an unlikely explanation.

"The HUNT studies give us very broad information about the health of the participants, including body composition, smoking habits, educational level, blood pressure, diabetes, cholesterol levels and family history of stroke. By adjusting the analyses for these factors, we have ruled out that they fully explain the relationship between fitness and dementia risk in our study," she says.

Physical activity vs. fitness

In other words, the study provides very good evidence that maintaining good fitness is also good for the brain. However, Tari points out that this does not necessarily mean that everyone who is regularly physically active is guaranteed a good effect on brain health.

"High-intensity exercise improves fitness faster than moderate exercise, and we recommend that everyone exercise at a high heart rate at least two days each week. Regular exercise that makes you sweaty and out of breath will ensure that your fitness is good for your age. Our study suggests that good fitness for your age can delay dementia by two years, and that you can also live two to three years longer after being diagnosed with dementia," she said.

Credit: 
Norwegian University of Science and Technology

Gallium-based solvating agent efficiently analyzes optically active alcohols

image: Schematic view of the in-situ direct 1H NMR chiral analysis.

Image: 
KAIST

A KAIST research team has developed a gallium-based metal complex enabling the rapid chiral analysis of alcohols. A team working under Professor Hyunwoo Kim reported the efficient new alcohol analysis method using nuclear magnetic resonance (NMR) spectroscopy in iScience.

Enantiopure chiral alcohols are ubiquitous in nature and widely utilized as pharmaceuticals. The importance of chirality in synthetic and medicinal chemistry has driven the search for rapid and facile methods to determine the enantiomeric purity of compounds. To date, chiral analysis has been performed using high-performance liquid chromatography (HPLC) with chiral columns.

Along with the HPLC technique, chiral analysis using NMR spectroscopy has gained tremendous attention as an alternative to traditionally employed chromatographic methods due to its simplicity and rapid detection for real-time measurement. However, this method carries drawbacks such as line-broadening, narrow substrate scope, and poor resolution. Thus, compared with popular methods of chromatographic analysis, NMR spectroscopy is infrequently used for chiral analysis.

In principle, a chiral solvating agent is additionally required for the NMR measurement of chiral alcohols to obtain two distinct signals. However, NMR analysis of chiral alcohols has been challenging due to weak binding interactions with chiral solvating agents. To overcome the intrinsic difficulty of relatively weak molecular interactions that are common for alcohols, many researchers have used multifunctional alcohols to enhance interactions with solvating agents.

Instead of using multifunctional analytes, the KAIST team varied the physical properties of the metal complexes to induce stronger interactions with alcohols, in the hope of developing a universal chiral solvating agent for alcohols. Moreover, unlike the current method of chiral analysis used in the pharmaceutical industry, the gallium complexes can directly analyze alcohols that do not possess chromophores.

Professor Kim said that this method could be a complementary chiral analysis technique at the industry level in the near future. He added that since the developed gallium complex can determine enantiomeric excess within minutes, it can be further utilized to monitor asymmetric synthesis. This feature will benefit a large number of researchers in the organic chemistry community, as well as the pharmaceutical industry.

Credit: 
The Korea Advanced Institute of Science and Technology (KAIST)

Bacteria in the gut may alter ageing process, finds NTU Singapore study

image: NTU Singapore Professor Sven Pettersson (centre) and his team of researchers found that microorganisms living in the gut may alter the ageing process

Image: 
NTU Singapore

An international research team led by Nanyang Technological University, Singapore (NTU Singapore) has found that microorganisms living in the gut may alter the ageing process, which could lead to the development of food-based treatment to slow it down.

All living organisms, including human beings, coexist with a myriad of microbial species living in and on them, and research conducted over the last 20 years has established their important role in nutrition, physiology, metabolism and behaviour.

Using mice, the team led by Professor Sven Pettersson from the NTU Lee Kong Chian School of Medicine transplanted gut microbes from old mice (24 months old) into young, germ-free mice (6 weeks old). After eight weeks, the young mice had increased intestinal growth and production of neurons in the brain, known as neurogenesis.

The team showed that the increased neurogenesis was due to an enrichment of gut microbes that produce a specific short chain fatty acid, called butyrate.

Butyrate is produced through microbial fermentation of dietary fibres in the lower intestinal tract and stimulates production of a pro-longevity hormone called FGF21, which plays an important role in regulating the body's energy and metabolism. As we age, butyrate production is reduced.

The researchers then showed that giving butyrate on its own to the young germ-free mice had the same adult neurogenesis effects.

The study was published in Science Translational Medicine yesterday (13 November), and was undertaken by researchers from Singapore, UK, and Australia.

"We've found that microbes collected from an old mouse have the capacity to support neural growth in a younger mouse," said Prof Pettersson. "This is a surprising and very interesting observation, especially since we can mimic the neuro-stimulatory effect by using butyrate alone."

"These results will lead us to explore whether butyrate might support repair and rebuilding in situations like stroke, spinal damage and to attenuate accelerated ageing and cognitive decline".

How gut microbes impact the digestive system

The team also explored the effects of gut microbe transplants from old to young mice on the functions of the digestive system.

With age, the viability of small intestinal cells is reduced, and this is associated with reduced mucus production, which makes intestinal cells more vulnerable to damage and cell death.

However, the addition of butyrate helps to better regulate the intestinal barrier function and reduce the risk of inflammation.

The team found that mice receiving microbes from the old donors showed increases in the length and width of the intestinal villi, the finger-like projections that line the wall of the small intestine. In addition, both the small intestine and the colon were longer in these mice than in young germ-free mice that received no transplant.

The discovery shows that gut microbes can compensate and support an ageing body through positive stimulation.

This points to a new potential method for tackling the negative effects of ageing by imitating the enrichment and activation of butyrate.

"We can conceive of future human studies where we would test the ability of food products with butyrate to support healthy ageing and adult neurogenesis," said Prof Pettersson.

"In Singapore, with its strong food culture, exploring the use of food to 'heal' ourselves, would be an intriguing next step, and the results could be important in Singapore's quest to support healthy ageing for their silver generation".

Group leader Dr Dario Riccardo Valenzano at the Max Planck Institute for Biology of Ageing in Germany, who was not involved in the study, said the discovery is a milestone in research on microbiome.

"These results are exciting and raise several new open questions for both biology of aging and microbiome research, including whether there is an active acquisition of butyrate-producing microbes during a mouse's life and whether extreme aging leads to a loss of this fundamental microbial community, which may eventually be responsible for dysbiosis and age-related dysfunctions," he added.

Professor Brian Kennedy, Director of the Centre for Healthy Ageing at the National University of Singapore, who provided an independent view, said, "It is intriguing that the microbiome of an aged animal can promote youthful phenotypes in a young recipient. This suggests that the microbiota with aging have been modified to compensate for the accumulating deficits of the host and leads to the question of whether the microbiome from a young animal would have greater or less effects on a young host. The findings move forward our understanding of the relationship between the microbiome and its host during ageing and set the stage for the development of microbiome-related interventions to promote healthy longevity."

The study builds on Prof Pettersson's earlier studies on how transplantation of gut microbes from healthy mice can restore muscle growth and function in germ-free mice with muscle atrophy, which is the loss of skeletal muscle mass.

Credit: 
Nanyang Technological University

Rubber in the environment

Everybody is talking about microplastics. But the amount of microplastics in air and water is small compared to another polymer that pollutes our air and water, and therefore our bodies: micro-rubber. These are the finest particles from tyre abrasion, which enter our soil and air via the road surface, or which are worn off artificial turf. Empa researchers have now calculated that over the last 30 years, from 1988 to 2018, around 200,000 tonnes of micro-rubber have accumulated in the Swiss environment. This is an impressive figure that has often been neglected in discussions about microplastics.

The cause: squeaking tires

Researchers led by Bernd Nowack from Empa's "Technology and Society" lab identified car and truck tyres as the main source of micro-rubber. "We quantified the abrasion of tyres, but also the wear of artificial green surfaces such as artificial turf," says Nowack. Turf, however, plays only a subordinate role: just three percent of the rubber particles emitted come from rubber granulate on artificial green surfaces, while tyre abrasion is responsible for the remaining 97 percent. Of the particles released into the environment, almost three-quarters remain within the first five metres to the left and right of the road, 5% end up in the remaining soils, and almost 20% in water bodies. The team based its calculations on data on the import and export of tyres and then modelled the behaviour of rubber on roads and in road wastewater. Since the year 2000, the guidelines for water treatment and the prevention of soil pollution have been significantly tightened. Through measures such as the construction of road wastewater treatment plants (SABA), part of the micro-rubber can now be removed from the water.
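Combining the 200,000-tonne total with the shares quoted above gives a rough mass balance (our back-of-envelope illustration, not Empa's own calculation; the shares are approximate and do not sum exactly to 100%):

```python
# Back-of-envelope split of Switzerland's ~200,000 t of micro-rubber
# (1988-2018) across environmental compartments, using the approximate
# shares quoted in the article. The tonnages are our illustration.

TOTAL_TONNES = 200_000
shares = {
    "roadside soil (first 5 m)": 0.74,  # "almost three-quarters"
    "remaining soils": 0.05,
    "water bodies": 0.20,               # "almost 20%"
}

for compartment, share in shares.items():
    print(f"{compartment}: ~{TOTAL_TONNES * share:,.0f} t")
```

The shares sum to 99% because the article rounds them ("almost three-quarters", "almost 20%").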

Low impact on humans

Some of the micro-rubber is first transported by air, deposited within the first five metres to the left and right of the road, and partly whirled up again. However, Christoph Hüglin from Empa's "Air Pollution / Environmental Technology" lab estimates the impact on humans to be low, as a study from 2009 shows. "Even at locations close to traffic, the proportion of tyre abrasion in inhaled fine dust is in the low single-digit percentage range," says Hüglin.

The researchers emphasize, however, that microplastics and micro-rubber are not the same. "These are different particles that can hardly be compared with each other," says Nowack. And there are also huge differences in quantity: according to Nowack's calculations, only 7% of the polymer-based microparticles released into the environment are plastics, while 93% come from tyre abrasion. "The amount of micro-rubber in the environment is huge and therefore highly relevant," says Nowack.

Credit: 
Swiss Federal Laboratories for Materials Science and Technology (EMPA)

New material breaks world record turning heat into electricity

image: This is professor Ernst Bauer in his lab.

Image: 
TU Wien

Thermoelectric materials can convert heat into electrical energy. This is due to the so-called Seebeck effect: If there is a temperature difference between the two ends of such a material, electrical voltage can be generated and current can start to flow. The amount of electrical energy that can be generated at a given temperature difference is measured by the so-called ZT value: The higher the ZT value of a material, the better its thermoelectric properties.
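The ZT value mentioned here is the standard dimensionless thermoelectric figure of merit, which in its textbook form (a general definition, not quoted from the TU Wien paper) reads:

```latex
ZT = \frac{S^{2}\,\sigma}{\kappa}\, T
```

where $S$ is the Seebeck coefficient, $\sigma$ the electrical conductivity, $\kappa$ the thermal conductivity and $T$ the absolute temperature. A high ZT therefore requires conducting electricity well while conducting heat poorly.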

The best thermoelectrics to date were measured at ZT values of around 2.5 to 2.8. Scientists at TU Wien (Vienna) have now succeeded in developing a completely new material with a ZT value of 5 to 6. It is a thin layer of iron, vanadium, tungsten and aluminium applied to a silicon crystal.

The new material is so effective that it could be used to provide energy for sensors or even small computer processors. Instead of connecting small electrical devices to cables, they could generate their own electricity from temperature differences. The new material has now been presented in the journal Nature.

Electricity and Temperature

"A good thermoelectric material must show a strong Seebeck effect, and it has to meet two important requirements that are difficult to reconcile," says Prof. Ernst Bauer from the Institute of Solid State Physics at TU Wien. "On the one hand, it should conduct electricity as well as possible; on the other hand, it should transport heat as poorly as possible. This is a challenge because electrical conductivity and thermal conductivity are usually closely related."

At the Christian Doppler Laboratory for Thermoelectricity, which Ernst Bauer established at TU Wien in 2013, different thermoelectric materials for different applications have been studied over the last few years. This research has now led to the discovery of a particularly remarkable material - a combination of iron, vanadium, tungsten and aluminium.

"The atoms in this material are usually arranged in a strictly regular pattern in a so-called face-centered cubic lattice," says Ernst Bauer. "The distance between two iron atoms is always the same, and the same is true for the other types of atoms. The whole crystal is therefore completely regular".

However, when a thin layer of the material is applied to silicon, something amazing happens: the structure changes radically. Although the atoms still form a cubic pattern, they are now arranged in a body-centered structure, and the distribution of the different types of atoms becomes completely random. "Two iron atoms may sit next to each other, the places next to them may be occupied by vanadium or aluminum, and there is no longer any rule that dictates where the next iron atom is to be found in the crystal," explains Bauer.

This mixture of regularity and irregularity of the atomic arrangement also changes the electronic structure, which determines how electrons move in the solid. "The electrical charge moves through the material in a special way, so that it is protected from scattering processes. The portions of charge travelling through the material are referred to as Weyl Fermions," says Ernst Bauer. In this way, a very low electrical resistance is achieved.

Lattice vibrations, on the other hand, which transport heat from places of high temperature to places of low temperature, are inhibited by the irregularities in the crystal structure. Therefore, thermal conductivity decreases. This is important if electrical energy is to be generated permanently from a temperature difference - because if temperature differences could equilibrate very quickly and the entire material would soon have the same temperature everywhere, the thermoelectric effect would come to a standstill.

Electricity for the Internet of Things

"Of course, such a thin layer cannot generate a particularly large amount of energy, but it has the advantage of being extremely compact and adaptable," says Ernst Bauer. "We want to use it to provide energy for sensors and small electronic applications." The demand for such small-scale generators is growing quickly: In the "Internet of Things", more and more devices are linked together online so that they automatically coordinate their behavior with each other. This is particularly promising for future production plants, where one machine has to react dynamically to another.

"If you need a large number of sensors in a factory, you can't wire all of them together. It's much smarter for the sensors to be able to generate their own power using a small thermoelectric device," says Bauer.

Credit: 
Vienna University of Technology

Typhoons and marine eutrophication are probably the missing source of organic nitrogen in ecosystems

image: Typhoons and marine eutrophication are probably the missing source of organic nitrogen in ecosystems.

Image: 
Ming Chang

Atmospheric nitrogen deposition has a significant impact on both terrestrial and aquatic ecosystems, and alterations in its level will significantly affect the productivity and stability of an ecosystem. In recent years, with the reduction of anthropogenic inorganic nitrogen emissions, interest in organic nitrogen (ON) has increased because it represents a large fraction of total nitrogen. Given this large amount of ON deposition, researchers are interested in identifying its sources. However, scientists find large gaps between ON deposition and emission, and therefore suspect that there are missing sources of ON.

In a paper recently published in Atmospheric and Oceanic Science Letters, Dr. Ming Chang from the Institute for Environmental and Climate Research, Jinan University, and his coauthors, try to address this concern based on their recently completed preliminary work on the deposition-emission relationship.

"We classified observed flux data of dissolved ON in terms of the attributes of the wet deposition event itself, such as the season, precipitation, air mass backward trajectory, and effect of typhoons. The reverse trajectories of each air mass responsible for high ON flux precipitation events were tracked and superimposed with chlorophyll-a concentration maps in the ocean," says Dr. Chang.

According to this study, approximately one third of the total wet deposition of ON was found to derive from a confluence of three factors: rain in the wet season, air masses from the ocean, and rainfall over 50 mm. It was also found that the co-occurrence of intense events, such as typhoons and eutrophic surface seawater, might be an important source of dissolved ON in wet deposition.

"However, further quantitative and targeted research is needed to confirm the validity of these possibilities," adds Dr. Chang.

Credit: 
Institute of Atmospheric Physics, Chinese Academy of Sciences

Inflammatory bowel disease appears to impact risk of Parkinson's disease

Amsterdam, NL, November 14, 2019 - Relatively new research findings indicating that the earliest stages of Parkinson's disease (PD) may occur in the gut have been gaining traction in recent years. In a review published in the Journal of Parkinson's Disease, Tomasz Brudek, PhD, evaluates evidence for the association between inflammatory bowel disease (IBD) and PD and proposes directions for future research.

"Parkinsonism is probably not just a brain disorder, but a group of diseases that may have their onset in the periphery, particularly in the gastrointestinal tract," explained Dr. Brudek, of the Research Laboratory for Stereology and Neuroscience, and Copenhagen Center for Translational Research, Copenhagen University Hospital, Bispebjerg, and Frederiksberg Hospital, Copenhagen, Denmark. "Taken together, all data, including human, animal, and microbiome studies, suggest quite strongly that individuals with an increased tendency for peripheral inflammation have a higher risk to acquire PD. Given the potentially critical role of gut pathology in the pathogenesis of PD, there is reason to suspect that IBD may impact PD risk."

This review explores and discusses the latest knowledge about links between IBD and PD and presents evidence from animal studies that peripheral immune system alterations may play a role in PD, which has the potential for new therapeutic strategies. It shows how our understanding and appreciation of the importance of the so-called gut-brain axis, the connection between gut and the brain in PD, has grown rapidly in recent years. It also provides important new insights into ways in which the immune system and inflammation can play a role in PD.

The inflammatory processes that occur in some patients with PD have naturally led to discussion of an association between IBD and PD since the two share some basic characteristics. IBD is currently considered an inappropriate immune response to the microbiota in the intestines, characterized by chronic pro-inflammatory immune activity, a trait now also suggested to be a fundamental element of neurodegenerative disorders. Highlighting the relevance of the immune system, large genome-wide association studies (GWAS) and pathway analyses based on 138,511 individuals of European ancestry identified 17 shared loci between PD and seven autoimmune diseases including celiac disease, rheumatoid arthritis, type 1 diabetes, multiple sclerosis, psoriasis, ulcerative colitis and Crohn's disease.

Many epidemiological and genetic studies have found that there seems to be an increased risk of developing PD among people with IBD. The association between IBD and PD may simply be that IBD is just one type of intestinal inflammation, so it is not IBD specifically that increases the PD risk but perhaps intestinal or peripheral inflammation in a broader sense.

"Inflammation of the gut is only one of many changes observed in the gut and its associated neural structures in PD patients. Thus, IBD might be just one of many sources of intestinal inflammation," said Dr. Brudek, who is quick to point out that, "while IBD patients are more likely to get PD, the risk is still very small. For a given IBD patient, the probability of not getting the diagnosis is 95%-97%."

Looking forward, Dr. Brudek noted that:

Identification of risk factors associated with prodromal phases of PD may allow for early intervention studies that could modify or slow down disease progress.

More biomarker and observational studies are needed to identify patients at risk of developing PD, in order to start early therapy interventions.

Future pharmacological therapies aiming at slowing or stopping PD progression should not only target patients well into the course of the disease, but also be administered to patients in the very early phases of the disease or at risk for developing PD.

Clinicians should be aware of early Parkinsonian symptoms in IBD patients but also in patients with chronic inflammatory disorders.

"We should direct our focus on the immune system in all Parkinsonian disorders, and further investigate the role of systemic inflammation and the immune system as such in these neurological diseases. A clear knowledge of the mechanisms implicated in gut/immune/nervous communication could help improve the prognostic and therapeutic tools leading to better quality of life for patients, reducing the exacerbation of PD symptoms, and delaying the progression of the disease," he concluded.

PD is a slowly progressive disorder that affects movement, muscle control and balance. It is the second most common age-related neurodegenerative disorder, affecting about 3% of the population by the age of 65 and up to 5% of individuals over 85 years of age. During the 20th century, PD was thought to be primarily a brain disorder; however, research has shown that it may actually begin in the enteric nervous system, the part of the autonomic nervous system that controls the gastrointestinal organs.

Credit: 
IOS Press

Bisphenol-a structural analogues may be less likely than BPA to disrupt heart rhythm

Philadelphia, Pa. - (November 16) - Some chemical alternatives to the plastic component bisphenol-A (BPA), which is still commonly used in medical settings such as operating rooms and intensive care units, may be less disruptive to the heart's electrical function than BPA, according to a pre-clinical study that explored how the structural analogues bisphenol-S (BPS) and bisphenol-F (BPF) interact with the chemical and electrical functions of heart cells.

The findings suggest that, in terms of toxicity for heart function, these chemicals that are similar in structure to BPA may actually be safer for medically fragile heart cells, such as those in children with congenital heart disease. Previous research suggests that BPA exposure may impact the heart's electrical conductivity and disrupt heart rhythm, and patients are often exposed to the plastic via clinical equipment found in intensive care units and operating rooms.

"There are still many questions that need to be answered about the safety and efficacy of using chemicals that look and act like BPA in medical settings, especially in terms of their potential contribution to endocrine disruption," says Nikki Gillum Posnack, Ph.D., the poster's senior author and a principal investigator in the Sheikh Zayed Institute for Pediatric Surgical Innovation at Children's National Hospital. "What we can say is that, in this initial pre-clinical investigation, it appears that these structural analogues have less of an impact on the electrical activity within the heart and therefore, may be less likely to contribute to dysrhythmias."

Future studies will seek to quantify the risk that these alternative chemicals pose in vulnerable populations, including pediatric cardiology and cardiac surgery patients. Since pediatric patients' hearts are still growing and developing, the interactions may be different than what was seen in this pilot study.

Credit: 
Children's National Hospital