
Hormone alters male brain networks to enhance sexual and emotional function

image: This image shows the Default Mode Network (red/yellow) and the Salience Network (blue/green), which have important roles in social and emotional function. These two brain networks were altered when the volunteers received the hormone kisspeptin, and this was associated with changes in brain activity linked to sexual aversion and sexual arousal.

Image: 
Imperial College London

Scientists have gained new insights into how the 'master regulator' of reproduction affects men's brains.

In a new study, scientists from Imperial College London investigated how a recently discovered hormone called kisspeptin alters brain activity in healthy volunteers.

The hormone, known as the master regulator of reproduction, not only has a crucial role in sperm and egg production, but may also boost reproductive behaviours.

In the new research, the scientists investigated how the hormone affects the brain when it is 'at rest'. So-called resting brain activity is the state our brain enters when not concentrating on a task, and is akin to a car ticking over in neutral. Studying this 'neutral', resting state is crucial for understanding what happens when the brain is active and the car accelerates. Furthermore, studying the resting brain allows scientists to examine large brain networks known to be abnormal in various psychological disorders, and to see whether certain hormones or drugs can affect these networks.

In the study, published in the Journal of Clinical Investigation Insight, the hormone was shown to change activity in key brain networks at rest, which was linked to decreased sexual aversion and increased brain activity associated with sexual arousal. The scientists also observed that the hormone boosted several networks in the brain involved in mood and depression.

Professor Waljit Dhillo, an NIHR Research Professor and senior author of the study from Imperial's Department of Medicine, said: 'Although we have previously investigated how this hormone affects the brain when it is in an active state, this is the first time we've demonstrated it also affects the brain in its baseline, resting state. These insights suggest the hormone could one day be used to treat conditions such as low sex drive or depression'.

Dr Alexander Comninos, first author of the study and honorary senior lecturer at Imperial, said: "Our findings help unravel the many and complex roles of the naturally-occurring hormone kisspeptin, and how it orchestrates reproductive hormones as well as sexual and emotional function. Psychosexual problems, such as low sex drive, affect up to one in three people, and can have a devastating effect on a person's, and a couple's, wellbeing. These findings open avenues for kisspeptin as a future treatment for these problems, although there is a lot of work still to be done."

In the new study, funded by the National Institute for Health Research and the Medical Research Council, the researchers gave 29 healthy men an infusion of kisspeptin while assessing brain activity in an MRI scanner. Once in the MRI scanner, the volunteers were shown a number of themed images - sexual images (such as pornography), negative images (such as a car crash), and neutral images (such as a cup). The researchers monitored the volunteers' brain activity while they looked at the images, as well as measuring their resting brain activity.

During the experiments, conducted in collaboration with the NIHR Imperial Clinical Research Facility and the Imanova Centre for Imaging Sciences, the volunteers were also asked to complete questionnaires to assess various behaviours such as sexual aversion (e.g. by scoring words such as 'frigid' and 'unattractive' depending on how they felt at that moment).

The research team also asked the same volunteers to complete the scans and tests while receiving a placebo infusion. The volunteers did not know whether they were receiving the hormone or the placebo at each visit. This enabled the scientists to directly compare the volunteers' normal brain activity and behaviour with their responses while receiving the hormone.

The results revealed the hormone altered activity in specific resting brain networks. An increase in this activity was linked to less aversion to sex and greater brain activity in areas involved in sexual arousal.

Specifically, the researchers found the hormone altered activity in the Default Mode Network and Salience Network, which have key roles in social and emotional processing. The hormone was also found to boost key mood connections in the brain, increasing activity in key mood centres when the volunteers were presented with negative images such as those of car crashes. Furthermore, the hormone was shown to decrease negative mood in these volunteers.

Dr Comninos concluded: "We have conducted previous studies that showed kisspeptin can activate specific brain areas involved in sex and emotions. However, this study enhances our knowledge of the hormone even further. Our findings suggest it can actually influence entire networks in the brain even when we are not doing anything, and this is linked to subsequent sexual and emotional function. Taken together, these findings provide the scientific basis to investigate kisspeptin-based treatments in patients with psychosexual and mood disorders, which are both huge health issues, and frequently occur together".

The team are now hoping to further investigate how kisspeptin affects sexual behaviours, and translate this work into patients with psychosexual and mood disorders.

Credit: 
Imperial College London

Letting the sunshine in may kill dust-dwelling bacteria

Allowing sunlight in through windows can kill bacteria that live in dust, according to a study published in the open access journal Microbiome.

Researchers at the University of Oregon found that in dark rooms, on average, 12% of bacteria were alive and able to reproduce (viable). In comparison, only 6.8% of bacteria exposed to daylight and 6.1% of bacteria exposed to UV light were viable.
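As a back-of-the-envelope illustration (our own arithmetic, not a calculation from the paper), the viability figures above imply that light roughly halved the fraction of viable bacteria relative to dark rooms:

```python
# Derived figures only: the three viability percentages are as reported in
# the study; the relative reductions are computed here for illustration.

def relative_reduction(baseline: float, treated: float) -> float:
    """Fraction by which viability drops relative to the dark-room baseline."""
    return (baseline - treated) / baseline

dark = 0.12       # average fraction of viable bacteria in dark rooms
daylight = 0.068  # fraction viable after daylight exposure
uv = 0.061        # fraction viable after UV exposure

print(f"daylight: {relative_reduction(dark, daylight):.0%} fewer viable bacteria")
print(f"uv:       {relative_reduction(dark, uv):.0%} fewer viable bacteria")
```

Both treatments cut viability by roughly 43-49% relative to darkness, consistent with the near-halving described in the text.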

Dr Fahimipour said: "Humans spend most of their time indoors, where exposure to dust particles that carry a variety of bacteria, including pathogens that can make us sick, is unavoidable. Therefore, it is important to understand how features of the buildings we occupy influence dust ecosystems and how this could affect our health."

Dust kept in the dark contained organisms closely related to species associated with respiratory diseases, which were largely absent in dust exposed to daylight.

The authors found that a smaller proportion of human skin-derived bacteria and a larger proportion of outdoor air-derived bacteria lived in dust exposed to light than in dust not exposed to light. This may suggest that daylight causes the microbiome of indoor dust to more strongly resemble bacterial communities found outdoors.

The researchers made eleven identical climate-controlled miniature rooms that mimicked real buildings and seeded them with dust collected in residential homes. The authors applied one of three glazing treatments to the windows of the rooms, so that they transmitted visible, ultraviolet or no light. After 90 days, the authors collected dust from each environment and analysed the composition, abundance, and viability of the bacteria present.

Dr Fahimipour said: "Our study supports a century-old folk wisdom, that daylight has the potential to kill microbes on dust particles, but we need more research to understand the underlying causes of shifts in the dust microbiome following light exposure. We hope that with further understanding, we could design access to daylight in buildings such as schools, offices, hospitals and homes in ways that reduce the risk of dust-borne infections."

The authors caution that the miniature room environments used in the study were exposed to only a relatively narrow range of light dosages. Although the researchers selected light dosages similar to those found in most buildings, there are many architectural and geographical features that produce lower or higher dosages of light that may need additional study.

Credit: 
BMC (BioMed Central)

An argument for inducing labor at 39 weeks

As the prevalence of maternal and fetal complications increases with advancing pregnancy beyond 39 weeks, induction of labor at 39 weeks has been proposed as a means to ensure optimal maternal and newborn health. An Ultrasound in Obstetrics & Gynecology analysis of data from five randomized controlled trials found that elective induction of labor in uncomplicated singleton pregnancy from 39 weeks' gestation is not associated with higher rates of complications and, in fact, may reduce the risk of cesarean section, hypertensive disease of pregnancy, and need for respiratory support in newborns.

"We now have enough data from uncomplicated singleton pregnancies to support the finding that induction of labor from 39 weeks' gestation seems a safe and potentially beneficial option for women," said lead author Dr. Alexandros Sotiriadis, of the Aristotle University of Thessaloniki, in Greece. "Before undertaking induction of labor in low-risk pregnancies, women need to be aware that it can lead to a more prolonged and painful process than spontaneous labor. Maternity services will also need to consider the impact of widespread labor induction on staffing and capacity of labor wards."

Credit: 
Wiley

Study documents paternal transmission of epigenetic memory via sperm

Studies of human populations and animal models suggest that a father's experiences such as diet or environmental stress can influence the health and development of his descendants. How these effects are transmitted across generations, however, remains mysterious.

Susan Strome's lab at UC Santa Cruz has been making steady progress in unraveling the mechanisms behind this phenomenon, using a tiny roundworm called Caenorhabditis elegans to show how marks on chromosomes that affect gene expression, called "epigenetic" marks, can be transmitted from parents to offspring. Her team's most recent paper, published October 17 in Nature Communications, focuses on transmission of epigenetic marks by C. elegans sperm.

In addition to documenting the transmission of epigenetic memory by sperm, the new study shows that the epigenetic information delivered by sperm to the embryo is both necessary and sufficient to guide proper development of germ cells in the offspring (germ cells give rise to eggs and sperm).

"We decided to look at C. elegans because it is such a good model for asking epigenetic questions using powerful genetic approaches," said Strome, a distinguished professor of molecular, cell, and developmental biology.

Epigenetic changes do not alter the DNA sequences of genes, but instead involve chemical modifications to either the DNA itself or the histone proteins with which DNA is packaged in the chromosomes. These modifications influence gene expression, turning genes on or off in different cells and at different stages of development. The idea that epigenetic modifications can cause changes in gene expression that are transmitted from one generation to the next, known as "transgenerational epigenetic inheritance," is now the focus of intense scientific investigation.

For many years, it was thought that sperm do not retain any histone packaging and therefore could not transmit histone-based epigenetic information to offspring. Recent studies, however, have shown that about 10 percent of histone packaging is retained in both human and mouse sperm.

"Furthermore, where the chromosomes retain histone packaging of DNA is in developmentally important regions, so those findings raised awareness of the possibility that sperm may transmit important epigenetic information to embryos," Strome said.

When her lab looked at C. elegans sperm, they found the sperm genome fully retains histone packaging. Other researchers had found the same is true for another commonly studied organism, the zebrafish.

"Like zebrafish, worms represent an extreme form of histone retention by sperm, which makes them a great system to see if this packaging really matters," Strome said.

Her lab focused on a particular epigenetic mark (designated H3K27me3) that has been well established as a mark of repressed gene expression in a wide range of organisms. The researchers found that removing this mark from sperm chromosomes causes the majority of the offspring to be sterile. Having established that the mark is important, they wanted to see if it is sufficient to guide normal germline development.

The researchers addressed this by analyzing a mutant worm in which the chromosomes from sperm and egg are separated in the first cell division after fertilization, so that one cell of the embryo inherits only sperm chromosomes and the other cell inherits only egg chromosomes (normally, each cell of an embryo inherits chromosomes from both egg and sperm). This unusual chromosome segregation pattern allowed the researchers to generate worms whose germ line inherited only sperm chromosomes and therefore only sperm epigenetic marks. Those worms turned out to be fertile and to have normal gene expression patterns.

"These findings show that the DNA packaging in sperm is important, because offspring that did not inherit normal sperm epigenetic marks were sterile, and it is sufficient for normal germline development," Strome said.

While the study shows that epigenetic information transmitted by sperm is important for normal development, it does not directly address how the life experience of a father can affect the health of his descendants. Strome's lab is investigating this question with experiments in which worms are treated with alcohol or starved before reproducing.

"The goal is to analyze how the chromatin packaging changes in the parent," she said. "Whatever gets passed on to the offspring has to go through the germ cells. We want to know which cells experience the environmental factors, how they transmit that information to the germ cells, what changes in the germ cells, and how that impacts the offspring."

By demonstrating the importance of epigenetic information carried by sperm, the current study establishes that if the environment experienced by the father changes the epigenetics of sperm chromosomes, it could affect the offspring.

Credit: 
University of California - Santa Cruz

Syracuse geologists contribute to new understanding of Mekong River incision

An international team of earth scientists has linked the establishment of the Mekong River to a period of major intensification of the Asian monsoon during the middle Miocene, about 17 million years ago, findings that supplant the assumption that the river incised in response to tectonic causes. Their findings are the subject of a paper published in Nature Geoscience on Oct. 15.

Gregory Hoke, associate professor and associate chair of Earth sciences, and recent SU doctoral student Gregory Ruetenik, now a post-doctoral researcher at the University of Wisconsin, co-authored the article with colleagues from China, France, Sweden, Australia, and the United States. Hoke's initial collaboration with first author Jungsheng Nie was co-editing a special volume on the growth of the Tibetan Plateau during the Cenozoic.

The Mekong River is the longest in Southeast Asia and the tenth largest worldwide in terms of water volume. Originating in the Tibetan Plateau, the Mekong runs through China, Myanmar, Laos, Thailand, Cambodia and Vietnam. The Chinese portion of the river (Lancang Jiang) occupies a spectacular canyon that is 1-2 kilometers deep relative to the surrounding landscape.

"When the upper half of that river was established and at what point it incised the canyon it occupies today, as well as whether it was influenced by climate or by tectonics, has been debated by geologists for the last quarter century," says Hoke. "Our work establishes when major canyon incision began and identifies the most likely mechanism responsible for that incision: an intensification of the Asian monsoon during the warmest period over the last 23 million years, the Middle Miocene climate optimum."

River incision is the natural process by which a river cuts downward into its bed, deepening the active channel. "In most cases, you can attribute incision to some sort of change in the overall relief of a landscape, which is typically interpreted to be in response to a tectonic influence," says Hoke.

The standard interpretation had been that incision of the Mekong and adjacent Yangtze basins was a response to topographic growth of the Tibetan Plateau. However, a recent string of studies has determined that the southeastern margin of Tibet was already at or near modern elevations by 40 million years ago, throwing a monkey wrench into that hypothesis.

Using thermochronology of apatite minerals extracted from bedrock samples collected along the walls of the river canyon, the scientists were able to numerically model the cooling history of the rock as the river incised, revealing synchronous downcutting 15-17 million years ago along the entire river. Synchronous downcutting points towards a non-tectonic cause for incision. Ruetenik used landscape models he developed during his SU doctoral study to test whether a stronger monsoon was capable of achieving that magnitude of downcutting over the relatively short duration of the middle Miocene climate optimum. According to Hoke, "This solves how river incision occurred in the absence of any clear pulse of plateau growth along the southeast margin of Tibet. In essence, an enhanced monsoon did a tremendous amount of work sawing through the landscape during the middle Miocene climate optimum."

Previously, Hoke studied buried river sands in cave deposits to reconstruct the incision history of the Yangtze River, the next river to the east of the Mekong. "We found a sequence of ages that look similar to those from the thermochronometers in the Mekong," he says of his findings, published in Geophysical Research Letters in 2016. He hopes subsequent studies will extend the results of this new Nature Geoscience paper to the three other big rivers that drain the southeastern margin of the Tibetan Plateau.

Credit: 
Syracuse University

More clues revealed in link between normal breast changes and invasive breast cancer

WASHINGTON -- A research team, led by investigators from Georgetown Lombardi Comprehensive Cancer Center, details how a natural and dramatic process -- changes in mammary glands to accommodate breastfeeding -- uses a molecular process believed to contribute to survival of pre-malignant breast cells.

Their mouse study, published online in Cell Death Discovery, shows that a critical switch that operates during breaks in nursing controls whether breast cells that had been providing milk will survive or die. The pro-survival pathway may be an example of a normal pathway that can be co-opted by pre-cancerous cells, including those that could become breast cancer, the researchers say.

If so, the findings may provide a strategy to block a part of the pathway that contributes to cancer, says the study lead author, Anni Wärri, PhD, an adjunct professor at Georgetown University Medical Center and the University of Turku in Finland.

"The study, for the first time, identifies the molecular switch -- the unfolded protein response (UPR), which activates autophagy -- that controls the fate of milk-producing breast cells," she says.

The fact that autophagy, a common cellular housekeeping function, is used to either keep the cells surviving or to mark them for destruction is important in cancer research, because the pro-survival function of autophagy has been seen as key in a number of different tumor types. Investigators conducted this study because the role of autophagy in both breast cancer and in normal mammary gland physiology had not been settled.

"It had not been known how this critical transition between ductal cell survival or death was regulated. Earlier studies had focused on a different pathway -- apoptosis, a different form of cell death. We show that the apoptosis pathway is separate from the UPR/autophagy switch, although the processes clearly work together," says the study's senior investigator, Robert Clarke, PhD, DSc, co-director of the Breast Cancer Program at Georgetown Lombardi and Dean for Research at Georgetown University Medical Center.

The study used mice to study two phases of breast remodeling after lactation -- a process known as involution. "Because involution occurs in the same way in all mammals, what is found in mice closely mirrors human female breast physiology," he says.

Clarke adds that this study, in no way, suggests that breastfeeding sets up a mother to develop cancer. "Breastfeeding has been clearly associated with reduced breast cancer risk. That could be because, after breastfeeding is completed, pro-death programming takes over, which may kill abnormal cells."

The two states of involution the researchers studied occur during nursing and weaning. They found that breast cells control this remodeling in opposite ways.

During nursing, breast cells use a pro-survival strategy to maintain ductal lactation during short pauses in milking. This phase is called "reversible" involution because it maintains the milk-producing cells, allowing milk to be resynthesized once a pup suckles again. But when pups are weaned from the breast, cells flip on a pro-death switch in order to return mammary tissue to its "normal" non-lactation state through "irreversible" involution.

Before this study, investigators did not know how autophagy comes into play during involution, or how its role differs between the reversible and irreversible phases.

Researchers found that a buildup in milk protein in the ducts triggers UPR, a natural cellular process, which recognizes that too much protein has been generated. The UPR then switches on the pro-survival function of autophagy, which helps maintain the viability of milk producing ductal cells. When pups start drinking again, lactation resumes and UPR/autophagy is turned down to its baseline level.

However, a considerable buildup of milk proteins in the ducts -- which occurs when mouse pups are weaned from the breast -- creates profound cellular stress that leads to autophagy switching into pro-death signaling, accompanied by an increase in apoptosis, together leading to irreversible involution.

It is the reversible stage pro-survival signals that may be sustaining pre-malignant cells, Wärri says.

"It is understandable that abnormal cells may develop in breast tissue, because the mammary gland undergoes many changes during a lifespan. The breast ductal system resembles a tree. From puberty on, each menstrual cycle prompts the tree to grow a bit, but it always looks like a leafless tree in winter," she says.

"But the tree grows leaves, as if it is summer, when a woman becomes pregnant and then starts to nurse. The cells in ducts differentiate in order to produce milk. During brief breaks in lactation, the 'leaves' shrink a little, but then bloom again when feedings resume. After weaning, the tree returns to its dormant, winter state," Wärri says. "This constant state of flux may contribute to accumulation of some abnormal cells."

Cancer may come in to play when autophagy helps abnormal cells survive, she says.

To understand the mechanisms at play, the researchers used both an autophagy-gene-deficient mouse model and drug intervention studies on wild-type mice to both inhibit (with chloroquine) and stimulate (with tunicamycin) autophagy. Chloroquine is a drug currently being studied in two clinical trials aimed at preventing ductal carcinoma in situ (DCIS) from spreading. DCIS is a collection of precancerous cells in the duct, and most DCIS does not become invasive.

They found that chloroquine, a drug commonly used to prevent and treat malaria, inhibits autophagy during involution. That action allows apoptosis to proceed, pushing the breast to revert to its normal, non-lactating state. This finding provides support for the clinical trials testing chloroquine use to keep DCIS in check in women diagnosed with DCIS, Clarke says. Results with the autophagy-gene-deficient mouse model were similar -- involution was enhanced and advanced. In contrast, stimulation of autophagy gave the opposite results: milk-producing cells were sustained and involution was delayed.

Researchers say their study could also have an important public health implication. The findings help explain why some women in sub-Saharan African countries who take chloroquine for malaria may have trouble breastfeeding, Clarke explains. "If, as we believe, chloroquine could bring lactation to an early end, we may be able to provide alternative short-term therapies that would allow breastfeeding when needed. Also, the opposite strategy, a short term use of autophagy-stimulating drug, could help women with difficulties in milk production or irregularities in nursing."

"The link between breast remodeling and breast cancer is a huge puzzle, and we have an important new piece to add to the emerging picture," Wärri says.

Credit: 
Georgetown University Medical Center

Major changes needed to improve the care of older adults who self-harm

Older adults (aged 65 and older) who self-harm have a higher risk of dying from unnatural causes (particularly suicide) compared to their peers without a history of self-harm, according to a large observational study of UK primary care published in The Lancet Psychiatry journal.

Almost 90% of older adults who had harmed themselves, including overdosing on prescription drugs or self-cutting, were not referred for a specialist mental health assessment after visiting their general practitioner (GP), and the likelihood of referral was much lower for individuals living in socially deprived areas.

This is of particular concern as non-fatal self-harm is the strongest risk factor for subsequent suicide, with older people reportedly having greater suicidal intent than any other age group.

Additionally, contrary to national clinical guidelines, a significant proportion (12%, around one in eight) of those who self-harm are prescribed a tricyclic antidepressant, which can be dangerous in overdose.

The study highlights the opportunity for earlier intervention in primary care to prevent repeated self-harm episodes and suicide in older adults.

"Older adults often face a decline in functional ability due to multiple comorbid conditions, bereavement, and social isolation, which are all strongly linked with self-harm. With the number of people aged over 65 set to rise to 25% of the UK population by 2046, healthcare services need to be aligned to meet both physical and mental health needs to ensure that vulnerable older people are identified and get the help and support they require." says Dr Cathy Morgan, University of Manchester, UK, who led the research. [1]

In recent years, there has been an increase in reports of suicide among older adults. In England and Wales between 2012 and 2015, suicide rates among men aged 60 and older rose from 12.3 to 14.8 per 100,000--which is higher than rates for male adolescents and younger male adults (10-29 years) at 10.6 per 100,000 in 2015. Suicide rates in older women have also increased over the past 5 years, converging toward those of younger women of working age (from 4.7 per 100,000 in 60-74 year olds vs 5.8 in 30-44 year olds in 2010 to 5.4 vs 6.0 in 2015). Self-harm among older people, however, has to date received comparatively little attention compared with younger age groups.

The researchers based their findings on recorded self-harm episodes among adults aged 65 years and older registered at 674 general practices in the UK between 2001 and 2014. They analysed data from the Clinical Practice Research Datalink, which is broadly representative of the UK population and is linked with hospital admissions, mortality records, and area-level social deprivation data (the Index of Multiple Deprivation). To investigate mortality risk after self-harm, they compared data from 2,454 of these patients with 48,921 patients without a history of self-harm (matched by age, gender, and general practice). Self-harm includes intentional injury and overdosing on prescription medication.

During the 13-year study period, 4,124 adults aged 65 years or older had an episode of self-harm recorded in general practice patient notes. Over half (58%) of these were women, and many (62%) had previously received mental health diagnoses.

Drug overdose was the most common method of self-harm (81%), followed by self-cutting (6%).

Importantly, only 12% (335/2,854) of over 65s who had self-harmed were referred to mental health services within 12 months of their initial self-harm episode.

Referrals were a third less likely for older adults registered at practices located in the most deprived areas (8%, 48/578) than those from more affluent communities (13%, 65/493), even though the incidence of self-harm was higher in these areas (table 1).
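The "a third less likely" comparison can be checked directly from the quoted counts (a crude, unadjusted calculation on our part, not the study's statistical model):

```python
# Referral counts as quoted in the press release; the ratio is our own
# crude arithmetic, not the study's adjusted estimate.

deprived_referred, deprived_total = 48, 578   # most deprived practice areas
affluent_referred, affluent_total = 65, 493   # more affluent practice areas

p_deprived = deprived_referred / deprived_total   # about 8%
p_affluent = affluent_referred / affluent_total   # about 13%

ratio = p_deprived / p_affluent
print(f"{p_deprived:.1%} vs {p_affluent:.1%}: referral ratio {ratio:.2f}")
```

The crude ratio comes out near 0.63, i.e. referrals roughly a third less likely in the most deprived areas, matching the text.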

Almost three-quarters of people who had harmed themselves were prescribed psychotropic medications, most commonly antidepressants. Contrary to National Institute for Health and Care Excellence (NICE) guidance, 12% (336/2,854) of older adults who self-harmed were prescribed a tricyclic antidepressant within a year of harming themselves. [2]

One in seven (14.4%) older adults self-harmed again within a year of the initial episode.

Compared to the general population, older adults who had harmed themselves were twice as likely to have a history of a psychiatric illness (1,522/2,454 vs 14,455/48,921), and were 20% more likely to experience a major physical illness (1,760/2,454 vs 29,341/48,921) such as liver disease and heart failure.
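The crude ratios behind "twice as likely" and "20% more likely" can be reproduced from the raw counts quoted above (again our own unadjusted arithmetic; the much larger mortality ratios come from the study's adjusted models and cannot be recovered from raw counts alone):

```python
# Sanity check of the crude prevalence ratios from the quoted raw counts.
# These are illustrative calculations, not the study's adjusted estimates.

def prevalence_ratio(cases_a: int, total_a: int,
                     cases_b: int, total_b: int) -> float:
    """Ratio of the proportion affected in group A to that in group B."""
    return (cases_a / total_a) / (cases_b / total_b)

# Psychiatric illness history: self-harm cohort vs matched comparison cohort
psych = prevalence_ratio(1522, 2454, 14455, 48921)

# Major physical illness, same two cohorts
physical = prevalence_ratio(1760, 2454, 29341, 48921)

print(f"psychiatric history: {psych:.1f}x (reported as 'twice as likely')")
print(f"major physical illness: {physical:.2f}x (reported as '20% more likely')")
```

The counts give ratios of about 2.1 and 1.2, consistent with the prose.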

Older adults who harmed themselves were also 19 times more likely to die from unnatural causes (mostly suicides, accidental poisonings, and other accidents) in the first year after a self-harm episode than the general population (29/330 vs 41/2,415 deaths), and 145 times more likely to die of suicide during the 13-year follow up--although suicide was rare in absolute terms (36 vs 12 deaths by suicide).

Co-author Professor Nav Kapur, University of Manchester, UK, adds: "We sometimes think of self-harm as a problem in younger people and of course it is. But it affects older adults too and the concerning issue is the link with increased risk of suicide. We hope our study will alert clinicians, service planners, and policy makers to the need to implement preventative measures for this potentially vulnerable group of people. Referral and management of mental health conditions are likely to be key." [1]

According to co-author Professor Carolyn Chew-Graham, GP Principal in Central Manchester and Professor of General Practice at Keele University, UK: "Since drug ingestion is one of the main methods of self-harm, we highlight the need to prescribe less toxic medication in older adults for the management of both mental illness and pain related conditions. We also recommend more frequent follow-up of a patient following an initial episode of self-harm." [1]

The authors point to several limitations of their study, including the possibility that some cases of self-harm may go unreported, and that statistics on suicide are likely to be underestimated perhaps due to cultural barriers or social stigma that discourage coroners from recording a conclusion of suicide.

Writing in a linked Comment, Associate Professor Rebecca Mitchell from the Australian Institute of Health Innovation, Macquarie University, Australia says: "Further research still needs to be done on self-harm among older adults, including the replication of Morgan and colleagues' research in other countries, to increase our understanding of how primary care could present an early window of opportunity to prevent repeated self-harm attempts and unnatural deaths. Exploration of self-harm and suicide risk among older adults in long-term care facilities has been scant. Little is known regarding the factors that might influence or be protective of the risk of self-harm among residents in long-term care compared with older adults living in the general community."

Credit: 
The Lancet

New study answers old questions about why tropical forests are so ecologically diverse

image: In this photo of the tropical rain forest canopy in Panama, Handroanthus guayacan, the focus of a new Brown/UCLA study, blooms in yellow while Jacaranda copaia has blue flowers and Cavanillesia platanifolia has pink fruit. Taking advantage of regular annual changes, like flowering and fruiting, allowed Brown ecologist Jim Kellner to track individual trees through time and map distributions of some species throughout a large area.

Image: 
Jonathan Dandois and Helene Muller-Landau/Smithsonian Tropical Research Institute.

PROVIDENCE, R.I. [Brown University] -- Working with high-resolution satellite imaging technology, researchers from Brown University and the University of California, Los Angeles have uncovered new clues in an age-old question about why tropical forests are so ecologically diverse.

In studying Handroanthus guayacan, a common tropical tree species, over a 10-year period, they found that the tree population increased mainly in locations where the tree is rare, rather than in locations where it is common.

"There are more tree species living in an area not much larger than a few football fields in Panama than in all of North America north of Mexico combined," said Jim Kellner, first author on the paper and an assistant professor of ecology and evolutionary biology at Brown. "How this diversity originated, and why it persists over time is a paradox that has challenged naturalists for more than a century."

Until now.

"The take-home of the study is that there is a 'negative feedback' on population growth," Kellner said, which puts the brakes on population growth in locations where the species is common.

The findings confirm a prediction from the 1970s, which posited that tropical forests are diverse because natural enemies keep populations in check. An enemy could be a seed eater, an herbivore or a pathogen, said Kellner, who is affiliated with the Institute at Brown for Environment and Society.

For example, consider an oak tree and a squirrel. The squirrel eats acorns and prefers to forage where oak trees are abundant. A lone acorn in the middle of a grove of maples is likely to go unnoticed by a squirrel, whereas many acorns in an oak grove will be eaten. If this kind of behavior is widespread in tropical rainforests, it could keep species from becoming too common, Kellner said.

Earlier studies have shown that this negative feedback phenomenon holds true among young trees -- seeds, seedlings and saplings -- but ecologists hadn't been able to determine whether it influences adult trees, the reproductive portion of populations, he said.

"It takes decades for trees to become reproductive in tropical forests, and the problem is compounded by how rare each species is," Kellner said. "We found that for this species, you would have to search about 250 acres to find one new adult tree every year."

Such a search isn't feasible on foot, but remote sensing can overcome the challenge of observing large areas.

Kellner and co-author Stephen Hubbell, an ecology professor emeritus at UCLA, used high-resolution satellite images to track individuals on Barro Colorado Island, a six-square-mile island in the middle of the Panama Canal, over 10 years. They looked for Handroanthus guayacan, a tropical rainforest tree that produces bright yellow flowers for a few days a year.

"By timing the satellite image acquisition with seasonal flowering, we were able to identify most of the adults for this species on the island," said Kellner.

They found 1,006 adult trees. Starting in 2012 and looking backward over the 10-year study period, Kellner and Hubbell were able to identify when new trees joined the adult population for the first time. They used advanced statistical methods to make sure that they were in fact identifying new adults and not just trees that had skipped a year of flowering or had flowered early or late.

The researchers found that negative feedback affected the abundance of new adult trees and that it can influence the population of new adult trees in an area of almost 100 football fields. This contrasts with prior studies of juvenile trees, which found the effects of host-specific enemies are usually restricted to small areas, Kellner said.

To confirm the locations of trees from the satellite data, they went to the island and independently found 123 adult trees of the same species. Of these, 89 percent had been detected in the high-resolution images, suggesting that their data are a nearly complete census of the species.
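The detection-rate logic can be made concrete with a quick back-of-the-envelope calculation; the exact detected count below is an assumption consistent with the reported 89 percent, and the population estimate is an illustration of the reasoning, not a figure from the study:

```python
# Ground-truth check from the study: 123 independently located trees,
# of which about 89% appeared in the satellite imagery.
ground_truth = 123
detected = 110  # assumed count; 110/123 rounds to 89% (exact count not given)
detection_rate = detected / ground_truth  # ~0.894

# If every adult is equally likely to be detected, the 1,006 trees found
# in the imagery imply a total population of roughly 1006 / 0.894,
# i.e. the satellite census missed only on the order of a hundred trees.
implied_total = round(1006 / detection_rate)
```

Under these assumptions the implied total is about 1,125 adults, which is why the authors can describe their data as a nearly complete census.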

Kellner said the implications could be broad.

"I can't think of any idea in ecology that is more important than population dynamics," he said. "It's important for everything from fishing licenses to forecasting disease outbreaks."

The research was published on Monday, Oct. 15, in the Proceedings of the National Academy of Sciences.

Credit: 
Brown University

Sound, vibration recognition boost context-aware computing

image: Carnegie Mellon University researchers are using laser vibrometry -- a method similar to one once used by the KGB for eavesdropping -- to monitor vibrations and movements of objects, enabling smart devices to be aware of human activity.

Image: 
Carnegie Mellon University

PITTSBURGH--Smart devices can seem dumb if they don't understand where they are or what people around them are doing. Carnegie Mellon University researchers say this environmental awareness can be enhanced by complementary methods for analyzing sound and vibrations.

"A smart speaker sitting on a kitchen countertop cannot figure out if it is in a kitchen, let alone know what a person is doing in a kitchen," said Chris Harrison, assistant professor in CMU's Human-Computer Interaction Institute (HCII). "But if these devices understood what was happening around them, they could be much more helpful."

Harrison and colleagues in the Future Interfaces Group will report today at the Association for Computing Machinery's User Interface Software and Technology Symposium in Berlin about two approaches to this problem -- one that uses the most ubiquitous of sensors, the microphone, and another that employs a modern-day version of eavesdropping technology used by the KGB in the 1950s.

In the first case, the researchers have sought to develop a sound-based activity recognition system, called Ubicoustics. This system would use the existing microphones in smart speakers, smartphones and smartwatches, enabling them to recognize sounds associated with places, such as bedrooms, kitchens, workshops, entrances and offices.

"The main idea here is to leverage the professional sound-effect libraries typically used in the entertainment industry," said Gierad Laput, a Ph.D. student in HCII. "They are clean, properly labeled, well-segmented and diverse. Plus, we can transform and project them into hundreds of different variations, creating volumes of data perfect for training deep-learning models.

"This system could be deployed to an existing device as a software update and work immediately," he added.

The plug-and-play system could work in any environment. It could alert the user when someone knocks on the front door, for instance, or move to the next step in a recipe when it detects an activity, such as running a blender or chopping.

The researchers, including Karan Ahuja, a Ph.D. student in HCII, and Mayank Goel, assistant professor in the Institute for Software Research, began with an existing model for labeling sounds and tuned it using sound effects from the professional libraries, such as kitchen appliances, power tools, hair dryers, keyboards and other context-specific sounds. They then synthetically altered the sounds to create hundreds of variations.
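The augmentation step can be sketched in a few lines. The specific transforms below (random gain, additive noise, circular time shift) are illustrative assumptions, not the exact pipeline the researchers used:

```python
import numpy as np

def augment_clip(clip, rng):
    """Create a synthetic variant of a labeled sound clip:
    random gain, low-level additive noise, and a circular time shift."""
    gain = rng.uniform(0.5, 1.5)                     # volume variation
    noise = rng.normal(0.0, 0.01, size=clip.shape)   # background noise
    shift = rng.integers(0, len(clip))               # random time offset
    return np.roll(clip * gain + noise, shift)

# One "clean" library clip becomes many labeled training variants.
rng = np.random.default_rng(0)
clip = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # 1 s, 440 Hz tone
variants = [augment_clip(clip, rng) for _ in range(100)]
```

Each variant keeps the original label, so a single well-segmented sound effect yields a large, diverse batch of training data for a deep-learning model.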

Laput said recognizing sounds and placing them in the correct context is challenging, in part because multiple sounds are often present and can interfere with each other. In their tests, Ubicoustics had an accuracy of about 80 percent -- competitive with human accuracy, but not yet good enough to support user applications. Better microphones, higher sampling rates and different model architectures all might increase accuracy with further research.

A video explaining Ubicoustics is available: https://www.youtube.com/watch?v=N5ZaBeB07u4

In a separate paper, HCII Ph.D. student Yang Zhang, along with Laput and Harrison, describes what they call Vibrosight, which can detect vibrations in specific locations in a room using laser vibrometry. It is similar to the light-based devices the KGB once used to detect vibrations on reflective surfaces such as windows, allowing them to listen in on the conversations that generated the vibrations.

"The cool thing about vibration is that it is a byproduct of most human activity," Zhang said. Running on a treadmill, pounding a hammer or typing on a keyboard all create vibrations that can be detected at a distance. "The other cool thing is that vibrations are localized to a surface," he added. Unlike microphones, the vibrations of one activity don't interfere with vibrations from another. And unlike microphones and cameras, monitoring vibrations in specific locations makes this technique discreet and preserves privacy.

This method does require a special sensor, a low-power laser combined with a motorized, steerable mirror. The researchers built their experimental device for about $80. Reflective tags -- the same material used to make bikes and pedestrians more visible at night -- are applied to the objects to be monitored. The sensor can be mounted in a corner of a room and can monitor vibrations for multiple objects.

Zhang said the sensor can detect whether a device is on or off with 98 percent accuracy and identify the device with 92 percent accuracy, based on the object's vibration profile. It can also detect movement, such as that of a chair when someone sits in it, and it knows when someone has blocked the sensor's view of a tag, such as when someone is using a sink or an eyewash station.
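One plausible way to match an object's "vibration profile" is to compare the normalized frequency spectrum of a new measurement against stored fingerprints. This is a rough sketch of the idea, not Vibrosight's actual algorithm, and the device names and frequencies below are hypothetical:

```python
import numpy as np

FS = 1000  # sample rate in Hz (illustrative)

def profile(signal):
    """Vibration 'fingerprint': normalized magnitude spectrum."""
    mag = np.abs(np.fft.rfft(signal))
    return mag / np.linalg.norm(mag)

t = np.arange(FS) / FS
# Hypothetical stored profiles: each device vibrates at a dominant frequency.
library = {
    "blender": profile(np.sin(2 * np.pi * 120 * t)),
    "treadmill": profile(np.sin(2 * np.pi * 7 * t)),
}

# A noisy new measurement is matched to the closest stored profile
# by spectral similarity (dot product of unit-norm spectra).
rng = np.random.default_rng(2)
measured = np.sin(2 * np.pi * 120 * t) + rng.normal(scale=0.3, size=FS)
best = max(library, key=lambda name: library[name] @ profile(measured))
```

Because the fingerprint lives in the frequency domain, the same matching logic can also report "on" versus "off" by thresholding overall vibration energy.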

Credit: 
Carnegie Mellon University

Financial impacts of cancer found to intensify disease burden among German patients

image: Prof. Eva Winkler, study author, medical oncologist at the National Centre for Tumour Diseases (NCT) in Heidelberg.

Image: 
© European Society for Medical Oncology

Lugano-Munich, 16 October 2018 - A study (1) conducted in Germany draws attention to the fact that the socio-economic burden of cancer is real in Europe too, and not only in the context of the US healthcare system where it has been associated with higher morbidity and mortality. The results to be presented at the ESMO 2018 Congress in Munich show that income loss is the main source of perceived financial hardship, and that this is associated with adverse psychological effects in patients.

The work also highlights the absence of clear definitions and valid instruments with which to examine this issue. Prof. Eva Winkler, study author, medical oncologist at the National Centre for Tumour Diseases (NCT) in Heidelberg, explained the background: "We conducted a systematic literature review of the tools used to measure the subjective financial burden of cancer patients: of the 39 studies we found, most came from the USA, and the instruments they used were either not transferable to the German context or not sufficiently focused on the subject," she said.

Study co-author Dr. Katja Mehlis from the NCT added: "We were, however, able to identify three broad dimensions through which subjective financial burden could be assessed: material aspects, psychological effects and behavioural changes such as support seeking and coping strategies. Based on this, we developed our own, yet non-validated set of questions covering income, cancer-related out-of-pocket costs, distress and lifestyle changes."

A total of 247 patients, 122 diagnosed with neuroendocrine tumours and 125 treated for colorectal cancer, responded to the survey between November 2016 and March 2017. The results brought to light financial impacts in a significant proportion of patients: 80.6% of respondents stated that they faced higher out-of-pocket costs related to their illness.

Although most medical costs in Germany are covered by a person's health insurance, patients do have to contribute co-payments for prescription drugs. Cancer patients may additionally face travel expenses to get to the hospital or medical centre, as well as potentially having to pay for care, housekeeping or childcare. For over three quarters of the patients who responded to the survey, disease-related out-of-pocket costs amounted to less than 200 euros monthly.

Cancer-related income loss was reported by 37.2% of survey participants. "In our study, this effect was more serious than out-of-pocket costs, as the losses suffered exceeded 800 euros per month in almost half of cases. They were mainly due to patients being unable to work or being forced to reduce their working hours," said Mehlis.

The analysis further showed that high financial loss relative to income was significantly associated with lower patient-rated quality of life and greater distress. "The financial impacts that a majority of these patients experienced seem to have contributed to the burden of their disease: the bigger the loss was in proportion to their previous income level, the more negatively they rated their personal situation," Winkler observed.

"More research is needed to determine what actions are necessary at the system level - for example an extension of the period of eligibility for sickness benefits - or at the individual level, like targeted consulting and support services," she said. "To do this, we will need a valid instrument to measure 'subjective financial burden' in the German context, based on a precise definition of the concept."

Dr. Dirk Arnold of Asklepios Tumorzentrum in Hamburg, Germany, commented for ESMO: "There have been efforts in Germany, including by national entities like the Federal Joint Committee (2) and the Robert Koch Institute (3), to look at the costs of oncology treatment for cancer patients - but they have focused only on drug and procedure-related expenses. With this new study, we can see not just that the financial implications of a cancer diagnosis are much broader, but also that the monetary losses associated with this disease have significant psychosocial consequences."

"We should draw lessons from these findings: just as cancer patients receive consultations about lifestyle issues, like nutrition, so too should the financial aspect somehow be integrated into the social counselling programmes we offer them," Arnold continued. "The fact that medical expenses for patients in Germany - and Europe more generally - are relatively low compared to other parts of the world, should not lead us to underestimate the importance of their perceived financial burden and leave them alone with it. Going forward, it would be interesting to see if assessments of this burden in other European countries produce similar results."

Arnold added: "Alleviating the financial burden of cancer is one of ESMO's key commitments: earlier this year, ESMO issued a paper (4) on this subject in the context of the implementation of the 2017 World Health Assembly Resolution on Cancer prevention and control." (5)

This paper is the first to articulate the response and commitment of oncologists to advance global cancer control through the framework of the 2017 WHA Cancer Resolution and universal health coverage. It addresses key topics like cancer prevention, timely access to treatment and care, palliative and survivorship care, as well as comprehensive data collection through robust cancer registries. The authors also offer a concrete set of actions and policy recommendations for improving patient care.

ESMO's commitment to lessening the burden of cancer is not new: sustainable cancer care is one of the three pillars of the Society's 2020 Vision, (6) and various initiatives have been launched in this field over the years. Among other things, ESMO contributed in 2016 to the revision of the WHO Model List of Essential Medicines (7) - the list of vital medicines that should be available to patients everywhere for free or at affordable prices - and added 16 anti-cancer drugs including targeted therapies. With the introduction of its own classification tool, the ESMO Magnitude of Clinical Benefit Scale, (8) the Society provided a standardised, evidence-based approach to evaluating cancer medicines, thus helping to guide health systems in their decision-making and resource allocation. In collaboration with the Economist Intelligence Unit (EIU), ESMO also published in 2017 a report (9) on the shortage of inexpensive cancer medicines in Europe, raising awareness of a critical issue that has immediate consequences for patient care and treatment outcomes.

Credit: 
European Society for Medical Oncology

Fast, accurate estimation of the Earth's magnetic field for natural disaster detection

image: Deep Neural Networks (DNNs) have been applied to accurately predict the magnetic field of the Earth at specific locations.

Image: 
Kan Okubo

Tokyo, Japan - Researchers from Tokyo Metropolitan University have applied machine-learning techniques to achieve fast, accurate estimates of local geomagnetic fields using data taken at multiple observation points, potentially allowing detection of changes caused by earthquakes and tsunamis. A deep neural network (DNN) model was developed and trained using existing data; the result is a fast, efficient method for estimating magnetic fields for unprecedentedly early detection of natural disasters. This is vital for developing effective warning systems that might help reduce casualties and widespread damage.

The devastation caused by earthquakes and tsunamis leaves little doubt that an effective means to predict their incidence is of paramount importance. Systems already exist to warn people just before the arrival of seismic waves; yet it is often the case that the S-wave (secondary wave), the later part of the quake, has already arrived by the time the warning is given. A faster, more accurate method is sorely needed to give local residents time to seek safety and minimize casualties.

It is known that earthquakes and tsunamis are accompanied by localized changes in the geomagnetic field. For earthquakes, it is primarily what is known as a piezo-magnetic effect, where the release of a massive amount of accumulated stress along a fault causes local changes in geomagnetic field; for tsunamis, it is the sudden, vast movement of the sea that causes variations in atmospheric pressure. This in turn affects the ionosphere, subsequently changing the geomagnetic field. Both can be detected by a network of observation points at various locations. The major benefit of such an approach is speed; remembering that electromagnetic waves travel at the speed of light, we can instantaneously detect the incidence of an event by observing changes in geomagnetic field.

However, how can we tell whether the detected field is anomalous or not? The geomagnetic field at various locations is a fluctuating signal; the entire method is predicated on knowing what the "normal" field at a location is.

Thus, Yuta Katori and Assoc. Prof. Kan Okubo from Tokyo Metropolitan University set out to develop a method to take measurements at multiple locations around Japan and create an estimate of the geomagnetic field at different, specific observation points. Specifically, they applied a state-of-the-art machine-learning algorithm known as a Deep Neural Network (DNN), modeled on how neurons are connected inside the human brain. By feeding the algorithm a vast amount of input taken from historical measurements, they let the algorithm create and optimize an extremely complex, multi-layered set of operations that most effectively maps the data to what was actually measured. Using half a million data points taken over 2015, they were able to create a network that can estimate the magnetic field at the observation point with unprecedented accuracy.
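The article does not give the network's architecture, but the underlying idea, learning what the "normal" field at a target station looks like from readings at other stations and flagging large deviations, can be sketched with a simple linear model standing in for the DNN. All data below is synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: readings at 4 remote stations (columns) and a
# target station whose field normally follows them plus small noise.
X = rng.normal(size=(500, 4))
w_true = np.array([0.5, -0.2, 0.8, 0.1])
y = X @ w_true + rng.normal(scale=0.05, size=500)

# Fit the "normal field" model (least squares here; a DNN in the study).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# At monitoring time, a large residual flags a possible geomagnetic anomaly.
x_now = rng.normal(size=4)
normal_now = x_now @ w
anomalous_reading = normal_now + 1.0   # injected disturbance
residual = abs(anomalous_reading - normal_now)
is_anomaly = residual > 0.5            # threshold set from baseline noise
```

The DNN plays the same role as the fitted weights here, except that it can capture the nonlinear relationships between stations that a linear fit cannot.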

Given the relatively low computational cost of DNNs, the system may potentially be paired with a network of high sensitivity detectors to achieve lightning-fast detection of earthquakes and tsunamis, delivering an effective warning system that can minimize damage and save lives.

Credit: 
Tokyo Metropolitan University

Novel catalyst for high-energy aluminum-air flow batteries

image: Professor Jaephil Cho (left) and his research team in the School of Energy and Chemical Engineering at UNIST.

Image: 
UNIST

A recent study affiliated with UNIST has introduced a novel electric vehicle (EV) battery technology that is more energy-efficient than gasoline-powered engines. The new technology enables drivers to simply have their battery packs replaced instead of charging them, which would ultimately solve the slow-charging problem of existing EV batteries. It also provides a lightweight, high-energy-density power source with little risk of fire or explosion.

This breakthrough has been led by Professor Jaephil Cho and his research team in the School of Energy and Chemical Engineering at UNIST. Their findings were published in the journal Nature Communications on September 13, 2018.

In the study, the research team developed a new type of aluminum-air flow battery for EVs. Compared to existing lithium-ion batteries (LIBs), the new battery offers higher energy density, lower cost, longer cycle life, and greater safety.

Aluminum-air batteries are primary cells, which means they cannot be recharged by conventional means. When applied to EVs, the battery is refueled by simply replacing the aluminum plate and electrolyte. Considering the actual energy density of gasoline and aluminum of the same weight, aluminum is superior.

"Gasoline has an energy density of 1,700 Wh/kg, while an aluminum-air flow battery exhibits a much higher energy density of 2,500 Wh/kg with its replaceable electrolyte and aluminum," says Professor Cho. "This means, with 1 kg of aluminum, we can build a battery that enables an electric car to run up to 700 km."
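Using the figures quoted by Professor Cho, a rough comparison of usable energy per kilogram can be sketched. The conversion efficiencies below are illustrative assumptions, not values from the study:

```python
# Energy densities quoted in the article (Wh per kg of fuel or metal).
gasoline_wh_per_kg = 1700
al_air_wh_per_kg = 2500

# Illustrative conversion efficiencies (assumed, not from the study):
# internal-combustion engines waste most of the fuel's energy as heat,
# while batteries deliver most of their stored energy to the wheels.
ice_efficiency = 0.25
battery_to_wheel_efficiency = 0.90

usable_gasoline = gasoline_wh_per_kg * ice_efficiency
usable_aluminum = al_air_wh_per_kg * battery_to_wheel_efficiency
```

Under these assumptions the aluminum-air cell delivers several times the usable energy of the same mass of gasoline, which is the sense in which "aluminum is superior".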

The new battery works much like other metal-air batteries: it produces electricity from the reaction of oxygen in the air with aluminum. Metal-air batteries, especially aluminum-air batteries, have attracted much attention as next-generation batteries due to their energy density, which is higher than that of LIBs. Indeed, batteries that use aluminum, a lightweight metal, are lighter, cheaper, and have a greater capacity than a traditional LIB.

While aluminum-air batteries have among the highest energy densities of all batteries, they are not widely used because of the high cost of anodes and the need to remove byproducts when using traditional electrolytes. Professor Cho's team addressed this by developing a flow-based aluminum-air battery in which the electrolyte is continuously circulated, alleviating side reactions in the cell.

In the study, the research team has prepared a silver nanoparticle seed-mediated silver manganate nanoplate architecture for the oxygen reduction reaction (ORR). They discovered that the silver atom can migrate into the available crystal lattice and rearrange manganese oxide structure, thus creating abundant surface dislocations.

Thanks to improved longevity and energy density, the team anticipates that their aluminum-air flow battery system could potentially help bring more EVs on the road with greater range and substantially less weight, with zero risk of explosion.

"This innovative strategy prevented the precipitation of solid by-product in the cell and dissolution of a precious metal in air electrode," says Jaechan Ryu, first author of the study. "We believe that our AAFB system has the potential for a cost-effective and safe next-generation energy conversion system."

The discharge capacity of the aluminum-air flow battery increased 17-fold compared to conventional aluminum-air batteries. Moreover, the capacity of the newly developed silver-manganese oxide-based catalyst was comparable to that of conventional platinum catalysts (Pt/C). As silver is 50 times less expensive than platinum, it is also competitive in terms of price.

Credit: 
Ulsan National Institute of Science and Technology (UNIST)

NASA tracks post-Tropical Cyclone Michael's heavy rains to Northeastern US

image: At 3:25 a.m. EDT (0725 UTC) on Oct. 12, 2018 the MODIS instrument that flies aboard NASA's Aqua satellite gathered infrared data on Post-Tropical Cyclone Michael. Strongest thunderstorms appeared over eastern Pennsylvania, New Jersey, southeastern New York, Connecticut, Rhode Island and Massachusetts. In those areas, storms had cloud top temperatures as cold as minus 63 degrees Fahrenheit (minus 53 Celsius).

Image: 
NASA/NRL

NASA satellite imagery showed that although Michael's center was off-shore of the Delmarva Peninsula and over the western Atlantic Ocean, rain from its western quadrant was affecting the northeastern U.S.

At 3:25 a.m. EDT (0725 UTC) on Oct. 12, the MODIS instrument that flies aboard NASA's Aqua satellite gathered infrared data on Post-Tropical Cyclone Michael. Infrared data provides temperature information. Strongest thunderstorms appeared over eastern Pennsylvania, New Jersey, southeastern New York, Connecticut, Rhode Island and Massachusetts. In those areas, storms had cloud top temperatures as cold as minus 63 degrees Fahrenheit (minus 53 Celsius). NASA research has shown that cloud tops with temperatures that cold were high in the troposphere and have the ability to generate heavy rain.

All coastal tropical cyclone warnings and watches have been discontinued; there are none currently in effect.

The Global Precipitation Measurement mission or GPM core satellite provided an analysis of the rate at which rain was falling throughout Post-Tropical Cyclone Michael. The GPM core satellite measured rainfall within Post-Tropical Storm Michael on Oct. 12. GPM found the heaviest rainfall was north of Michael's center, falling at a rate of over 1.6 inches (40 mm) per hour south of Long Island, New York.

The National Hurricane Center or NHC noted at 5 a.m. EDT (0900 UTC), the center of Post-Tropical Cyclone Michael was located near latitude 38.0 degrees north and longitude 73.1 degrees west. Michael's center was about 185 miles (300 km) east-northeast of Norfolk, Virginia. The post-tropical cyclone is moving toward the east-northeast near 29 mph (46 kph), and this motion is expected to continue with an increase in forward speed during the next couple of days. On the forecast track, the center of Michael will move away from the United States today and move rapidly across the open Atlantic Ocean tonight through Sunday. Maximum sustained winds have increased near 65 mph (100 kph) with higher gusts. Some additional strengthening is expected today and tonight as the post-tropical cyclone moves across the Atlantic.

NHC expects Michael to cross the North Atlantic Ocean and head toward Europe over the next two days.

Credit: 
NASA/Goddard Space Flight Center

NASA sees Tropical Cyclone Luban nearing Oman

image: NASA-NOAA's Suomi NPP satellite passed over the Northern Indian Ocean and captured a visible image of Tropical Cyclone Luban nearing Oman's coast on Oct. 11.

Image: 
NASA Worldview, Earth Observing System Data and Information System (EOSDIS)/NOAA

Tropical Cyclone Luban continued to track toward Oman as NASA-NOAA's Suomi NPP satellite passed over the Northern Indian Ocean.

Suomi NPP passed over Luban on Oct. 11 and the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument provided a visible image of the storm. The VIIRS image showed Luban stretched from the Gulf of Marisa south to Socotra Island. Luban had a symmetrical shape with a cloud-filled eye, surrounded by powerful thunderstorms.

At 11 a.m. EDT (1500 UTC), the center of Tropical Cyclone Luban was located near latitude 14.6 degrees north and longitude 56.8 degrees east. Luban was moving toward the west. Maximum sustained winds are near 52 mph (45 knots/83 kph) with higher gusts.

Luban is forecast to move north of Socotra Island and make landfall in Oman between Lukalla and Salalah on Oct. 14.

Credit: 
NASA/Goddard Space Flight Center

Satellite finds wind shear battering Tropical Storm Nadine

image: NASA-NOAA's Suomi NPP satellite passed over the Eastern Atlantic Ocean and captured a visible image of Tropical Storm Nadine. Nadine appeared devoid of rainfall except in the northeastern quadrant. Clouds around the center appeared as a wispy swirl.

Image: 
NASA Worldview, Earth Observing System Data and Information System (EOSDIS)/NOAA

Tropical Storm Nadine continues to be battered by vertical wind shear, winds that can tear a tropical cyclone apart. NASA-NOAA's Suomi NPP satellite captured a visible image that showed the bulk of Nadine's clouds were pushed northeast of the center.

Suomi NPP passed over Nadine on Oct. 11 and the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument provided a visible image of the storm. The VIIRS image showed that Nadine appeared devoid of rainfall except in the northeastern quadrant. Southwesterly wind shear had pushed the bulk of clouds and showers east of its center. Clouds around the center appeared as a wispy swirl.

In general, wind shear is a measure of how the speed and direction of winds change with altitude. Tropical cyclones are like rotating cylinders of winds. Each level needs to be stacked on top of the others vertically in order for the storm to maintain strength or intensify. Wind shear occurs when winds at different levels of the atmosphere push against the rotating cylinder of winds, weakening the rotation by pushing it apart at different levels.

At 11 a.m. EDT (1500 UTC), the center of Tropical Storm Nadine was located near latitude 16.0 degrees north and longitude 36.2 degrees west. Nadine is moving toward the west-northwest near 8 mph (13 kph). A west-northwestward to westward motion with an increase in forward speed is expected through the weekend. Maximum sustained winds have decreased to near 45 mph (75 kph) with higher gusts. Weakening is forecast during the next couple of days, and Nadine is expected to dissipate by Sunday.

Credit: 
NASA/Goddard Space Flight Center