New system detects faint communications signals using the principles of quantum physics

image: The incoming signal (red, lower left) proceeds through a beam splitter to the photon detector, which has an attached time register (top right). The receiver sends the reference beam to the beam splitter to cancel the incoming pulse so that no light is detected. If even one photon is detected, it means that the receiver used an incorrect reference beam, which needs to be adjusted. The receiver uses exact times of photon detection to arrive at the right adjustment with fewer guesses. The combination of recorded detection times and the history of reference beam frequencies are used to find the frequency of the incoming signal.

Image: 
NIST

Researchers at the National Institute of Standards and Technology (NIST) have devised and demonstrated a system that could dramatically increase the performance of communications networks while enabling record-low error rates in detecting even the faintest of signals, potentially decreasing the total amount of energy required for state-of-the-art networks by a factor of 10 to 100.

The proof-of-principle system consists of a novel receiver and corresponding signal-processing technique that, unlike the methods used in today's networks, are entirely based on the properties of quantum physics and thereby capable of handling even extremely weak signals with pulses that carry many bits of data.

"We built the communication test bed using off-the-shelf components to demonstrate that quantum-measurement-enabled communication can potentially be scaled up for widespread commercial use," said Ivan Burenkov, a physicist at the Joint Quantum Institute, a research partnership between NIST and the University of Maryland. Burenkov and his colleagues report the results in Physical Review X Quantum. "Our effort shows that quantum measurements offer valuable, heretofore unforeseen advantages for telecommunications, leading to revolutionary improvements in channel bandwidth and energy efficiency."

Modern communications systems work by converting information into a laser-generated stream of digital light pulses in which information is encoded -- in the form of changes to the properties of the light waves -- for transfer and then decoded when it reaches the receiver. The train of pulses grows fainter as it travels along transmission channels, and conventional electronic technology for receiving and decoding data has reached the limit of its ability to precisely detect the information in such attenuated signals.

The signal pulse can dwindle until it is as weak as a few photons -- or even less than one on average. At that point, inevitable random quantum fluctuations called "shot noise" make accurate reception impossible by normal ("classical," as opposed to quantum) technology because the uncertainty caused by the noise makes up such a large part of the diminished signal. As a result, existing systems must amplify the signals repeatedly along the transmission line, at considerable energy cost, keeping them strong enough to detect reliably.

The NIST team's system can eliminate the need for amplifiers because it can reliably process even extremely feeble signal pulses: "The total energy required to transmit one bit becomes a fundamental factor hindering the development of networks," said Sergey Polyakov, senior scientist on the NIST team. "The goal is to reduce the sum of energy required by lasers, amplifiers, detectors, and support equipment to reliably transmit information over longer distances. In our work here we demonstrated that with the help of quantum measurement even faint laser pulses can be used to communicate multiple bits of information -- a necessary step towards this goal."

To increase the rate at which information can be transmitted, network researchers are finding ways to encode more information per pulse by using additional properties of the light wave. So a single laser light pulse, depending on how it was originally prepared for transmission, can carry multiple bits of data. To improve detection accuracy, quantum-enhanced receivers can be fitted onto classical network systems. To date, those hybrid combinations can process up to two bits per pulse. The NIST quantum system uses up to 16 distinct laser pulses to encode as many as four bits.
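The relationship between alphabet size and bits per pulse is simple binary arithmetic: a pulse drawn from an alphabet of m distinguishable states carries log2(m) bits, so 16 states yield 4 bits. A minimal illustration:

```python
import math

# Each pulse selects one of m distinguishable symbols, so it conveys
# log2(m) bits of information.
def bits_per_pulse(m_states: int) -> float:
    return math.log2(m_states)

# A binary (two-state) pulse carries 1 bit; a 16-state alphabet carries 4.
assert bits_per_pulse(2) == 1.0
assert bits_per_pulse(16) == 4.0
```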

To demonstrate that capability, the NIST researchers created an input of faint laser pulses comparable to a substantially attenuated conventional network signal, with the average number of photons per pulse from 0.5 to 20 (though photons are whole particles, a number less than one simply means that some pulses contain no photons).

After preparing this input signal, the NIST researchers take advantage of its wavelike properties, such as interference, until it finally hits the detector as photons (particles). In the realm of quantum physics, light can act as either particles (photons) or waves, with properties such as frequency and phase (the relative positions of the wave peaks).

Inside the receiver, the input signal's pulse train combines (interferes) with a separate, adjustable reference laser beam, which controls the frequency and phase of the combined light stream. It is extremely difficult to read the different encoded states in such a faint signal. So the NIST system is designed to measure the properties of the whole signal pulse by trying to match the properties of the reference laser to it exactly. The researchers achieve this through a series of successive measurements of the signal, each of which increases the probability of an accurate match.

That is done by adjusting the frequency and phase of the reference pulse so that it interferes destructively with the signal when they are combined at the beam splitter, canceling the signal out completely so no photons can be detected. In this scheme, shot noise is not a factor: Total cancellation has no uncertainty.

Thus, counterintuitively, a perfectly accurate measurement results in no photon reaching the detector. If the reference pulse has the wrong frequency, a photon can reach the detector. The receiver uses the time of that photon detection to predict the most probable signal frequency and adjusts the frequency of the reference pulse accordingly. If that prediction is still incorrect, the detection time of the next photon results in a more accurate prediction based on both photon detection times, and so on.

"Once the signal interacts with the reference beam, the probability of detecting a photon varies in time," Burenkov said, "and consequently the photon detection times contain information about the input state. We use that information to maximize the chance to guess correctly after the very first photon detection.

"Our communication protocol is designed to give different temporal profiles for different combinations of the signal and reference light. Then the detection time can be used to distinguish between the input states with some certainty. The certainty can be quite low at the beginning, but it is improved throughout the measurement. We want to switch the reference pulse to the right state after the very first photon detection because the signal contains just a few photons, and the longer we measure the signal with the correct reference, the better our confidence in the result is."
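The time-resolved guessing strategy described above can be sketched as a Bayesian update over candidate signal frequencies. The alphabet, photon-rate model, and numbers below are illustrative assumptions, not the NIST team's actual parameters or algorithm; the key idea is that a click at time t is most likely for frequency offsets whose beat note with the reference is near a maximum at that instant, while a perfect match (complete destructive interference) produces no clicks at all.

```python
import numpy as np

CANDIDATE_OFFSETS = np.array([0.0, 1.0, 2.0, 3.0])  # MHz; hypothetical alphabet
NBAR = 5.0  # assumed mean photon number per pulse

def detection_rate(df_mhz, t_us):
    """Click rate when signal and reference beat at offset df: interference
    is fully destructive (no clicks) at df = 0 and oscillates otherwise."""
    return NBAR * np.sin(np.pi * df_mhz * t_us) ** 2

def update_posterior(prior, t_click_us, ref_offset_mhz):
    """Bayes rule: weight each candidate by how likely a click at t_click
    would be if that candidate were the true signal frequency."""
    likelihood = detection_rate(CANDIDATE_OFFSETS - ref_offset_mhz, t_click_us)
    post = prior * likelihood
    return post / post.sum()

# Flat prior; reference initially tuned to candidate 0. A click at t = 0.5 us
# rules out a perfect match (offset 0) and favors the odd-offset candidates.
posterior = np.full(len(CANDIDATE_OFFSETS), 0.25)
posterior = update_posterior(posterior, t_click_us=0.5, ref_offset_mhz=0.0)

# Retune the reference to the most probable candidate and keep listening.
best_offset = CANDIDATE_OFFSETS[np.argmax(posterior)]
```

Each successive click narrows the posterior further, which is why, as Burenkov notes, detection times let the receiver reach the right reference setting with fewer wrong guesses.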

Polyakov discussed the possible applications. "The future exponential growth of the internet will require a paradigm shift in the technology behind communications," he said. "Quantum measurement could become this new technology. We demonstrated record low error rates with a new quantum receiver paired with the optimal encoding protocol. Our approach could significantly reduce energy for telecommunications."

Credit: 
National Institute of Standards and Technology (NIST)

The older the person, the higher the self-esteem: age differences in self-esteem in Japan

image: Predicted self-esteem scores across ages in Japan (2017 survey; Error bars represent 95% confidence intervals)

Image: 
Tokyo University of Science

Self-esteem does not remain constant through life, but changes as a person develops. A large number of studies conducted on this topic, mainly in the United States, have shown that self-esteem is high in childhood, declines in adolescence, but then continues to increase throughout adulthood, peaking in the 50s and 60s, and declining thereafter. Studies in Japan have also reported that self-liking, which is an aspect of self-esteem, follows a similar trajectory across different ages.

However, previous Japanese studies had two main limitations. First, they focused on self-liking, one element of self-esteem. Self-esteem is composed of self-liking (the affective judgment of oneself) and self-competence (the overall sense of oneself as capable and effective). It is important to comprehensively examine self-esteem, including simultaneous investigation of both the aspects of self-liking and self-competence, to clarify the developmental trajectory of self-esteem. Second, the studies did not sufficiently investigate age differences in self-esteem in elderly people aged 70 years and older.

Research has indicated that self-esteem does not decline in Japan up to 69 years of age, but it may decline thereafter. Furthermore, a decline in self-esteem itself may be absent in Japan. Reports have consistently demonstrated that levels of self-esteem vary across different cultures, but the differing tendencies of developmental trajectories have not been adequately reported. Thus, it is also necessary to examine the self-esteem of elderly people aged 70 years and older, to elucidate the developmental trajectory of self-esteem in Japan.

To address this gap, Assistant Professor Yuji Ogihara, from the Faculty of Science Division II, Tokyo University of Science, and Professor Takashi Kusumi, from the Graduate School of Education, Kyoto University, conducted a large-scale study comprehensively examining age differences in self-esteem from adolescence to old age, including both self-liking and self-competence, across a wider sample of people, including respondents aged 70 and older.

They analyzed six web-based surveys administered to a large and diverse sample of people in Japan from 2009 to 2018. The responses were obtained from 6113 persons (2996 males and 3117 females) between the ages of 16 and 88. Each survey measured self-esteem with the most widely used 10-item self-esteem scale. The scale includes items for measuring self-liking, such as "On the whole, I am satisfied with myself", and items for measuring self-competence, such as "I feel that I have a number of good qualities". The participants scored each item on a scale of one to five, from "1: Not applicable" to "5: Applicable".
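Scales of this kind are typically scored by summing the ten ratings after flipping negatively worded items so that higher totals always mean higher self-esteem. The sketch below illustrates that convention; which items are reverse-keyed here is a hypothetical assumption, not the actual keying used in these surveys.

```python
# Illustrative scoring for a 10-item self-esteem scale rated 1-5.
# The set of reverse-keyed items is an assumption for this sketch.
REVERSE_KEYED = {3, 5, 8, 9, 10}

def total_score(responses: dict) -> int:
    """responses maps item number (1-10) to a rating in 1..5.
    Reverse-keyed items are flipped (1<->5, 2<->4) before summing,
    so higher totals always indicate higher self-esteem (range 10-50)."""
    if set(responses) != set(range(1, 11)):
        raise ValueError("expected ratings for items 1-10")
    total = 0
    for item, rating in responses.items():
        if not 1 <= rating <= 5:
            raise ValueError(f"rating out of range for item {item}")
        total += (6 - rating) if item in REVERSE_KEYED else rating
    return total
```

For example, a respondent who answers "3: Neutral" to every item scores 30, the midpoint of the 10-50 range.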

The results showed that self-esteem is low in adolescence but increases gradually from adulthood to old age (see Figure 1). The changes from adolescence to middle age were consistent with findings from previous research in Europe and the United States, but, unlike in previous studies, there was no decline in self-esteem from the 50s onwards. Therefore, the findings in this research suggest that the developmental trajectory of self-esteem may differ across cultures.

"Previous research has suggested that one of the causes of the decline in self-esteem after middle age in Europe and the United States is that elderly people come to accept their limitations and faults, leading them to have a more humble, modest, and balanced view of themselves. On the other hand, reports have shown that people in Japan have a humbler view of themselves even before middle age. This may be the reason for the lack of decline in self-esteem in this study," suggests Dr. Ogihara. Other factors that may generate cultural differences, including the seniority system and the culture of respect for the aged, require further detailed examination.

Generational (cohort) effects may also be obscuring lower self-esteem after middle age in Japan. Therefore, further investigation is needed to separate developmental changes from generational differences, such as by conducting a longitudinal survey that tracks people of the same generation. Because the sample of participants in their 80s was small, further work is also required to collect and analyze more data and verify that similar results can be obtained.

"Examining the age differences and developmental trajectories of self-esteem is not only academically and theoretically significant, as described above, it also has practical and social significance," explains Dr. Ogihara. "For example, understanding when self-esteem tends to be low can help determine when the adoption of effective preventive measures is more necessary, and allow for timely intervention and response."

This study has elucidated the age differences in self-esteem--one of the most basic psychological tendencies. Thus, Ogihara and Kusumi hope that these findings can contribute not only to related academic research in various fields, but also more broadly to clinical and general practice, including prevention and intervention.

Credit: 
Tokyo University of Science

Post-Tropical Storm Teddy over Newfoundland in NASA nighttime view

image: NASA-NOAA's Suomi NPP satellite provided a nighttime view of Post-Tropical Cyclone Teddy over Newfoundland, Canada at 1:40 a.m. EDT (0540 UTC) on Sept. 24. The nighttime lights of Newfoundland can be seen somewhat through Teddy's clouds, and the nighttime lights of Nova Scotia were visible, revealing that Teddy had moved past the province.

Image: 
NASA Worldview, Earth Observing System Data and Information System (EOSDIS)

NASA-NOAA's Suomi NPP satellite provided an infrared image of Post-tropical cyclone Teddy over the province of Newfoundland, Canada in the early morning hours of Sept. 24.

Teddy's Last Advisory

At 11 p.m. EDT on Sept. 23 (0300 UTC on Sept. 24), NOAA's National Hurricane Center (NHC) issued the final advisory on Post-Tropical Cyclone Teddy. At that time, the center of Post-Tropical Cyclone Teddy was located near latitude 51.0 degrees north and longitude 57.3 degrees west based on the Marble Mountain, Newfoundland, radar and surface observations along the west coast of Newfoundland. The post-tropical cyclone was moving toward the north-northeast near 32 mph (52 kph), and this general motion was expected to continue through Thursday. Maximum sustained winds were near 50 mph (85 kph) with higher gusts. The estimated minimum central pressure was 975 millibars.

The center of Teddy moved closer to the northwestern Newfoundland coast overnight.

NASA's Night-Time View  

The Visible Infrared Imaging Radiometer Suite (VIIRS) instrument aboard Suomi NPP provided a nighttime image of Post-Tropical Storm Teddy over Newfoundland, Canada. The image was taken at 1:40 a.m. EDT (0540 UTC) on Sept. 24. The nighttime lights of Newfoundland can be seen somewhat through Teddy's clouds, and the nighttime lights of Nova Scotia were visible, revealing that Teddy had moved past the province.

The image was created using the NASA Worldview application at NASA's Goddard Space Flight Center in Greenbelt, Md.

Teddy's Final Fate

On the forecast track, Teddy is expected to move into the Labrador Sea today, Sept. 24, before merging with a larger extratropical low-pressure area.

Credit: 
NASA/Goddard Space Flight Center

Opening an autophagy window as the apoptosis door starts to close

image: The conjugate of Trastuzumab, a monoclonal antibody against HER2 expressed in various malignant tumors, and methylated β-cyclodextrin-threaded polyrotaxane (Me-PRX) is designed for specific delivery of Me-PRX in HER2-expressing tumor cells. Because Me-PRX is known to induce endoplasmic reticulum stress-related autophagic cell death, Trastuzumab-Me-PRX conjugates are regarded as a new class of antibody-drug conjugates that would contribute to the chemotherapy of cancers through the induction of autophagic cell death.

Image: 
Department of Organic Biomaterials,TMDU

Tokyo - Many people across the globe are working hard to get the better of cancer; however, cancer is always working, too. Cancer cells can become resistant to the methods that have been adopted to kill them, so identifying drugs that act in different ways is part of the push to outsmart this ubiquitous disease. Researchers at Tokyo Medical and Dental University (TMDU) have engineered a material that can identify cancer cells and mount an attack that they are not yet ready for. Their findings are published in the Journal of Materials Chemistry B.

Most of the anti-tumor drugs available work by inducing a cell death process known as apoptosis, and unfortunately cancer cells are developing resistance to drugs that work by this mechanism. However, other modes of cell death are known and focusing on alternative mechanisms is one way for researchers to stay a step ahead.

The TMDU researchers previously reported a supermolecule (Me-PRX) that acts as a drug by inducing autophagic cell death. The structure of Me-PRX can be thought of as lots of rings threaded onto a piece of string. These rings are kept on the string by adding stoppers to each end. However, the stoppers are specifically designed so that the rings are released at the acidic pH inside the cell. The release of the rings inside the cell causes endoplasmic reticulum stress, which leads to autophagic cell death.

Me-PRX has significant potential, but until now lacked a way of targeting the specific cancer cells requiring treatment. It was also too small to stay in the bloodstream for extended periods. In their recent work, the researchers have attached their supramolecular drug to the Trastuzumab antibody, which is able to recognize the HER2 protein that is overexpressed by many tumor cells.

The size increase brought about by forming this antibody-drug conjugate (ADC) also stops it being filtered out by the kidneys, giving it more time to act. And being larger helps the ADC to be passively taken up by tumors.

"We compared our Trastuzumab-modified Me-PRX (Tras-Me-PRX) with Me-PRX alone and Me-PRX modified with an antibody unable to target HER2-overexpressing cancer cells," study lead author Atsushi Tamura explains. "And we found that the binding of Trastuzumab to HER2 played an important role in getting Me-PRX into cells."

The researchers also found that lower concentrations of Tras-Me-PRX than of Me-PRX alone were needed to reduce the number of live cells. This shows that the targeted delivery of Tras-Me-PRX led to enhanced autophagic cell death.

"Our demonstration of targeted delivery and cell death using an ADC is a significant step towards an autophagic cell death drug," study corresponding author Nobuhiko Yui explains. "Many ADCs have been approved for clinical use; therefore we are hopeful that future in vivo investigations with our material will ultimately lead to a tangible option in the clinic."

Credit: 
Tokyo Medical and Dental University

Bristol scientists shine light on tiny crystals behind unexpected violent eruptions

image: Nanolite 'snow' surrounding an iron oxide microlite 'Christmas tree'. Even these small 50 nm spheres are actually made up of even smaller nanolites aggregated into clumps. Christmas has come early this year for these researchers.

Image: 
Brooker/Griffiths/Heard/Cherns

In a new study of volcanic processes, Bristol scientists have demonstrated the role nanolites play in the creation of violent eruptions at otherwise 'calm' and predictable volcanoes.

The study, published in Science Advances, describes how nano-sized crystals (nanolites), 10,000 times smaller than the width of a human hair, can have a significant impact on the viscosity of erupting magma, resulting in previously unexplained and explosive eruptions.

"This discovery provides an elegant explanation for violent eruptions at volcanoes that are generally well behaved but occasionally present us with a deadly surprise, such as the 122 BC eruption of Mount Etna," said Dr Danilo Di Genova from the University of Bristol's School of Earth Sciences.

"Volcanoes with low silica magma compositions have very low viscosity, which usually allows the gas to gently escape. However, we've shown that nanolites can increase the viscosity for a limited time, which would trap gas in the sticky liquid, leading to a sudden switch in behaviour that was previously difficult to explain."

Dr Richard Brooker also from Earth Sciences, said: "We demonstrated the surprising effect of nanolites on magma viscosity, and thereby volcanic eruptions, using cutting-edge nano-imaging and Raman spectroscopy to hunt for evidence of these almost invisible particles in ash erupted during very violent eruptions."

"The next stage was to re-melt these rocks in the laboratory and recreate the correct cooling rate to produce nanolites in the molten magma. Using the scattering of extremely bright synchrotron source radiation (10 billion times brighter than the sun) we were able to document nanolite growth."

"We then produced a nanolite-bearing basaltic foam (pumice) under laboratory conditions, also demonstrating how these nanolites can be produced by undercooling as volatiles are exsolved from magma, lowering the liquidus."

Professor Heidy Mader added: "By conducting new experiments on analogue synthetic materials, at low shear rates relative to volcanic systems, we were able to demonstrate the possibility of extreme viscosities for nanolite-bearing magma, extending our understanding of the unusual (non-Newtonian) behaviour of nanofluids, which have remained enigmatic since the term was coined 25 years ago."

The next stage for this research is to model this dangerous, unpredictable volcanic behaviour in actual volcanic situations. This is the focus of a Natural Environment Research Council (UK) and National Science Foundation (US) grant 'Quantifying Disequilibrium Processes in Basaltic Volcanism' awarded to Bristol and a consortium of colleagues in Manchester, Durham, Cambridge and Arizona State University.

Credit: 
University of Bristol

Placing barthelonids on the tree of life

Tsukuba, Japan - New species of microbial life are continually being identified, but localizing them on a phylogenetic tree is a challenge. Now, researchers at the University of Tsukuba have pinpointed barthelonids, a genus of free-living heterotrophic biflagellates typified by Barthelona vulgaris, and clarified their ancestry as well as evolution of their ATP-generation mechanisms.

A phylogenetic tree portrays species by lineage. The trunk represents a common ancestor and the branches all its evolutionary descendants; together, a monophyletic group or clade. The eukaryotic Tree of Life represents the phylogeny of all organisms with nucleated cells, ranging from unicellular protists to blue whales. Where would the barthelonids fit?

The researchers established five strains of Barthelona species from different parts of the world. Analysis of the transcriptome of one strain (PAP020), its RNA "signature," localized it on the phylogenetic tree to the base of the Fornicata clade. This indicated that the last common ancestor of the barthelonids evolutionarily diverged very early in the evolution of Metamonada.

Senior author Professor Yuji Inagaki explains: "We analyzed small subunit ribosomal DNA as well as phylogenomic data to confirm the commonality of all Barthelona strains. In order to deduce their phylogenetic position, we matched transcriptome data from PAP020 against a eukaryote-wide dataset containing 148 genes."

The transcriptome data of PAP020 also indicated the evolutionarily adapted metabolic pathways of ATP generation. The research team suspected that barthelonids, being anaerobic, possessed mitochondrion-related organelles (MROs) instead of full-fledged mitochondria, a suspicion upheld by electron microscopy. Comparison with MROs in fornicates predicted that PAP020 could not generate ATP in the MRO, as no mitochondrial/MRO enzymes involved in substrate-level phosphorylation were detected. However, PAP020 possesses a cytosolic ATP-generating enzyme, acetyl-CoA synthetase (ACS), suggesting that PAP020 generates ATP in the cytosol.

"We have furthered current hypotheses around the evolutionary history of ATP-generating mechanisms in the Fornicata clade in light of data from Barthelona strain PAP020," says Professor Inagaki. "Interestingly, the sequence ACS2 was formerly believed to be acquired at the base of the Fornicata clade, but we propose that this event occurred earlier, with the common ancestor of fornicates and barthelonids. Indeed, it may have occurred further back, with the last common metamonad ancestor. Loss of substrate-level phosphorylation from the MRO in barthelonids and in other fornicates could well be two discrete events."

Credit: 
University of Tsukuba

Uncovering new understanding of Earth's carbon cycle

image: Diamonds from Kankan, Guinea, analyzed in this study. The imperfections inside the diamond are small inclusions of a mineral called ferropericlase, which is from the lower mantle.

Image: 
Anetta Banas

A new study led by a University of Alberta PhD student--and published in Nature--is examining the Earth's carbon cycle in new depth, using diamonds as breadcrumbs of insight into some of Earth's deepest geologic mechanisms.

"Geologists have recently come to the realization that some of the largest, most valuable diamonds are from the deepest portions of our planet," said Margo Regier, PhD student in the Department of Earth and Atmospheric Sciences under the supervision of Graham Pearson and Thomas Stachel. "While we are not yet certain why diamonds can grow to larger sizes at these depths, we propose a model where these 'superdeep' diamonds crystallize from carbon-rich magmas, which may be critical for them to grow to their large sizes."

Beyond their beauty and industrial applications, diamonds provide unique windows into the deep Earth, allowing scientists to examine the transport of carbon through the mantle.

"The vast majority of Earth's carbon is actually stored in its silicate mantle, not in the atmosphere," Regier explained. "If we are to fully understand Earth's whole carbon cycle then we need to understand this vast reservoir of carbon deep underground."

The study revealed that the carbon-rich oceanic crust that sinks into the deep mantle releases most of its carbon before it gets to the deepest portion of the mantle. This means that most carbon is recycled back to the surface and only small amounts of carbon will be stored in the deep mantle--with significant implications for how scientists understand the Earth's carbon cycle. The mechanism is important to understand for a number of reasons, as Regier explained.

"The movement of carbon between the surface and mantle affects Earth's climate, the composition of its atmosphere, and the production of magma from volcanoes," said Regier. "We do not yet understand if this carbon cycle has changed over time, nor do we know how much carbon is stored in the deepest parts of our planet. If we want to understand why our planet has evolved into the habitable state it is in today, and how the surfaces and atmospheres of other planets may be shaped by their interior processes, we need to better understand these variables."

Credit: 
University of Alberta

Feeling frisky makes you see what you want to see

There’s something to rose-tinted glasses after all.

A group of psychologists at the University of Rochester and the Israel-based Interdisciplinary Center (IDC) Herzliya discovered that we see possible romantic partners as a lot more attractive if we have what the scientists call “a sexy mindset.” Under the same condition we also tend to overestimate our own chances of romantic success.

The researchers examined what would happen if a person’s sexual system is activated—think “feeling frisky”—by exposing test subjects to brief sexual cues that induced a “sexy mindset.” Such a mindset, the team found, reduced a person’s concerns about being rejected, while simultaneously inducing a sense of urgency to start a romantic relationship.

The US-Israeli team noticed that people often have overly optimistic views when it comes to a potential partner and their own chances of landing a date. Their latest research, published in the Journal of Social and Personal Relationships, sought to explain the biased perception. It’s precisely this bias, the team concluded, that may provide people with the necessary confidence to worry less about rejection and instead motivate them to take a leap of faith to pursue a desired romantic relationship without hesitation.

“People are more likely to desire potential partners and to project their desires onto them when sexually aroused,” says lead author Gurit Birnbaum, a social psychologist and associate professor of psychology at the IDC. “Our findings suggest that the sexual system prepares the ground for forming relationships by biasing interpersonal perceptions in a way that motivates human beings to connect. Clearly the sexual system does so by inspiring interest in potential partners, which, in turn, biases the perceptions of a potential partner’s interest in oneself.”

Evolutionary principles at play

Having evolved over millennia, the sexual behavioral system of humans ensures reproduction and survival of the species by arousing sexual urges that motivate us to pursue mates. Success depends on targeting the right potential partners who are not only perceived as desirable but also as likely to reciprocate our advances. In previous studies the researchers found that people often refrain from courting desirable possible partners because they fear rejection.

“Forming stable sexual relationships had, and continues to have, a great deal of evolutionary significance,” says study coauthor Harry Reis, a professor of psychology and the Dean’s Professor in Arts, Sciences & Engineering at Rochester.

“If people anticipate that a partner shares their attraction, it is that much easier to initiate contact, because the fear of rejection is lessened,” says Reis. “One of the main purposes of sexual attraction is to motivate people to initiate relationships with potentially valuable, and valued, partners.”

Testing the effects of a sexy mindset

Across three experiments the team discovered that sexual activation helps people initiate relationships by inducing them to project their own desires onto prospective partners. In other words—you see what you want to see if you’ve been sexually primed.

To test the effects of a sexy mindset, the team exposed participants across three separate studies either to sexual (but not pornographic) stimuli or to neutral stimuli. Next, the participants encountered a potential partner and rated this partner’s attractiveness and romantic interest in them. Participants’ interest in the partner was self-reported or evaluated by raters.

In the first study, 112 heterosexual participants, aged 20 to 32, who were not in a romantic relationship, were randomly paired with an unacquainted participant of the other sex. First, participants introduced themselves to each other by talking about their hobbies, positive traits, and future career plans while being videotaped. Then the team coded the videotaped introductions for nonverbal expressions of so-called immediacy behavior—such as close physical proximity, frequent eye contact, and flashing smiles—that indicates interest in initiating romantic relationships. They discovered that those participants exposed to a sexual stimulus (versus those exposed to the neutral stimulus) exhibited more immediacy behaviors toward potential partners and perceived the partners as more attractive and interested in them.

The second study controlled for the potential partner's attractiveness and reactions: here, 150 heterosexual participants, aged 19 to 30, who were not in a romantic relationship, all watched the same prerecorded video introduction of a potential partner of the other sex and then introduced themselves to the partner while being videotaped. The team coded the videotapes for attempts to induce a favorable impression. Just as in the first study, the researchers found that activation of the sexual system led participants to perceive potential partners as more attractive as well as more interested in a romantic relationship.

In the third study, the team investigated whether a participant’s romantic interest in the other participant might explain why sexual activation affects perceptions of others’ romantic interest in oneself. Here, 120 heterosexual participants, aged 21 to 31, who were not in a romantic relationship, interacted online with another participant, who in reality was an attractive opposite-sex member of the research team, in a get-to-know-each-other conversation. The participants rated their romantic interest in the other person as well as that person’s attractiveness and interest in them. They found again that sexual activation increased a participant’s romantic interest in the other participant, which, in turn, predicted perceiving the other as more interested in oneself. Having active sexual thoughts apparently arouses romantic interest in a prospective partner and encourages the adoption of an optimistic outlook on courting prospects with a partner, concluded the researchers.

“Sexual feelings do more than just motivate us to seek out partners. They also lead us to project our feelings onto the other person,” says Reis. “One important finding of the study is that the sexual feelings need not come from the other person; they can be aroused in any number of ways that have nothing to do with the other person.”

Yet, there’s also the obvious possible pitfall: when sexual feelings are present, people tend to assume that the other person shares their attraction, whether warranted or not, notes Reis. “Or you end up kissing a lot of frogs,” adds Birnbaum, “because a sexy mood makes you mistake them for princes.”

Birnbaum and Reis have spent the last few decades studying the dynamics of human sexual attraction. In a 2019 study, the duo found that when people feel greater certainty that a prospective romantic partner reciprocates their interest, they will put more effort into seeing that person again. Furthermore, people will rate the possible date as more sexually attractive than they would if they were less certain about the prospective date’s romantic intentions.

Credit: 
University of Rochester

Caregiving factors may affect hospitalization risk among disabled older adults

Few studies have investigated the potential impact of caregivers and caregiver factors on older adults' likelihood of being hospitalized. A recent study published in the Journal of the American Geriatrics Society has now provided some insights.

The study included 2,589 community-living Medicare fee-for-service beneficiaries aged 65 years and older who were disabled and were receiving help from family members or other unpaid caregivers. Thirty-eight percent of the older adults were hospitalized within one year after being interviewed.

Older adults had an increased risk of hospitalization if they had a primary caregiver who helped with healthcare tasks, reported physical strain, and provided more than 40 hours of care weekly. Having a caregiver who had helped for at least 4 years was associated with a lower risk of hospitalization. These caregiving factors were associated with hospitalization risk regardless of whether older adults had dementia.

"This study brings attention to key individuals often overlooked when thinking about how to prevent hospitalization in vulnerable groups of older adults: caregivers," said lead author Halima Amjad, MD, MPH, of Johns Hopkins University. "Policies or interventions that target aspects of caregiving we identified in this study should be explored as strategies to reduce risk of hospitalization in older adults living with disability."

Credit: 
Wiley

A fresh sense of possibility

image: The system has the resilience to withstand very harsh conditions, such as extreme temperatures, high salinity, varying pressure, intense radiation, reactive chemicals and/or high humidity.

Image: 
© 2020 KAUST

Harsh environments that are inhospitable to existing technologies could now be monitored using sensors based on graphene. An intriguing form of carbon, graphene comprises layers of interconnected hexagonal rings of carbon atoms, a structure that yields unique electronic and physical properties with possibilities for many applications.

"Graphene has been projected as a miracle material for years now, but its application in harsh environmental conditions was unexplored," says Sohail Shaikh, who has developed the new sensors, together with KAUST's Muhammad Hussain.

"Existing sensor technologies operate in a very limited range of environmental conditions, failing or becoming unreliable if there is much deviation," Shaikh adds.

The new robust sensor relies on changes in the electrical resistance of graphene in response to varying temperature, salinity and the acidity of a solution measured as pH. The system has potential to monitor additional variables, including pressure and water flow rates.
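
As a rough illustration of how such a resistive sensor could be read out, the sketch below inverts a simple linear resistance-temperature model. This is a hypothetical calibration, not the KAUST team's method; the reference resistance `r0`, temperature coefficient `alpha`, and reference temperature `t_ref` are made-up values for demonstration.

```python
# Minimal sketch (assumed linear model R = R0 * (1 + alpha * (T - T_ref))):
# estimate temperature from a graphene sensor's measured resistance.
# All calibration constants below are illustrative, not from the study.

def temperature_from_resistance(r_measured, r0=100.0, alpha=0.005, t_ref=25.0):
    """Invert the linear resistance-temperature model to estimate T in °C."""
    return t_ref + (r_measured / r0 - 1.0) / alpha

# Example: a 10% rise in resistance over the reference value
# corresponds to a 20 °C rise above t_ref under this model.
estimated_t = temperature_from_resistance(110.0)
```

In practice each variable (temperature, salinity, pH) would need its own calibration curve, and a multi-variable device would combine several such readouts.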

The researchers point out that sensing for multiple variables can be incorporated into a single device, greatly increasing its usefulness.

The graphene is transferred onto a flexible sheet of polyimide polymer, and it can be connected to appropriate electronic systems to collect and transmit the signals for whichever environmental variable is being monitored. The data could be transmitted wirelessly using standard Bluetooth technology.

The greatest practical advance is the system's resilience, which allows it to tolerate temperatures as high as 650 degrees Celsius, high salinity, varying pressure, intense radiation, reactive chemicals, high humidity or any combination of these conditions. The sensors can also offer advantages in sensitivity, for example achieving a 260 percent increase in temperature-sensing sensitivity relative to an existing alternative.

As Hussain explains: "Our study is the first to show decisively the prospects of graphene as a sensing material for a variety of harsh environmental conditions."

Likely real-world applications include monitoring conditions in ocean water, body fluids, the oil and gas industry, space exploration, and many situations involving exposure to chemicals that would damage existing sensors.

The sensor's thin structure and flexibility also make it suitable for use in wearable technologies for divers and athletes, or in medical applications.

The researchers believe that continual developments linking electronic devices with Internet of Things (IoT) and Internet of Everything (IoE) technologies will bring many needs and opportunities for their robust and flexible sensors.

Credit: 
King Abdullah University of Science & Technology (KAUST)

Forest margins may be more resilient to climate change than previously thought

image: Dry forest margins in the western United States may be more resilient to climate change than previously thought if managed appropriately, according to Penn State researchers. The researchers studied forest regeneration at four sites that had experienced wildfires in the eastern Sierra Nevada Mountains in California. The sites sit at the forest margin, a drier area where forest meets sagebrush grassland and which may be the most vulnerable to climate change-driven forest loss.

Image: 
Lucas Harris / Penn State

A warming climate and more frequent wildfires do not necessarily mean the western United States will see the forest loss that many scientists expect. Dry forest margins may be more resilient to climate change than previously thought if managed appropriately, according to Penn State researchers.

"The basic narrative is it's just a matter of time before we lose these dry, low elevation forests," said Lucas Harris, a postdoctoral scholar who worked on the project as part of his doctoral dissertation. "There's increasing evidence that once disturbances like drought or wildfire remove the canopy and shrub cover in these dry forests, the trees have trouble coming back. On the other hand, there's growing evidence that there's a lot of spatial variability in how resilient these forests are to disturbances and climate change."

The researchers studied forest regeneration at four sites that had experienced wildfires in the eastern Sierra Nevada Mountains in California. The sites sit at the forest margin, a drier area where forest meets sagebrush grassland. These dry forest margins may be the most vulnerable to climate change-driven forest loss, according to the researchers.

Large fires in the area tend to consume the forest starting from the steppe margin and sweeping up the mountain, said Alan Taylor, professor of geography and ecology, who has worked in the area for decades.

"You wouldn't see forest anymore over 10 or 20 years, and it seemed like the lower forest margin was getting pushed way up in elevation because it's so dry near the sagebrush boundary," Taylor said. "My research group wanted to look at this in detail because no one had actually done it."

Harris and Taylor's research team measured tree diameters and litter depth, counted the number of seedlings and saplings and identified tree species at the research sites. They also quantified fire severity, the amount of moisture available for plant growth and water deficit, an indicator of drought intensity. They then fed the data into five models to see how the probability for tree regeneration varied based on fire severity, climate and location, and remaining vegetation and canopy cover. They report their results today (Sept. 21) in Ecosphere.
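
The kind of model described above can be sketched as a logistic regression linking site conditions to the probability of regeneration. The predictors and coefficients below are purely illustrative, not the authors' fitted models; they simply show the form such a model takes, with higher fire severity and greater water deficit lowering the odds of regeneration.

```python
import math

# Illustrative sketch of a logistic regeneration model (made-up coefficients,
# not the fitted models from the Ecosphere study).

def regeneration_probability(fire_severity, water_deficit,
                             b0=1.0, b_fire=-1.5, b_deficit=-0.8):
    """Logistic link: P(regeneration) given scaled severity and deficit.

    Negative coefficients encode the assumption that severe fire and
    drought stress both reduce the chance that seedlings establish.
    """
    z = b0 + b_fire * fire_severity + b_deficit * water_deficit
    return 1.0 / (1.0 + math.exp(-z))

low_stress = regeneration_probability(0.2, 0.3)   # mild fire, moist site
high_stress = regeneration_probability(1.0, 1.0)  # severe fire, dry site
```

Under this toy parameterization the mild-fire, moist-site plot comes out far more likely to regenerate than the severely burned, dry plot, mirroring the study's finding that regeneration tracked surviving mature trees and available moisture.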

The researchers found that 50% of the plots at the sites showed signs of tree regeneration, and water balance projections through the end of the current century indicate that there will be enough moisture available to support tree seedlings. The key is to prevent severe fire disturbances through proper management, according to the researchers, because tree regeneration was strongly associated with mature trees that survived fires.

"In these marginal or dry forest areas, management approaches like prescribed burning or fuel treatments that thin the forest can prevent the severe fires that would push this ecosystem to a non-forest condition," said Taylor, who also holds an appointment in the Earth and Environmental Systems Institute. "The study suggests that these low-severity disturbances could actually create very resilient conditions in places where most people have been suggesting that we'll see forest loss."

The researchers also noticed a shift in tree composition from fire-resistant yellow pines to less fire-resistant but more drought-resistant species like pinyon pine. They attributed the shift to drying and fire exclusion policies in effect over the last century.

"The shift could be beneficial if the species moving in is better suited to present and near-future climates," said Harris. "However, it could be dangerous if a bunch of fire-sensitive species move into a place and then it all burns up. Many trees would die, and we could see lasting forest loss."

California's climate is projected to warm, but many climate models also forecast an average increase in winter precipitation, especially in the northern part of the state and in the mountains, continued Harris.

"On the one hand, you have greater drought intensity for sure, but also you're going to have these wetter periods where there's more moisture available for tree growth in the spring and maybe into the early summer," he said. "So if the trees are able to survive that drought stress and take advantage of the additional moisture present in some years, they might be able to maintain or even expand their distribution."

This forest system is important for recreation, carbon storage, biodiversity and wildlife habitat, said Taylor. It also comprises part of the western side of the Great Basin, the largest area of contiguous watersheds in North America that do not drain to an ocean.

"There's not much forest in the Great Basin, which is a huge area of sagebrush grassland in Utah, Idaho, Oregon, Nevada and Arizona," Taylor said. "So the forests of the eastern Sierra Nevada Mountains represent a significant component of the forest found in that system."

Credit: 
Penn State

Perspective on employment rates after spinal cord injury - 30 years after the ADA

image: Dr. O'Neill is director of the Center for Employment and Disability Research at Kessler Foundation, and co-author of National Trends in Disability Employment, a semi-monthly report issued by Kessler Foundation and the University of New Hampshire.

Image: 
Kessler Foundation

East Hanover, NJ. September 22, 2020. A team of experts in disability employment summarized advances in outcomes being achieved in individuals recovering from spinal cord injury. Their article, "30 Years after the Americans with Disabilities Act: Perspectives on employment for persons with spinal cord injury," (DOI: 10.1016/j.pmr.2020.04.007), was published online on June 7, 2020 in Physical Medicine and Rehabilitation Clinics of North America.

Authored by Lisa Ottomanelli, PhD, Lance Goetz, PhD, John O'Neill, PhD, Eric Lauer, MPH, PhD, and Trevor Dyson-Hudson, MD, the article frames challenges and opportunities for individuals with spinal cord injury in the workplace, as the nation marks three decades influenced by the protections ensured by the ADA.

Link to abstract: https://pubmed.ncbi.nlm.nih.gov/32624107/

Thirty years after the passage of the ADA, planning for return to work is often a low priority during rehabilitation for spinal cord injury, and employment rates remain low for this population. The authors report their analysis based on the Medical Expenditure Panel Survey, comparing the characteristics of working age people with spinal cord injury/dysfunction, and their experiences with employment and health care, with those of the overall working age population. Despite the positive impact of employment on many domains, workers with SCI/D experience significantly more issues related to health and medical care, according to John O'Neill, PhD, director of the Center for Employment and Disability Research at Kessler Foundation.

New models for vocational rehabilitation that address these issues are returning more people to the workplace after spinal cord injury. "Compared with traditional vocational rehabilitation, the comprehensive services offered through individual placement and support (IPS) are helping more people achieve competitive employment," said Dr. O'Neill. "Integrating vocational services into spinal cord injury rehabilitation enlists the talents of the treatment team in the fulfillment of the individual's employment goals."

Another promising approach is vocational resource facilitation (VRF), an early intervention model implemented at Kessler Institute for Rehabilitation with funding from the Craig H. Neilsen Foundation. A dedicated vocational resource facilitator works with the treatment team to support newly injured individuals with their plans to return to work, coordinates services, and provides follow-up after discharge. Since this publication, the employment rate at one year after discharge for traumatic spinal cord injury has increased from 34% to 43%, significantly exceeding national one-year post injury benchmarks ranging from 12% to 21%.

The authors emphasize that vocational rehabilitation services, when delivered soon after injury and integrated into the medical rehabilitation plan, contribute to better employment outcomes. "Implementing evidence-based practices during rehabilitation is an important step toward fulfilling the promises of the ADA for people with spinal cord injury," Dr. O'Neill concluded.

Credit: 
Kessler Foundation

Thin and ultra-fast photodetector sees the full spectrum

image: Close-up photo of the photodetectors.

Image: 
RMIT University

Researchers have developed the world's first photodetector that can see all shades of light, in a prototype device that radically shrinks one of the most fundamental elements of modern technology.

Photodetectors work by converting information carried by light into an electrical signal and are used in a wide range of technologies, from gaming consoles to fibre optic communication, medical imaging and motion detectors.

Currently, photodetectors cannot sense more than one colour in a single device.

This means they have remained bigger and slower than other technologies, like the silicon chip, that they integrate with.

The new hyper-efficient broadband photodetector developed by researchers at RMIT University is at least 1,000 times thinner than the smallest commercially available photodetector device.

In a significant leap for the technology, the prototype device can also see all shades of light between ultraviolet and near infrared, opening new opportunities to integrate electrical and optical components on the same chip.

*New possibilities*

The breakthrough technology opens the door for improved biomedical imaging, advancing early detection of health issues like cancer.

Study lead author, PhD researcher Vaishnavi Krishnamurthi, said in photodetection technologies, making a material thinner usually came at the expense of performance.

"But we managed to engineer a device that packs a powerful punch, despite being thinner than a nanometre, which is roughly a million times smaller than the width of a pinhead," she said.

As well as shrinking medical imaging equipment, the ultra-thin prototype opens possibilities for more effective motion detectors, low-light imaging and potentially faster fibre optical communication.

"Smaller photodetectors in biomedical imaging equipment could lead to more accurate targeting of cancer cells during radiation therapy," Krishnamurthi said.

"Shrinking the technology could also help deliver smaller, portable medical imaging systems that could be brought into remote areas with ease, compared to the bulky equipment we have today."

*Lighting up the spectrum*

How versatile and useful photodetectors are depends largely on three factors: their operating speed, their sensitivity to lower levels of light and how much of the spectrum they can sense.

Typically, when engineers have tried to improve a photodetector's capabilities in one of those areas, at least one of the other capabilities has been diminished.

Current photodetector technology relies on a stacked structure of three to four layers.

Imagine a sandwich, where you have bread, butter, cheese and another layer of bread - regardless of how good you are at squashing that sandwich, it will always be four layers thick, and if you remove a layer, you'd compromise the quality.

The researchers from RMIT's School of Engineering scrapped the stacked model and worked out how to use a nanothin layer - just a single atom thick - on a chip.

Importantly, they did this without diminishing the photodetector's speed, low-light sensitivity or visibility of the spectrum.

The prototype device can interpret light ranging from deep ultraviolet to near infrared wavelengths, making it sensitive to a broader spectrum than a human eye.

And it does this over 10,000 times faster than the blink of an eye.

*Nano-thin technology*

A major challenge for the team was ensuring electronic and optical properties didn't deteriorate when the photodetector was shrunk, a technological bottleneck that had previously prevented miniaturisation of light detection technologies.

Chief investigator Associate Professor Sumeet Walia said the material used, tin monosulfide, is low-cost and naturally abundant, making it attractive for electronics and optoelectronics.

"The material allows the device to be extremely sensitive in low-lighting conditions, making it suitable for low-light photography across a wide light spectrum," he said.

Walia said his team is now looking at industry applications for their photodetector, which can be integrated with existing technologies such as CMOS chips.

"With further development, we could be looking at applications including more effective motion detection in security cameras at night and faster, more efficient data storage," he said.

Credit: 
RMIT University

Cincinnati Children's scientists identify hormone that might help treat malabsorption

image: This is an image of a human intestinal organoid with enteroendocrine cells (red) embedded within the intestinal cells of the HIO (green).

Image: 
Heather A. McCauley, PhD/Cincinnati Children's Hospital Medical Center

Scientists at Cincinnati Children's used human intestinal organoids grown from stem cells to discover how our bodies control the absorption of nutrients from the food we eat. They further found that one hormone might be able to reverse a congenital disorder in babies who cannot adequately absorb nutrients and need intravenous feeding to survive.

Heather A. McCauley, PhD, a research associate at Cincinnati Children's Hospital Medical Center, found that the hormone peptide YY, also called PYY, can reverse congenital malabsorption in mice. With a single PYY injection per day, 80% of the mice survived. Normally, only 20% to 30% survive.

This indicates PYY might be a possible therapeutic for people with severe malabsorption.

Poor absorption of macronutrients is a global health concern, underlying ailments such as malnutrition, intestinal infections and short-gut syndrome. So, identification of factors regulating nutrient absorption has significant therapeutic potential, the researchers noted.

McCauley was lead author of a manuscript published Sept. 22 in Nature Communications, which reported that the absorption of nutrients - in particular, carbohydrates and proteins - is controlled by enteroendocrine cells in the gastrointestinal tract.

Babies born without enteroendocrine cells - or whose enteroendocrine cells don't function properly - have severe malabsorption and require IV nutrition.

"This study allowed us to understand how important this one rare cell type is in controlling how the intestine absorbs nutrients and functions on a daily basis," McCauley said.

The Cincinnati Children's study, "Enteroendocrine cells couple nutrient sensing to nutrient absorption by regulating ion transport," was the first to describe a mechanism linking enteroendocrine cells to the absorption of macronutrients like carbohydrates and amino acids.

One key finding of the study is how these cells, upon sensing ingested nutrients, prepare the intestine to absorb nutrients by controlling the influx and outflux of electrolytes and water, the researchers stated. Absorption of carbohydrates and protein is then linked to the movement of ions in the intestine.

For this study, the scientists relied on human intestinal organoid models created in a lab, said James Wells, PhD, senior author of the study and chief scientific officer of the Center for Stem Cell and Organoid Medicine (CuSTOM) at Cincinnati Children's.

Grown from stem cells, organoids are small formations of human organ tissue with an architecture and functions similar to those of their full-size counterparts.

Cincinnati Children's launched efforts to make organoids from human pluripotent stem cells in 2006, said Wells, who is also director for basic research in the Division of Endocrinology at the medical center and an Allen Foundation Distinguished Investigator.

"What this study highlights is how decades of basic research into how organs are made and how they function is now leading to breakthroughs in identifying new therapeutics," said Wells, who has led a team of investigators at Cincinnati Children's who developed some of the first human organoid technologies that are now used globally.

The study on malabsorption used three different human small intestinal tissue models - all derived from pluripotent stem cells, which can form any kind of tissue in the body.

"The human organoids are essentially a much more realistic avatar to these patients with these rare mutations," Wells said. "They allow us to model much more faithfully the human disease."

Credit: 
Cincinnati Children's Hospital Medical Center

Online training helps preemies

An international team of researchers has now found that computerised training can support preterm children's academic success. In their randomised controlled study "Fit for School", the researchers compared two learning apps. The project at the University Hospital Essen and at Ruhr-Universität Bochum was funded by the Mercator Research Center Ruhr (Mercur) with approximately 300,000 euros over four years. The results were published online as an unedited manuscript in the journal Pediatric Research on 12 September 2020.

In Germany, one in every 11 babies is born too early; globally, the figure is over 15 million each year. Although survival rates have increased, long-term development has not improved much. At school age, children born preterm often struggle with attention and complex tasks, such as math.

"Preemies need special support," says neonatologist Dr. Britta Hüning of the Clinic for Pediatrics I, University Hospital Essen. Together with psychologist Dr. Julia Jaekel from the University of Tennessee Knoxville, previously at Ruhr-Universität Bochum, she was part of a multidisciplinary team that led the study with Professor Ursula Felderhoff-Müser, Director of the Clinic for Pediatrics I. Their findings are promising and novel, as few intervention studies have ever shown academic improvements for school-aged preterm children.

Two learning apps tested

The study included 65 first graders from the Ruhr Region, born between five and twelve weeks preterm. They practiced daily for five weeks with one of two learning apps, Xtramath or Cogmed. Teachers rated their academic progress in math, attention, reading and writing through first and second grade.

The final results: parents and children liked both apps. "The different trainings supported long-term school success to a similar degree," says Julia Jaekel. "However, Xtramath received more positive ratings and led to better short-term academic progress."

In times of increasing remote and online instruction for all children, apps with documented effectiveness are scarce. Parents and teachers may turn to adaptive apps such as Xtramath for learning at home.

Credit: 
Ruhr-University Bochum