Brain

Baby heartbeat reveals the stress of having a depressed or anxious mother

Scientists have shown that the babies of mothers dealing with anxiety or depression exhibit physiologically stronger signs of stress than the babies of healthy mothers when given a standard stress test. These babies show a significantly increased heart rate, which researchers fear may lead to imprinted emotional stresses as the child grows up.

The interaction of mother and infant, especially in the early months of life, plays a huge role in healthy development. Some mothers, particularly those suffering from mood disorders such as depression, anxiety, or post-natal depression, have difficulty regulating their infant's negative affect, which is believed to create insecurities in the children as they grow older. Mood disorders (such as irritability, mood swings, and mild depression) are common during pregnancy and the postpartum period, occurring in 10-20% of women.

The effect of "emotionally distant" mothers on infants was demonstrated in the famous "Still Face Test" (see notes), first devised in the 1970s: mothers were asked to playfully interact with their babies, then to spend a period in which they "blank" all interaction, before resuming normal contact. During the second phase (the Still-Face episode), babies showed heightened negative emotionality as well as reduced social engagement and avoidance behaviours.

Now, in a preliminary finding, German researchers have shown that during the period when the mother withdraws attention, the babies of anxious or depressed mothers show a significant rise in heart rate, on average 8 beats per minute higher than that of the babies of healthy mothers. These babies were also rated by their mothers as having a more difficult temperament than the babies of healthy mothers were.

"To our knowledge this is one of the first times this physical effect has been seen in 3 months old infants. This may feed into other physiological stress systems leading to imprinted psychological problems", said researcher Fabio Blanco-Dormond of the University of Heidelberg.

The researchers recruited a total of 50 mothers and their babies: 20 mothers with depression or anxiety disorders around the time of birth, and 30 healthy controls. Each mother-baby pair underwent the Still Face Paradigm: mothers were asked to play with their babies for 2 minutes, then to cut off all interaction while maintaining eye contact. After 2 more minutes, mothers resumed playful interaction. Throughout the test, the researchers measured the heart rates of both mother and baby.

"We found that if a mother was anxious or depressed, their baby had a more sensitive physiological response to stress during the test than did the babies of healthy mothers. This was a statistically significantly increase of an average of 8 beats per minute during the non-interactive phase".

"This is a preliminary finding, so we need to repeat it with a larger sample to make sure that the results are consistent. This is our next step," said Fabio Blanco-Dormond.

Commenting, Professor Veerle Bergink, Director of the Women's Mental Health Program at the Icahn School of Medicine at Mount Sinai, New York, said:

"This work means that it is important to diagnose and treat depressive and anxiety disorders in new mothers, because it has an immediate impact on the stress system of the baby. Prior studies showed not only short term, but also long term adverse effects of postpartum mood disorders on the children. Most postpartum mood disorders start during, or even before pregnancy, and early diagnosis is therefore important".

Professor Bergink was not involved in this work; this is an independent comment.

Credit: 
European College of Neuropsychopharmacology

Halving the risk of infection following surgery

image: Stock image of surgeons beginning a procedure

Image: 
deborabalves (Pixabay)

Surgeons could dramatically reduce the risk of infection after an operation by simply changing the antiseptic they use.

New analysis by the University of Leeds and the University of Bern of more than 14,000 operations has found that using alcoholic chlorhexidine gluconate (CHG) halves the risk of infection in certain types of surgery when compared to the more commonly used povidone-iodine (PVI).

Infection after surgery can result in a range of problems, including readmission to hospital and possibly further surgery.

Switching antiseptics to help tackle infections would be a simple process for healthcare providers, and could be rolled out globally, according to the new research, published in the Annals of Surgery.

Lead author Ryckie Wade, Clinical Research Fellow at Leeds' School of Medicine, said: "Infection is the most common and costly complication of surgery.

"Even though the risk of infection in these types of surgery is low (about 3%), anything we can change to reduce this risk is very important.

"Our findings suggest that the number of infections may be halved if surgeons used a different skin cleaning agent before surgery."

The team reviewed 17 existing studies, comparing infection complications of five different antiseptics used in 14,593 operations.

The initial research was carried out in North America, Europe, Asia, South America, and Australasia on patients who had undergone a range of surgical procedures including orthopaedic, cardiac, plastic and burn reconstruction surgery, cranial neurosurgery, open inguinal hernia repair and neurosurgery.

Using a statistical technique called network meta-analysis, the team showed that CHG was safe and twice as effective in preventing infection after "clean" surgery on adults compared to PVI (alcoholic or aqueous), which has been widely used as an antiseptic since its discovery in 1955.
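As a rough illustration of the arithmetic behind such a pooled estimate, the sketch below combines per-study relative risks using inverse-variance weights. This is a simple fixed-effect pairwise pooling, not the network meta-analysis the team actually ran, and the study counts are invented for illustration:

```python
import math

# Each tuple: (infections_CHG, n_CHG, infections_PVI, n_PVI).
# These counts are invented for illustration only; the published
# analysis pooled 17 real studies across five antiseptics.
studies = [(6, 400, 13, 410), (4, 250, 9, 245), (11, 800, 20, 790)]

weights, weighted_logs = [], []
for a, n1, c, n2 in studies:
    log_rr = math.log((a / n1) / (c / n2))  # log relative risk of infection
    var = 1/a - 1/n1 + 1/c - 1/n2           # approximate variance of log RR
    weights.append(1 / var)
    weighted_logs.append(log_rr / var)

pooled_rr = math.exp(sum(weighted_logs) / sum(weights))
print(f"Pooled relative risk, CHG vs PVI: {pooled_rr:.2f}")
# With these made-up counts the pooled RR comes out near 0.5,
# i.e., roughly a halving of infection risk.
```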

Clean surgery is defined as a procedure outside the respiratory, urogenital and digestive system where there is no inflammation or infection and where the wound is not caused by a trauma.

Mr Wade said he hoped the new findings would lead to a change in healthcare practice: "This research should be of benefit to all healthcare professionals around the world who perform any type of invasive procedure on a 'clean site'."

Credit: 
University of Leeds

A phonon laser - coherent vibrations from a self-breathing resonator

image: Figure 1. (a) Polariton BEC and phonon lasing of a microstructured trap in a semiconductor microcavity. (b) BEC emission under low (the lower curve) and high (the upper curve) particle densities, displaying phonon sidebands separated by the phonon energy ℏω_a.

Image: 
PDI and Instituto Balseiro and Centro Atómico

Lasing - the emission of a collimated beam of light with a well-defined wavelength (color) and phase - results from a self-organization process in which a collection of emission centers synchronizes itself to produce identical light particles (photons). A similar self-organized synchronization phenomenon can also lead to the generation of coherent vibrations - a phonon laser, where phonon denotes, in analogy to photons, the quantum particles of sound.

Photon lasing was first demonstrated approximately 60 years ago and, coincidentally, 60 years after its prediction by Albert Einstein. This amplification of light by stimulated emission has found an unprecedented number of scientific and technological applications in multiple areas.

Although the concept of a "laser of sound" was predicted at almost the same time, only a few implementations have so far been reported and none has attained technological maturity. Now, a collaboration between researchers from Instituto Balseiro and Centro Atómico in Bariloche (Argentina) and the Paul-Drude-Institut in Berlin (Germany) has introduced a novel approach for the efficient generation of coherent vibrations in the tens-of-GHz range using semiconductor structures [Nat. Commun. DOI 10.1038/s41467-020-18358-z]. Interestingly, this approach to the generation of coherent phonons is based on another of Einstein's predictions: that of the fifth state of matter, a Bose-Einstein condensate (BEC), here formed of coupled light-matter particles (polaritons).

The polariton BEC is created in a microstructured trap of a semiconductor microcavity consisting of electronic centers sandwiched between distributed Bragg reflectors (DBRs) designed to reflect light of the same energy ℏω_C emitted by the centers (cf. Fig. 1a). When optically excited by a light beam with a different energy ℏω_L, for which the DBR is transparent, the electronic states of the centers emit photons at the energy ℏω_C, which are back-reflected at the DBRs. The photons are then reabsorbed by the centers. The rapid, repeating sequence of emission and reabsorption events makes it impossible to distinguish whether the energy is stored in an electronic or a photonic state. One rather says that the mixing between the states creates a new light-matter particle, called a polariton. Furthermore, under a high particle density (and helped by the spatial localization induced by the trap), the polaritons enter a self-organized state similar to that of photons in a laser, where all particles synchronize to emit light with the same energy and phase - a polariton BEC laser. The characteristic signature of the polariton BEC is a very narrow spectral line illustrated by the blue curve in Fig. 1b, which can be detected by measuring the evanescent radiation escaping from the microcavity.

A further interesting property of the microcavity mirrors (DBRs) used is their ability to reflect not only light but also mechanical vibrations (sound) within a specific range of wavelengths. As a consequence, a typical AlGaAs microcavity for photons in the near-infrared also confines quanta of vibrations - phonons - with the energy ℏω_a corresponding to an oscillation frequency ω_a/2π of approximately 20 GHz. Just as photon reflection by the DBRs provides the feedback required for the formation of a polariton BEC, phonon reflection leads to a buildup of the phonon population as well as an enhancement of the phonon interaction with the polariton BEC.
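For scale, the energy of one such phonon follows directly from the quoted frequency, using Planck's constant h ≈ 4.136 × 10⁻¹⁵ eV·s:

```latex
E_a = \hbar\omega_a = h\,\frac{\omega_a}{2\pi}
    \approx (4.136\times10^{-15}\,\mathrm{eV\,s})\times(20\times10^{9}\,\mathrm{Hz})
    \approx 8.3\times10^{-5}\,\mathrm{eV} \approx 83\,\mu\mathrm{eV}
```

This is more than four orders of magnitude below the energy of the near-infrared photons the same cavity confines.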

How does the interaction between polaritons and phonons occur? Like air in a tire, a high density of condensed polaritons exerts a pressure on the microcavity mirrors, which can trigger and sustain mechanical oscillations at the frequency of the confined phonons. These breathing oscillations modify the microcavity dimensions, thus acting back on the polariton BEC. It is this coupled optomechanical interaction that gives rise to the coherent emission of sound above a critical polariton density. A fingerprint of this coherent emission of phonons is the self-pulsing of the BEC emission under continuous excitation by a laser with the energy ℏω_L. This self-pulsing is identified by the emergence of strong sidebands around the polariton BEC emission, displaced by multiples of the phonon energy ℏω_a (cf. the red curve in Fig. 1b).

Analysis of the amplitude of the sidebands in Fig. 1b shows that hundreds of thousands of monochromatic phonons populate the resulting vibrational state and are emitted towards the substrate as a 20 GHz coherent phonon laser beam. An essential feature of the design is the stimulation of the phonons by an internal highly intense and monochromatic light emitter - the polariton BEC - which can be excited not only optically but also electrically, as in a Vertical Cavity Surface Emitting Laser (VCSEL). Furthermore, higher phonon frequencies can be achieved by appropriate modifications of the microcavity design. Potential applications of the phonon laser include the coherent control of light beams, quantum emitters, and gates in communication and quantum information devices, as well as light-to-microwave bidirectional conversion in a very wide 20-300 GHz frequency range relevant for future network technologies.

Credit: 
Forschungsverbund Berlin

Researchers discovered a novel gene involved in primary lymphedema

image: A model of the angiopoietin 2 growth factor secretion from lymphatic endothelial cells and the effect of ANGPT2 mutations on the development of lymphedema. Reduced secretion of the ANGPT2 mutants results in decreased receptor dimerization and activation, leading to swelling of the leg.

Image: 
Alitalo Lab

The Human Molecular Genetics laboratory of the de Duve Institute (UCLouvain), headed by Professor Miikka Vikkula, recently identified mutations in a novel gene, ANGPT2, responsible for primary lymphedema. Together with the Wihuri Research Institute and its director, Professor Kari Alitalo, at the University of Helsinki, Finland, the laboratories were able to show how these mutations cause the disease.

"The mutations result in loss of the normal function of the ANGPT2 protein that is known to play a role in lymphatic and blood vessel maturation. This important discovery opens possibilities for the development of improved treatments of lymphedema", explains Professor Alitalo.

The discovery was recently published in Science Translational Medicine.

ANGPT2 gene mutations shown to cause lymphedema in humans for the first time

Lymphedema is a severely debilitating chronic disease resulting from abnormal development or function of the lymphatic system. In patients, lymph drains poorly from tissues and accumulates in, for example, the legs or arms, causing swelling and fibrosis, limiting the mobility of the affected body part and increasing the likelihood of infections in it. Lymphedema can be either primary, when there is no known underlying cause, or secondary, when it results from removed or damaged lymph vessels, e.g. after surgery, infection or cancer treatment. Primary lymphedema is often inherited.

The de Duve Institute team, with its large international network of collaborators, including the Center for Vascular Anomalies and the Center for Medical Genetics of the Saint-Luc hospital in Brussels, has collected samples from almost 900 patients (and family members) suffering from primary lymphedema. Using whole-exome sequencing (i.e. sequencing all the coding parts of the genes in our genome), the researchers discovered mutations in ANGPT2 in lymphedema patients from five families.

The ANGPT2 gene encodes angiopoietin 2, a growth factor that binds to receptors on blood and lymphatic vessels; these receptors were first identified in Professor Alitalo's laboratory.

"ANGPT2 has previously been shown to influence lymphatic development in mice, but this is the first time when mutations in this gene were found to cause lymphedema in humans", says Professor Alitalo.

New information on mechanisms that lead to lymphedema

Among the identified mutations, one deletes an entire copy of the gene, whereas the other four are amino acid substitutions.

The researchers showed that three of the mutant proteins are not properly secreted from cells that normally produce the protein, and that this also decreases secretion of the protein produced from the remaining normal allele. The mutations thus have a so-called dominant-negative effect. The fourth mutant was hyperactive, inducing increased proliferation of dilated lymphatic vessels, and showed altered integrin binding.

The mutations that resulted in primary lymphedema in patients provided investigators with important insights into the function of the ANGPT2 protein and the mechanisms that lead to lymphedema.

Identifying the genetic causes crucial for a better management of the disease

In Europe, over a million people are affected by lymphedema. Therapy is limited to repeated manual lymphatic massage and the use of compressive garments intended to decrease tissue swelling. In some cases, surgery may be helpful. Another lymphatic vessel growth factor, VEGF-C, is currently in clinical trials in combination with surgery for the treatment of lymphedema in patients whose armpit lymph nodes have been removed due to breast cancer metastasis. So far, no cure exists for lymphedema, and it resolves or improves with time in only a minority of cases.

"Identifying the genetic causes is crucial for a better management of the disease. It makes a more precise and reliable diagnosis possible, where today many people with the disease are still not diagnosed. As the newly published study shows, research on lymphedema leads to insight in the underlying cellular mechanisms, which may be targets for the development of new therapies", Professor Alitalo continues.

Credit: 
University of Helsinki

Safety-net clinicians with high socially at-risk caseloads received reduced merit-based incentive payment scores

ST. LOUIS - A team of researchers led by Kenton Johnston, Ph.D., an associate professor of health management and policy at Saint Louis University's College for Public Health and Social Justice, conducted a study to investigate how outpatient clinicians who treated disproportionately high caseloads of socially at-risk Medicare patients (safety-net clinicians) performed under Medicare's new mandatory Merit-Based Incentive Payment System (MIPS).

Their findings, "Clinicians With High Socially At-Risk Caseloads Received Reduced Merit-Based Incentive Payment System Scores," were published online Sept. 8 in Health Affairs.

The researchers found that clinicians treating high (top quintile) caseloads of Medicare patients dually-enrolled in Medicaid had performance scores 13.4 points lower than clinicians treating low (bottom quintile) caseloads of such patients, on a scale from 0-100. Clinicians with high caseloads of dually-enrolled patients were 99 percent more likely to receive a negative payment adjustment, and were 52 percent less likely to receive an exceptional performance bonus payment, than their peers with low caseloads of such patients.

The study examined 2019 MIPS performance data for 510,020 clinicians.
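The core comparison is straightforward descriptive statistics: rank clinicians by their share of dually-enrolled patients, cut the ranking into quintiles, and compare scores at the extremes. A minimal sketch with synthetic data (all variable names and numbers here are invented, not the study's):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic data: share of each clinician's Medicare patients who are
# dually enrolled in Medicaid, plus a MIPS score (0-100) that declines
# with that share. All values are invented for illustration.
df = pd.DataFrame({"dual_share": rng.beta(2, 8, size=10_000)})
df["mips_score"] = (80 - 25 * df["dual_share"]
                    + rng.normal(0, 10, size=len(df))).clip(0, 100)

# Split clinicians into caseload quintiles and compare mean scores.
df["quintile"] = pd.qcut(df["dual_share"], 5,
                         labels=["Q1", "Q2", "Q3", "Q4", "Q5"])
means = df.groupby("quintile", observed=True)["mips_score"].mean()
# A negative gap means the highest-caseload quintile scores lower.
print(f"Top-vs-bottom quintile gap: {means['Q5'] - means['Q1']:.1f} points")
```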

"The MIPS appears to be regressive. Low-resourced and safety-net practices located in poor areas have higher rates of payment penalties and lower rates of payment bonuses than high-resourced practices in wealthy areas," Johnston said. "Thus, the MIPS may end up discouraging clinicians from practicing in poor areas and, as a result, exacerbate existing health disparities. I doubt this is what the Centers for Medicare and Medicaid Services (CMS) intended when designing the program. However, if this issue is not addressed by CMS, the MIPS may unintentionally widen care and health disparities among rich and poor Medicare beneficiaries."

MIPS, which is authorized under the Medicare Access and CHIP Reauthorization Act, is a mandatory pay-for-performance program for clinicians participating in Medicare in the outpatient setting. Clinician performance under MIPS is scored on quality of care, meaningful use of electronic health records, improvement activities for patient care processes, and cost.

The authors also assessed the potential future impact of the Complex Patient Bonus, to be implemented by Medicare in 2021 to better reimburse clinicians who treat high caseloads of socially-at-risk patients. The authors found that had the Bonus existed in 2019 it would have had very little impact on the payment disparities suffered by clinicians with high caseloads of socially-at-risk patients.
Thus, the Complex Patient Bonus appears unlikely to mitigate the most regressive effects of the MIPS on safety-net clinicians.

Credit: 
Saint Louis University

New ACM study gives most detailed picture to date of US bachelor's programs in computing

image: The Association for Computing Machinery, the world's largest society of computing professionals, just released a detailed study on the state of undergraduate computing in the United States.

Image: 
Association for Computing Machinery

ACM, the Association for Computing Machinery, recently released its eighth annual Study of Non-Doctoral Granting Departments in Computing (NDC study). With the aim of providing a comprehensive look at computing education, the study includes information on enrollments, degree completions, faculty demographics, and faculty salaries. For the first time, this year's ACM NDC study includes enrollment and degree completion data from the National Student Clearinghouse Research Center (NSC).

In previous years, ACM surveyed computer science departments directly, working with a sample of approximately 18,000 students. By accessing the NSC's data, the ACM NDC study now includes information on approximately 300,000 students across the United States, allowing for a more reliable understanding of the state of enrollment and graduation in Bachelor's programs. Also for the first time, the ACM NDC study includes data from private, for-profit institutions, which are playing an increasingly important role in computing education.

"By partnering with the NSC, we now have a much fuller picture of computing enrollment and degree production at the Bachelor's level," explained ACM NDC study co-author Stuart Zweben, Professor Emeritus, Ohio State University. "The NSC also gives us more specific data on the gender and ethnicity of students. This is an important tool, as increasing the participation of women and other underrepresented groups has been an important goal for leaders in academia and industry. For example, having a clear picture of the current landscape for underrepresented people is an essential first step toward developing approaches to increase diversity."

"The computing community has come to rely on the ACM NDC study to understand trends in undergraduate computing education," added ACM NDC study co-author Jodi Tims, Professor, Northeastern University. "At the same time, using our previous data collection methods, we were only capturing about 15-20% of institutions offering Bachelor's degrees in computing. The NSC data gives us a much broader sample, as well as more precise information about enrollment and graduation in specific computing disciplines---such as computer science, information systems, information technology, software engineering, computer engineering and cybersecurity. For example, we've seen a noticeable increase in cybersecurity program offerings between the 2017/2018 and 2018/2019 academic years, and we believe this trend will continue next year. Going forward, we also plan to begin collecting information on data science offerings in undergraduate education. Our overall goal will be to maintain the ACM NDC study as the most up-to-date and authoritative resource on this topic."

As with previous NDC studies, information on faculty salaries, retention, and demographics was collected by sending surveys to academic departments across the United States. Responses were received from 151 departments. The average number of full-time faculty members at the responding departments was 12.

Important findings of the ACM NDC study include:

-Between the 2017/2018 and the 2018/2019 academic years, there was a 4.7% increase in degree production across all computing disciplines. The greatest increases in degree production were in software engineering (9% increase) and computer science (7.5% increase).

-The representation of women in information systems (24.5% of degree earners in the 2018/2019 academic year) and information technology (21.5% of degree earners in the 2018/2019 academic year) is much higher than in areas such as computer engineering (12.2% of degree earners in the 2018/2019 academic year).

-Bachelor's programs, as recorded by the ACM NDC study, had a stronger representation of African American and Hispanic students than PhD programs, as recorded by the Computer Research Association's (CRA) Taulbee Survey. For example, during the 2018/2019 academic year, the ACM NDC records that 15.6% of enrollees in Bachelor's programs were African American, whereas the CRA Taulbee survey records that 4.7% of enrollees in PhD programs were African American.

-In some disciplines of computing, African Americans and Hispanics are actually over-represented, based on their percentage of the US population.

-Based on aggregate salary data from 89 non-doctoral-granting computer science departments (including public and private institutions), the average median salary for a full professor was $109,424.

-Among the more than 56 faculty departures reported by 40 non-doctoral-granting departments, only 10.7% were to non-academic positions. Most were due to retirement (46.4%) or moves to other academic positions (26.9%).

In addition to Stuart Zweben and Jodi Tims, the ACM NDC study was co-authored by Yan Timanovsky, Association for Computing Machinery. By employing the NSC data in future ACM NDC studies, the co-authors are confident that an even fuller picture will emerge regarding student retention with respect to computing disciplines, gender and ethnicity.

Credit: 
Association for Computing Machinery

Understanding the 'deep-carbon cycle'

CLEVELAND--New geologic findings about the makeup of the Earth's mantle are helping scientists better understand long-term climate stability and even how seismic waves move through the planet's layers.

The research by a team including Case Western Reserve University scientists focused on the "deep carbon cycle," part of the overall cycle by which carbon moves through the Earth's various systems.

In simplest terms, the deep carbon cycle involves two steps:

Surface carbon, mostly in the form of carbonates, is brought into the deep mantle by subducting oceanic plates at ocean trenches.

That carbon is then returned to the atmosphere as carbon dioxide (CO2) through mantle melting and magma degassing processes at volcanoes.

Scientists have long suspected that partially melted chunks of this carbon are broadly distributed throughout the Earth's solid mantle.

What they haven't fully understood is how far down into the mantle they might be found, or how the geologically slow movement of the material contributes to the carbon cycle at the surface, which is necessary for life itself.

Deep carbon and climate change connection

"Cycling of carbon between the surface and deep interior is critical to maintaining Earth's climate in the habitable zone over the long term--meaning hundreds of millions of years," said James Van Orman, a professor of geochemistry and mineral physics in the College of Arts and Sciences at Case Western Reserve and an author on the study, recently published in the Proceedings of the National Academy of Sciences.

"Right now, we have a good understanding of the surface reservoirs of carbon, but know much less about carbon storage in the deep interior, which is also critical to its cycling."

Van Orman said this new research showed--based on experimental measurements of the acoustic properties of carbonate melts, and comparison of these results to seismological data--that a small fraction (less than one-tenth of 1%) of carbonate melt is likely to be present throughout the mantle at depths of about 180-330 km.

"Based on this inference, we can now estimate the carbon concentration in the deep upper mantle and infer that this reservoir holds a large mass of carbon, more than 10,000 times the mass of carbon in Earth's atmosphere," Van Orman said.

That's important, Van Orman said, because gradual changes in the amount of carbon stored in this large reservoir, due to exchange with the atmosphere, could have a corresponding effect on CO2 in the atmosphere--and therefore, on long-term climate change.

The first author of the article is Man Xu, who did much of the work as a PhD student at Case Western Reserve and is now a postdoctoral scholar at the University of Chicago.

Others on the project were from Florida State University, the University of Chicago and Southern University of Science and Technology (SUSTech) in Shenzhen, China.

Explaining seismic wave speed differences

The research also sheds light on seismology, especially deep earth research.

One way geologists better understand the deep interior is by measuring how seismic waves generated by earthquakes--fast-moving compressional waves and slower shear waves--move through the Earth's layers.

Scientists have long wondered why the speed difference between the two types of seismic waves--P-waves and S-waves--peaks at depths of around 180 to 330 kilometers into the Earth.

Carbon-rich melts seem to answer that question: small quantities of these melts could be dispersed throughout the deep upper mantle and would explain the speed difference, as the waves move differently through the melts.
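The reasoning can be made concrete with the standard velocity formulas, where K is the bulk modulus, μ the shear modulus and ρ the density:

```latex
v_P = \sqrt{\frac{K + \tfrac{4}{3}\mu}{\rho}}, \qquad v_S = \sqrt{\frac{\mu}{\rho}}
```

A small amount of melt reduces the shear modulus μ far more strongly than the bulk modulus K, so S-waves slow much more than P-waves, and the gap between the two speeds widens precisely in the depth range where the melt resides.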

Credit: 
Case Western Reserve University

Emotion vocabulary reflects state of well-being, study suggests

image: Emotion words in stream-of-consciousness essays.

Image: 
University of Pittsburgh

PITTSBURGH, Sept. 10, 2020 - The vocabulary people use to describe their emotions is an indicator of mental and physical health and overall well-being, according to an analysis led by a scientist at the University of Pittsburgh School of Medicine and published today in Nature Communications. A larger negative emotion vocabulary--that is, more distinct ways to describe similar feelings--correlates with more psychological distress and poorer physical health, while a larger positive emotion vocabulary correlates with better well-being and physical health.

"Our language seems to indicate our expertise with states of emotion we are more comfortable with," said lead author Vera Vine, Ph.D., postdoctoral fellow in the Department of Psychiatry at Pitt. "It looks like there's a congruency between how many different ways we can name a feeling and how often and likely we are to experience that feeling."

To examine how emotion vocabulary depth corresponds broadly with lived experience, Vine and her team analyzed public blogs written by more than 35,000 individuals and stream-of-consciousness essays by 1,567 college students. The students also self-reported their moods periodically during the experiment.
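At its simplest, "emotion vocabulary depth" is a count of the distinct emotion words a person uses. A minimal sketch of that idea follows; the tiny word lists are stand-ins, since the study used large validated lexicons and the authors' own "Vocabulate" tool:

```python
# Illustrative only: tiny stand-in lexicons, not the study's word lists.
NEGATIVE = {"sad", "angry", "afraid", "miserable", "furious", "anxious"}
POSITIVE = {"happy", "glad", "excited", "content", "proud", "calm"}

def emotion_vocabulary_depth(text: str) -> dict:
    """Count distinct emotion words used, a simple proxy for vocabulary depth."""
    tokens = {t.strip('.,!?;:').lower() for t in text.split()}
    return {
        "negative_depth": len(tokens & NEGATIVE),
        "positive_depth": len(tokens & POSITIVE),
    }

print(emotion_vocabulary_depth("I felt sad, then anxious, but later happy."))
# -> {'negative_depth': 2, 'positive_depth': 1}
```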

Overall, people who used a wider variety of negative emotion words tended to display linguistic markers associated with lower well-being--such as references to illness and being alone--and reported greater depression and neuroticism, as well as poorer physical health.

Conversely, those who used a variety of positive emotion words tended to display linguistic markers of well-being--such as references to leisure activities, achievements and being part of a group--and reported higher rates of conscientiousness, extraversion, agreeableness, overall health, and lower rates of depression and neuroticism.

These findings suggest that an individual's vocabulary may correspond to their emotional experiences, but they do not speak to whether emotion vocabularies are helpful or harmful in bringing about those experiences.

"There's a lot of excitement right now about expanding people's emotional vocabularies and teaching how to precisely articulate negative feelings," Vine said. "While we often hear the phrase, 'name it to tame it' when referring to negative emotions, I hope this paper can inspire clinical researchers who are developing emotion-labeling interventions for clinical practice, to study the potential pitfalls of encouraging over-labeling of negative emotions, and the potential utility of teaching positive words."

During the stream-of-consciousness exercise, Vine and colleagues found that students who used more names for sadness grew sadder over the course of the experiment; people who used more names for fear grew more worried; and people who used more names for anger grew angrier.

"It is likely that people who have had more upsetting life experiences have developed richer negative emotion vocabularies to describe the worlds around them," noted James W. Pennebaker, Ph.D., professor of psychology at the University of Texas at Austin and an author on the project. "In everyday life, these same people can more readily label nuanced feelings as negative which may ultimately affect their moods."

"Vocabulate," custom open-source software the researchers developed to compute emotion vocabulary, is available at https://osf.io/8ckyp/ and https://github.com/ryanboyd/Vocabulate.

Credit: 
University of Pittsburgh

Study provides insights on bouncing back from job loss

Stress associated with job loss can have a host of negative effects on individuals that may hinder their ability to become re-employed. A new study published in the Journal of Employment Counseling examines the importance of self-regulation for enabling people to effectively search for a new job and to maintain their psychological well-being. This trait allows people to manage their emotions and behaviors to produce positive results, and to consider adversity as a positive challenge rather than a hindrance.

The study involved an online survey completed by 185 individuals who had recently been laid off and had not yet been re-employed. High levels of self-regulation predicted better well-being, job search clarity, and job search self-efficacy (the belief that one can successfully perform specific job search behaviors and obtain employment).

The findings suggest that employment counseling efforts should help people improve their self-regulation in order to achieve positive outcomes after job loss.

"Together, results of this study suggest that the components of self-regulation are key to a comprehensive model of resiliency, which plays a crucial role in enhancing well-being and re-employment outcomes during individuals' search for employment," said lead author Matthew J. W. McLarnon, PhD, MSc, of Mount Royal University, in Canada.

Credit: 
Wiley

Factors linked to college aspirations, enrollment, and success

A recent study has identified certain factors associated with a greater likelihood that a high school student will decide to attend college, enroll in college the fall semester immediately following high school graduation, and then return to that same college a year later as a retained college student.

The study, which is published in the Journal of Counseling & Development, found that high school seniors are much more likely to decide to go to college if they develop both a college-going aspirational identity and a personally held goal to achieve greater levels of postsecondary education. Similarly, they are more likely to enroll in college if they set higher postsecondary educational goals for themselves.

Returning as a retained college student the second year was primarily shaped by three factors: the environmental characteristics of the college were supportive of the students; the students had developed some financial certainty as to how they were going to pay for college before leaving high school; and as twelfth graders, the students had made a strong personal commitment to graduate from the college they had chosen to attend.

"Making an informed decision about which college to attend is more important than ever, especially during the COVID-19 pandemic," said co-author Richard T. Lapan, PhD, of the University of Massachusetts Amherst. "This research study advantages students and their families by identifying factors related to student success in the transition to college."

Credit: 
Wiley

New microfluidic device minimizes loss of high value samples

image: Alexandra Ros, professor in Arizona State University's School of Molecular Sciences and the Center for Applied Structural Discovery in the Biodesign Institute.

Image: 
Mary Zhu

A major collaborative effort between ASU and European scientists, developed over the last three years, has resulted in a significant technical advance in X-ray crystallographic sample delivery strategies.

The ASU contribution comes from the School of Molecular Sciences (SMS), the Department of Physics, and the Biodesign Institute Center for Applied Structural Discovery.

The European X-ray Free-Electron Laser (EuXFEL) is a research facility of superlatives: it generates ultrashort X-ray pulses - 27,000 per second - with a brilliance a billion times higher than that of the best conventional X-ray radiation sources. After ten years of construction, it opened for initial experiments in late 2017. The group of Alexandra Ros, professor in ASU's SMS, was awarded the second allocation of beam time amongst worldwide competitors.

Their results, published Sept. 9 in Nature Communications, validated a unique microfluidic droplet generator that reduces sample consumption and waste (which can be as high as 99 percent) in the team's serial femtosecond crystallography (SFX) experiments. Using this device, they determined the crystal structure of the enzyme 3-deoxy-D-manno-octulosonate 8-phosphate synthase (KDO8PS) and revealed new detail in a previously undefined loop region of the enzyme, which is a potential target for antibiotic studies.

"We are excited that this work, resulting from a huge collaborative effort, has been well received in the XFEL community," explained Ros. "We are further developing this method and are seeking synchronization of the microfluidic droplets with the pulses of XFELs. At this very moment, a small team of ASU students has just finished performing experiments at the Linac Coherent Light Source (LCLS) at the SLAC National Accelerator Laboratory in Menlo Park, CA to refine the method. There could not have been better timing for the publication of our work."

SLAC has been the XFEL facility best known to U.S. scientists; it is where the now-famous work on crystallography of protein nanocrystals (by the ASU team led by professors John Spence and Petra Fromme) was carried out. SLAC and its European companion in Hamburg have been very successful and, consequently, have become heavily overbooked. The coming online of the new facility, with its giant 2.6-mile accelerator tunnel and atomic-length-scale resolution, has relieved some of the demand on the other facilities, while offering grand new possibilities in the physical sciences.

SFX is a promising technique for protein structure determination, where a liquid stream containing protein crystals is intersected with a high-intensity XFEL beam that is a billion times brighter than traditional synchrotron X-ray sources.

Although the crystals are destroyed by the intense XFEL beam immediately after they have diffracted, the diffraction information can, remarkably, still be recorded thanks to state-of-the-art detectors. Powerful new data analysis methods have been developed, allowing researchers to analyze these diffraction patterns and obtain electron density maps and detailed structural information for proteins.

The method is especially appealing for hard-to-crystallize proteins, such as membrane proteins, as it yields high-resolution structural information from micro- and even nanocrystals, thus reducing the contribution of crystal defects and avoiding the tedious (if not impossible) growth of the large crystals demanded by traditional synchrotron-based crystallography.

While crystallography with XFELs has been a powerful technique for unraveling the structures of large protein complexes, and also permits time-resolved crystallography, this cutting-edge science nevertheless has a major problem. Because of the small "hit" rate, it requires huge amounts of suspended protein, which, although not irradiated, is cumbersome to retrieve for most protein samples. As much as 99% of the protein can be wasted.

Herein lies the major technical advance made by Ros and her team. They have developed a high-resolution, 3D-printed microfluidic device that generates aqueous-in-oil droplets with adjustable segmentation, which can be synchronized to the free-electron laser pulses. This dramatically reduces the amount of purified protein needed for a EuXFEL experiment from the currently typical (and almost unattainable) 1 g required to record a full data set.

The importance of this development bears restating. The researchers' approach interleaves sample-laden liquid "slugs" within a sacrificial liquid, so that a fast-moving liquid microjet is maintained with sample present only during exposure to the femtosecond XFEL pulses (one millionth of one billionth of a second in duration).

The team has demonstrated droplet generation with KDO8PS crystal suspensions using the microfluidic droplet generator and has shown that the droplet generation frequency can be controlled by the flow rates of the aqueous and oil streams. The diffraction quality of the KDO8PS crystals was similar whether they were injected in aqueous droplets surrounded by oil or injected continuously with a Gas Dynamic Virtual Nozzle (GDVN), with droplet injection achieving a ~60% reduction in sample consumption.
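The control relationship is simple volume conservation: each droplet carries away a fixed volume of the aqueous (crystal-laden) stream, so the generation frequency is that flow rate divided by the droplet volume. A minimal sketch with hypothetical numbers (actual values depend on device geometry and the XFEL pulse structure):

```python
def droplet_frequency(q_aqueous_ul_per_min: float, droplet_volume_pl: float) -> float:
    """Droplet generation frequency (Hz) = aqueous flow rate / droplet volume."""
    q_pl_per_s = q_aqueous_ul_per_min * 1e6 / 60.0  # convert uL/min to pL/s
    return q_pl_per_s / droplet_volume_pl

# Hypothetical operating point: 3 uL/min of crystal suspension, 200 pL droplets.
f = droplet_frequency(3.0, 200.0)
print(f"~{f:.0f} droplets per second")  # ~250 Hz for these assumed values
```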

The determined structure revealed new detail in a previously undefined loop region of KDO8PS, a potential target for antibiotic studies. These results advocate for future routine integration of droplet generation by segmented oil flow at other XFELs around the world.

Credit: 
Arizona State University

Land development in New Jersey continues to slow

image: Coastal flooding in Tuckerton, New Jersey, from a storm off the East Coast in October 2019. Such flooding, which occurred during a high tide, is expected to increase as a result of sea-level rise.

Image: 
Life on the Edge Drones

Land development in New Jersey has slowed dramatically since the 2008 Great Recession, but it's unclear how the COVID-19 pandemic and efforts to fight societal and housing inequality will affect future trends, according to a Rutgers co-authored report.

Between 2012 and 2015, 10,392 acres in the Garden State became urban land. That's 3,464 acres a year - far lower than the 16,852 acres per year in the late 1990s and continuing the trend of decreasing urban development that began in the 2008 Great Recession.

While the rate of farmland converted to urban land decreased dramatically in recent years, the conversion of upland and wetland forests increased, according to the report.

That's concerning since these ecosystems play a critical role in removing carbon dioxide, a greenhouse gas linked to climate change, from the atmosphere and storing it in wood and forest soils.

New Jersey also lost nearly 4,400 acres (almost 7 square miles) of coastal salt marshes from 1986 to 2015 due to rising sea levels and coastal erosion. Such marshes are important fish and wildlife habitat and serve as important buffers against coastal storms.

"We have been tracking changes across the New Jersey landscape for nearly three decades and observed as the state transitioned from sprawling development through the 1980s and 1990s toward more concentrated urban redevelopment in the 2000s," said co-author Richard G. Lathrop Jr., director of the Center for Remote Sensing & Spatial Analysis and a professor of environmental monitoring in the Department of Ecology, Evolution, and Natural Resources in the School of Environmental and Biological Sciences at Rutgers University-New Brunswick. "Six months ago, I would have said that we could expect this pattern to continue over the coming decade. However, given the scope of recent and ongoing events, predicting how New Jersey's land use and development patterns will change is much more uncertain."

The report analyzes change in the state's land use/land cover between spring 2012 and spring 2015, based on N.J. Department of Environmental Protection data. The research, co-authored by Professor John Hasse at Rowan University, is part of an ongoing series of collaborative Rutgers and Rowan studies examining New Jersey's urban growth and land use change since 1986.

Between 2012 and 2015, the residential piece of the development pie dropped dramatically, becoming 40 percent of the urban development footprint. But as the economy began recovering post-2011, more housing units were built on less land, signaling a significant shift toward denser residential development. A trend toward urban redevelopment was also evident in Hudson, Union and Bergen counties.

Here are some of the report's other findings:

Sea-level rise: In some locations, the shoreline in New Jersey retreated more than 1,000 feet in 29 years. By 2050, about one-fifth of the state's salt marshes are highly vulnerable to transitioning to tidal mud flat or open water, or facing greater "drowning" stress. That's about 44,000 acres, or nearly 70 square miles, of salt marshes. If sea-level rise accelerates as some studies suggest, more salt marshes may be vulnerable.

Salt marsh retreat or migration zones: Some of the expected loss of salt marshes due to erosion and drowning may be balanced by new marshes forming as upland/wetland forests and abandoned croplands convert to salt marsh. New Jersey has more than 66,000 acres of potential marsh retreat zones, which should be a high priority for conservation to allow salt marshes to "migrate." That would partly compensate for expected losses from sea-level rise.

Pinelands land use: From 1986 to 2015, land use change and development in the Pinelands occurred at half the rate in the rest of New Jersey. Pinelands development was more compact and consumed less land compared with the state overall and generally took place in designated growth zones, which typically have the infrastructure to accommodate growth. The slower development rate in conservation zones has maintained most of the rural lands over the decades, giving more time for conservation actions.

Credit: 
Rutgers University

Mindfulness with paced breathing and lowering blood pressure

image: Paced breathing is defined as deep and diaphragmatic breathing at slow rates, typically about five to seven breaths per minute compared with the usual rate of 12 to 14.

Image: 
Florida Atlantic University

According to the American Stroke Association (ASA) and the American Heart Association (AHA), more than 100 million Americans have high blood pressure. Elevated blood pressure is a major avoidable cause of premature morbidity and mortality in the United States and worldwide, due primarily to increased risks of stroke and heart attack, and it is the most important modifiable risk factor for stroke. Even small but sustained reductions in blood pressure reduce the risks of stroke and heart attack. Therapeutic lifestyle changes, such as weight loss and salt reduction, as well as adjunctive drug therapies, are beneficial in treating and preventing high blood pressure.

Mindfulness is increasingly practiced as a technique to reduce stress through mind-body interactions. In some instances, mindfulness includes paced breathing, defined as deep, diaphragmatic breathing at slow rates, typically about five to seven breaths per minute compared with the usual rate of 12 to 14. Researchers from Florida Atlantic University's Schmidt College of Medicine and collaborators have published a paper in the journal Medical Hypotheses exploring the possibility that mindfulness with paced breathing reduces blood pressure.

"One of the most plausible mechanisms is that paced breathing stimulates the vagus nerve and parasympathetic nervous system, which reduce stress chemicals in the brain and increase vascular relaxation that may lead to lowering of blood pressure," said Suzanne LeBlang, M.D., a neuroradiologist, second and corresponding author, and an affiliate associate professor in FAU's Schmidt College of Medicine.

The researchers believe their hypothesis that mindfulness with paced breathing reduces blood pressure should now be tested. To do so, co-authors at FAU's Schmidt College of Medicine are already collaborating with their co-authors from the Marcus Neuroscience Institute, Boca Raton Regional Hospital/Baptist Health South, and the University of Wisconsin School of Medicine and Public Health on an investigator-initiated research grant proposal to the National Institutes of Health. The initial pilot trial would obtain informed consent from willing and eligible subjects, assign them at random to mindfulness either with or without paced breathing, and examine whether there are sustained effects on lowering blood pressure.

"This pilot randomized trial might lead to further randomized trials of intermediate markers such as inhibition of progression of carotid intimal thickening or coronary artery atherosclerosis, and subsequently, a large scale trial to reduce stroke and heart attacks," said Charles H. Hennekens, M.D., Dr.PH, senior author, first Sir Richard Doll Professor and senior academic advisor in FAU's Schmidt College of Medicine. "Achieving sustained reductions in blood pressure of 4 to 5 millimeters of mercury decreases risk of stroke by 42 percent and heart attacks by about 17 percent; so positive findings would have important clinical and policy implications."

According to the ASA and AHA, cardiovascular disease (CVD), principally heart attacks and strokes, accounts for more than 800,000 deaths, or 40 percent of total mortality, in the U.S. each year and more than 17 million deaths worldwide. In the U.S., CVD is projected to remain the single leading cause of mortality and is rapidly becoming so worldwide. Stroke alone ranks fifth among all causes of death in the U.S., killing nearly 133,000 people annually, and accounts for more than 11 percent of deaths worldwide.

"Now more than ever, Americans and people all over the world are under increased stress, which may adversely affect their health and well-being," said Barbara Schmidt, co-author, teacher, researcher, philanthropist, bestselling author of "The Practice," as well as an adjunct instructor at FAU's Schmidt College of Medicine. "We know that mindfulness decreases stress and I am cautiously optimistic that mindfulness with paced breathing will produce sustained lowering of blood pressure."

Credit: 
Florida Atlantic University

Nature as a model: Researchers develop novel anti-inflammatory substance

Anti-inflammatory substances based on components of human cells could one day improve treatment for patients. Researchers at the Institute of Pharmacy at Martin Luther University Halle-Wittenberg (MLU) have developed a method for producing these substances with controlled quality. Since the body does not recognise them as foreign, they offer advantages over anti-inflammatory drugs such as ibuprofen or diclofenac. The results were published in the "European Journal of Pharmaceutical Sciences".

"We are attempting to imitate nature," explains Professor Karsten Mäder from the Institute of Pharmacy at MLU. These novel anti-inflammatory substances occur naturally within the human body, for example on the inner surface of cells. When a cell dies, it turns inside out, or more precisely, phosphatidylserine (PS), a certain component of its cell membrane does. This gives phagocytes the signal to digest the dead cell. PS also ensures that there is no inflammatory response. Something similar happens in the lungs, which are regularly confronted with a large number of foreign substances after an intake of breath. Here another phospholipid, phosphatidylglycerol (PG), ensures that there is no excessive inflammatory response. Mäder's research group has now prepared both substances so that they can potentially be used as drugs - possible areas of application include infarcts, arthritis and psoriasis.

From a medical standpoint, both phospholipids are of interest to researchers because the body does not recognise them as foreign, which means fewer side effects can be expected. A U.S. study has already shown that PS is particularly effective in fighting inflammation after a heart attack. "However, producing the preparation was a complex process," says Mäder. The research group from Halle has now developed a production process that is much simpler and cheaper. The phospholipids form small particles, less than ten nanometres in size, which enables them to easily undergo sterile filtration. They have also proven to be harmless to cells and blood components. The PG particles produced in this way, in particular, were shown to reduce the inflammatory activity of phagocytes under laboratory conditions. "The results indicate that the phospholipids could work well in anti-inflammatory therapies," says Mäder, summarising the research.

However, several clinical trials are needed before the natural anti-inflammatory substances can be used in humans. The research was supported by the Phospholipid Research Center in Heidelberg.

Credit: 
Martin-Luther-Universität Halle-Wittenberg

Sexual minority men who smoke report worse mental health and more frequent substance use

Cigarette smoking is associated with frequent substance use and poor behavioral and physical health in sexual and gender minority populations, according to Rutgers researchers.

The study, published in the journal Annals of Behavioral Medicine, examined tobacco use by sexual minority men and transgender women to better understand the relationships between smoking, substance use and mental, psychosocial and general health.

The researchers, who are part of the Rutgers School of Public Health's Center for Health, Identity, Behavior and Prevention Studies, surveyed 665 racially, ethnically and socioeconomically diverse sexual minority men and transgender women, 70 percent of whom reported smoking cigarettes.

They found that smoking was associated with participants' race/ethnicity, marijuana and alcohol use and mental health. Current smokers were more likely to be white and reported more days of marijuana use in the past month. The study also found that current smoking was associated with more severe anxiety symptoms and more frequent alcohol use.

"Evidence also tells us that smoking is associated with worse mental health and increased substance use, but we don't know how these conditions are related to each other, exacerbating and mutually reinforcing their effects," said Perry N. Halkitis, dean of the Rutgers School of Public Health and the study's senior author.

LGBTQ+ people are more likely to smoke than their cisgender and heterosexual peers to cope with an anti-LGBTQ+ society, inadequate health care access and decades of targeted tobacco marketing. Those social stressors drive the health disparities they face, which are compounded by a lack of LGBTQ-affirming healthcare providers, research shows.

"Our findings underscore the importance of holistic approaches to tobacco treatment that account for psychosocial drivers of substance use and that address the complex relationships between mental health and use of substances like alcohol, tobacco and marijuana," said Caleb LoSchiavo, a doctoral student at the Rutgers School of Public Health and the study's first author.

The study recommends further research examining the social determinants of disparities in substance use among marginalized populations and how interpersonal and systemic stressors contribute to poorer physical and mental health for minority populations.

Credit: 
Rutgers University