Tech

Scientists shed new light on viruses' role in coral bleaching

image: Pocillopora corals from Mo'orea.

Image: 
(photo by Andrew Thurber, OSU).

CORVALLIS, Ore. - Scientists at Oregon State University have shown that viral infection is involved in coral bleaching - the breakdown of the symbiotic relationship between corals and the algae they rely on for energy.

Funded by the National Science Foundation, the research is important because understanding the factors behind coral health is crucial to efforts to save the Earth's embattled reefs - between 2014 and 2017 alone, more than 75% of the world's reefs experienced bleaching-level heat stress, and 30% suffered mortality-level stress.

The planet's largest and most significant structures of biological origin, coral reefs are found in less than 1% of the ocean but are home to nearly one-quarter of all known marine species. Reefs also help regulate the sea's carbon dioxide levels and are a vital hunting ground that scientists use in the search for new medicines.

Since their first appearance 425 million years ago, corals have branched into more than 1,500 species. A complex composition of dinoflagellates - including the algae symbiont - fungi, bacteria, archaea and viruses make up the coral microbiome, and shifts in microbiome composition are connected to changes in coral health.

The algae the corals need can be stressed by warming oceans to the point of dysbiosis - a collapse of the host-symbiont partnership.

To better understand how viruses contribute to making corals healthy or unhealthy, Oregon State Ph.D. candidate Adriana Messyasz and microbiology researcher Rebecca Vega Thurber of the OSU College of Science led a project that compared the viral metagenomes of coral colony pairs during a minor 2016 bleaching event in Mo'orea, French Polynesia.

Also known as environmental genomics, metagenomics refers to studying genetic material recovered directly from environmental samples, in this case samples taken from a coral reef.

For this study, scientists collected bleached and non-bleached pairs of corals to determine if the mixes of viruses on them were similar or different. The bleached and non-bleached corals shared nearly identical environmental conditions.

"After analyzing the viral metagenomes of each pair, we found that bleached corals had a higher abundance of eukaryotic viral sequences, and non-bleached corals had a higher abundance of bacteriophage sequences," Messyasz said. "This gave us the first quantitative evidence of a shift in viral assemblages between coral bleaching states."

Bacteriophage viruses infect and replicate within bacteria. Eukaryotic viruses infect non-bacterial organisms like animals.

In addition to having a greater presence of eukaryotic viruses in general, bleached corals displayed an abundance of what are called giant viruses. Known scientifically as nucleocytoplasmic large DNA viruses, or NCLDV, they are complex, double-stranded DNA viruses that can be parasitic to organisms ranging from the single-celled to large animals, including humans.

"Giant viruses have been implicated in coral bleaching," Messyasz said. "We were able to generate the first draft genome of a giant virus that might be a factor in bleaching."

The researchers used an electron microscope to identify multiple viral particle types, all reminiscent of medium- to large-sized NCLDV, she said.

"Based on what we saw under the microscope and our taxonomic annotations of viral metagenome sequences, we think the draft genome represents a novel, phylogenetically distinct member of the NCLDVs," Messyasz said. "Its closest sequenced relative is a marine flagellate-associated virus."

The new NCLDV is also present in apparently healthy corals but in far less abundance, suggesting it plays a role in the onset of bleaching and/or its severity, she added.

Credit: 
Oregon State University

Act now on wildfires, global climate change, human health, study says

Immediate actions are needed to limit the greenhouse gas emissions that are driving climate change that helps fuel wildfires, a Monash University study says.

A special report published in the New England Journal of Medicine, led by Professor Yuming Guo and Dr Shanshan Li from the Monash School of Public Health and Preventive Medicine, summarises the enormous impacts of climate change on wildfire seasons and the consequent increases in morbidity, mortality and mental health impacts.

The report, which analysed numerous studies on wildfires over the past 20 years, says global climate change is fueling the three essential conditions for wildfires - fuel, oxygen and an ignition source. The world is seeing inconsistent rainfall, increased drought and hotter temperatures, leading to more flammable vegetation.

It says global mean carbon dioxide (CO2) emissions from wildfires accounted for about 22 per cent of the carbon emissions from burning fossil fuels between 1997 and 2016. The inconsistent approach to global forest management and the conversion of tropical savannas to agricultural lands are damaging the world's ability to absorb CO2 and cool the climate.

The report says projections suggest that if high greenhouse gas emissions continue, wildfire exposure could substantially increase to over 74 per cent of the global land mass by the end of the century.

However, if immediate climate mitigation efforts limit the global mean temperature increase to 2.0°C or 1.5°C, increases in wildfire exposure of 60 per cent and 80 per cent, respectively, could be avoided, the report says.

Reaching the 1.5°C target would require reducing global net CO2 emissions by about 45 per cent from 2010 levels by 2030 and reaching net zero around 2050. The 1.5°C target remains achievable if CO2 emissions decline by 7.6 per cent per year from 2020 to 2030.
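The two figures above rest on simple compound-decline arithmetic, which can be sketched in a few lines of Python (the 7.6 per cent annual figure is the report's; the function name is illustrative):

```python
def remaining_fraction(annual_decline: float, years: int) -> float:
    """Fraction of the starting emissions left after a steady annual percentage decline."""
    return (1 - annual_decline) ** years

# The report's 1.5°C pathway: emissions fall 7.6% per year from 2020 to 2030.
left = remaining_fraction(0.076, 10)
print(f"Share of 2020 emissions remaining in 2030: {left:.1%}")   # about 45%
print(f"Cumulative cut from the 2020 level: {1 - left:.1%}")      # about 55%
```

Note that a steady 7.6 per cent annual decline compounds to roughly a 55 per cent cut over the decade, larger than the 45 per cent target, because that target is measured against the higher 2010 baseline rather than 2020 levels.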

The report says the devastating health impacts are illustrated by several large and - in some cases - unprecedented recent wildfires. These include the 2019-2020 Australian wildfires, the 2019 and 2020 Amazon fires in Brazil, the 2018 and 2020 wildfires in the western US, the 2017-2018 wildfires in British Columbia, Canada, and the ongoing record-breaking wildfires on the US West Coast.

Along with the increased eye irritation, corneal abrasions and respiratory impacts of the smoke, the psychological effects are equally serious, with post-traumatic stress disorder (PTSD), depression and insomnia common. The psychological consequences of wildfire events can persist for years, with children and adolescents particularly vulnerable.

A 20-year study of adults who were exposed to an Australian bushfire disaster as children in 1983 found an increase in psychiatric morbidity in adulthood, and wildfire events have also been associated with subsequent reductions in children's academic performance.

The report says the interaction between wildfires and climate change is likely to form a reinforcing feedback loop, making wildfires and their health consequences increasingly severe, unless we can come together to reduce greenhouse gas emissions.

Credit: 
Monash University

Modelling extreme magnetic fields and temperature variation on distant stars

image: The maps show the heat distribution. The blue regions are cooler - and the yellow regions are hotter.

It describes data taken from the following magnetars: 4U 0142+61, 1E 1547.0-5408, XTE J1810-197, SGR 1900+14

Image: 
University of Leeds

New research is helping to explain one of the big questions that has perplexed astrophysicists for the past 30 years - what causes the changing brightness of distant stars called magnetars.

Magnetars are neutron stars formed in stellar explosions, or supernovae, and they have extremely strong magnetic fields, estimated to be around 100 million million times greater than the magnetic field found on Earth.

The magnetic field generates intense heat and x-rays. It is so strong that it also affects the physical properties of matter, most notably the way heat is conducted through the crust of the star and across its surface, creating the variations in brightness that have puzzled astrophysicists and astronomers.

A team of scientists - led by Dr Andrei Igoshev at the University of Leeds - has developed a mathematical model that simulates how the magnetic field disrupts the otherwise uniform distribution of heat, creating hotter and cooler regions whose temperatures may differ by a million degrees Celsius.

Those hotter and cooler regions emit x-rays of differing intensity - and it is that variation in x-ray intensity that is observed as changing brightness by space-borne telescopes.

The findings are published today (12 October) in the journal Nature Astronomy. The research was funded by the Science and Technology Facilities Council (STFC).

Dr Igoshev, from the School of Mathematics at Leeds, said: "We see this constant pattern of hot and cold regions. Our model - based on the physics of magnetic fields and the physics of heat - predicts the size, location and temperature of these regions - and in doing so, helps explain the data captured by satellite telescopes over several decades and which has left astronomers scratching their heads as to why the brightness of magnetars seemed to vary.

"Our research involved formulating mathematical equations that describe how the physics of magnetic fields and heat distribution would behave under the extreme conditions that exist on these stars.

"To formulate those equations took time but was straightforward. The big challenge was writing the computer code to solve the equations - that took more than three years."

Once the code was written, it then took a super-computer to solve the equations, allowing the scientists to develop their predictive model.

The team used the STFC-funded DiRAC supercomputing facilities at the University of Leicester.

Dr Igoshev said once the model had been developed, its predictions were tested against the data collected by space-borne observatories. The model was correct in ten out of 19 cases.

The magnetars studied as part of the investigation are in the Milky Way, typically about 15,000 light years away.

The other members of the research team were Professor Rainer Hollerbach, also from Leeds, Dr Toby Wood, from the University of Newcastle, and Dr Konstantinos N Gourgouliatos, from the University of Patras in Greece.

Credit: 
University of Leeds

Ultrasound screening may be limited in ability to predict perinatal complications

Delivering a newborn with macrosomia (weighing more than 8 pounds, 13 ounces at birth) may be associated with higher risk of adverse outcomes, including perinatal death and injuries related to traumatic delivery, such as stuck shoulders (shoulder dystocia). A study in PLOS Medicine by Gordon Smith at the University of Cambridge and colleagues suggests that third trimester fetal ultrasound screening can identify pregnancies likely to deliver a macrosomic infant, but may be limited in predicting the associated complications.

The diagnostic effectiveness of ultrasound screening in predicting the delivery of a macrosomic infant, shoulder dystocia and associated neonatal morbidity is not well established. To better understand the relationship between estimated fetal weight (EFW), macrosomia, and perinatal complications, researchers systematically reviewed the literature from four different clinical databases. The authors then analyzed 41 studies involving 112,034 non-high risk patients who had undergone a third trimester ultrasound screening as part of universal screening.

The authors found that a third trimester ultrasonic EFW showing increased risk of a large baby reliably predicted delivery of a macrosomic infant. However, a larger EFW was not strongly associated with the risk of shoulder dystocia in low and medium-risk pregnancies. The study was limited by variation in included studies representing differences in screening in various countries.

According to the authors, "We recommend caution prior to introducing universal third trimester screening for macrosomia as it would increase the rates of intervention, with potential iatrogenic harm, without clear evidence that it would reduce neonatal morbidity".

Credit: 
PLOS

A call for more comprehensive smoking cessation programs for cancer patients who smoke

image: In an editorial published in JAMA, UNC Lineberger's Adam Goldstein, MD, MPH, director of the UNC Tobacco Treatment Programs and professor in the UNC School of Medicine Department of Family Medicine, and his co-authors called for more funding and better reimbursement for smoking cessation counseling for cancer patients who smoke.

Image: 
UNC Lineberger Comprehensive Cancer Center

CHAPEL HILL, N.C.--Programs designed to help cancer patients stop using tobacco should be considered as important and impactful as providing the right drug at the right time and at the right dose to patients, according to researchers at the University of North Carolina Lineberger Comprehensive Cancer Center.

In an editorial published in JAMA, UNC Lineberger's Adam Goldstein, MD, MPH, director of the UNC Tobacco Treatment Programs and professor in the UNC School of Medicine Department of Family Medicine, and his co-authors called for more funding and better reimbursement for smoking cessation counseling for cancer patients who smoke.

Research has shown that providing intensive smoking cessation counseling to newly diagnosed cancer patients who smoke is associated with improved quality of life, fewer complications related to cancer treatments and longer survival. In addition, a study published last year found some cancer treatments were less effective in people who smoke, resulting in significantly greater costs for subsequent cancer treatments.

"Since cancer patients often get expensive chemotherapy for months to years to help cure their cancer, six months of inexpensive intensive tobacco cessation support at the time of diagnosis is scientifically proven, common sense and improves all outcomes of cancer care," said Goldstein.

Kimberly Shoenbill, MD, PhD, assistant professor of Family Medicine and a member of the Program on Health and Clinical Informatics at UNC School of Medicine, and Trevor Jolly, MBBS, assistant professor of medicine and UNC Lineberger member, are the editorial's two other authors.

Their editorial accompanied a study by Elyse Park, PhD, and colleagues that compared the impact of an intensive vs standard cancer center smoking cessation intervention, with different levels of intensity and frequency of counseling, support and medications. The study showed patients who completed the study and were part of the intensive treatment group - which had weekly, biweekly and monthly telephone counseling sessions and free cessation medication - achieved higher seven-day abstinence rates at six-month follow-up (34.5%) compared to patients in the standard treatment group (21.5%).

Goldstein said the study demonstrates for the first time the value and necessity of providing intensive smoking cessation counseling, of up to eight sessions per quit attempt, as a standard for cancer care. These findings, he said, should serve as a wake-up call for hospitals, cancer centers, physicians and payers.

"Excellence in cancer care is defined by a great team, delivering great care, with everyone focused on the patient and their family. A great cessation program in cancer centers requires unequivocal buy-in at all levels," Goldstein said. "You cannot have a great cancer hospital today without a great cessation program for cancer patients."

There are a number of challenges that cancer centers may experience in implementing intensive smoking cessation programs. These include a reluctance by oncologists to provide the intensive counseling - which may be due, in part, to time constraints and a lack of smoking cessation training - and insufficient reimbursement for counseling services.

"Medicare, Medicaid and most private insurers usually pay less than $100 for only four total counseling sessions per quit attempt, yet they will readily pay over $100,000 a year for drugs to treat cancer. Current fee-for-service reimbursement does not begin to cover the cost of providing intensive cessation counseling," Goldstein said. This reimbursement model is shortsighted, he noted, because the expense of providing intensive smoking cessation counseling results in immediate cost savings and improved health in the long run.

"For every $1 invested in a cancer center cessation program, $6 in savings from reduced future costs likely occurs, making intensive cessation counseling perhaps the most cost-effective, cheapest and safest cancer treatment currently possible," Goldstein said.

"Health care systems are rapidly changing to value-based care. As payers increasingly track outcomes of care, when they compare patients treated in health care systems that offer intensive cessation counseling that result in patients quitting smoking, compared to systems where they receive neither, the choice is obvious."

Credit: 
UNC Lineberger Comprehensive Cancer Center

Layer of strength, layer of functionality for biomedical fibers

image: Aligned core-sheath fiber arrays mass-produced by pressurized gyration, a technique invented at UCL Mechanical Engineering. The core and sheath of the fibers are clearly distinguishable by electron microscopy.

Image: 
Sunthar Mahalingam

WASHINGTON, October 13, 2020 -- Wound dressing, tissue scaffolding, controlled and sustained drug delivery, and cardiac patching are all biomedical processes requiring a material that combines strength with functionality. Core-sheath polymer fibers, fibers comprised of a strong core surrounded by a biologically applicable sheath layer, are an affordable way to meet these requirements.

In the journal Applied Physics Reviews, from AIP Publishing, researchers from University College London discuss methods of producing core-sheath polymer fibers and their promising applications. These fibers are beneficial because they bypass the need to search for a single material that meets all the necessary criteria, which can be next to impossible.

"You want strength, but you also want bioactivity," said Mohan Edirisinghe, one of the authors on the paper and Bonfield Chair of Biomaterials at UCL Mechanical Engineering. "So, if you align them in a core-sheath polymer, you have the strength of the core material, but the functionality comes from a bioactive polymer or ingredient that is in the sheath. That is a big advantage."

By carefully selecting the inner and outer layer materials, the physical properties of the fiber can be tuned for a variety of biomedical applications. For example, beyond the previously mentioned uses, core-sheath polymer fibers also have exciting potential in helping to combat the coronavirus pandemic.

"If you want to make a fibrous mask from a textile, you really need to have the strength, because you're going to wash it and use it," Edirisinghe said. "But on the other hand, you need an active material."

In this case, the active sheath material can be an antiviral agent like copper. Other components, like drugs or proteins, can also be embedded within the sheath layer during the manufacturing process, adding additional material capabilities.

"You don't want to waste a good material's functionality inside the material. You'll never see the benefit of it," said Edirisinghe.

Benefits aside, creating core-sheath polymer fibers can be difficult, requiring custom-made processing architectures and biologically relevant materials. Edirisinghe and his team have developed several manufacturing techniques that can be modified as appropriate.

A reservoir containing the core material is embedded within another reservoir containing the sheath material. The two materials are jetted out through the vessel's orifices simultaneously, creating a fiber in which the core material is surrounded by the sheath material.

"This is just the tip of the iceberg, because this is just two reservoirs with two materials, which become the sheath and core layers of the fibers, but you can extend this to three or four," Edirisinghe said. "In each layer, you can have a different drug that satisfies a different purpose."

Credit: 
American Institute of Physics

Long-term, frequent phone counseling helps cancer patients who smoke quit

BOSTON - Recently diagnosed cancer patients who smoke are significantly more likely to quit and remain tobacco-free if they receive frequent and sustained telephone counseling, according to a new study led by researchers at Massachusetts General Hospital (MGH). The study, published in JAMA, offers hope that these patients will respond better to treatment and enjoy improved quality of life while coping with cancer.

Studies suggest that between 10 percent and 30 percent of tobacco users who are diagnosed with cancer continue smoking. Moreover, half of those who stop relapse quickly. Cancer patients are not routinely referred to smoking-cessation programs, says psychologist Elyse R. Park, PhD, MPH, director of behavioral sciences at MGH's Tobacco Research and Treatment Center, and the lead author of the JAMA paper. Many clinicians may be reluctant to discuss the topic or may only recommend quitting to patients who have a smoking-related disease, such as lung cancer. However, explains Park, smoking can diminish the effectiveness of cancer treatment, as well as quality of life, for people with all forms of cancer.

"Cancer patients need help in order to achieve the best cancer treatment outcomes," says Park. Overall, fewer than 10 percent of smokers who attempt to quit succeed, according to the Centers for Disease Control and Prevention. Telephone counseling administered by certified tobacco treatment counselors has been shown to be an effective way to help smokers give up the habit, but most programs are relatively short term and focused on cancer prevention. Park and her team at MGH joined forces with psychologist Jamie S. Ostroff, PhD, and her colleagues at Memorial Sloan Kettering Cancer Center in New York City to investigate whether a sustained, more intensive program would improve quit rates among cancer patients.

The study included 303 patients with various forms of cancer. Half of the participants received weekly phone calls from counselors for a month; during these sessions, counselors used evidence-based behavioral strategies designed to motivate the patient to continue abstaining from tobacco and cope with the stress of a cancer diagnosis. Patients also received advice about medications to aid smoking cessation, such as nicotine-replacement therapy. The intensive-treatment group received a similar schedule of telephone counseling, plus an additional four sessions every other week for two months, and three monthly booster sessions. Patients in this group were also offered free smoking-cessation medications.

After six months, lab tests confirmed that 34.5 percent of the patients in the intensive-treatment program were abstaining from tobacco, compared to 21.5 percent of participants who received the shorter program with less frequent counseling. "We believe it was the ongoing, positive cessation support, in coordination with the oncology care these patients received, that led to the high success rate in the intensive-treatment group," says Park.

Park and her colleagues are now implementing this intensive tobacco treatment program at more than 40 community cancer centers around the United States. "This randomized trial demonstrated that tobacco treatment can and should be an integral part of comprehensive cancer care," says Park. "Cessation advice and support should start immediately, at the time of diagnosis, and it should be offered to all cancer patients, regardless of tumor type and stage of disease."

Credit: 
Massachusetts General Hospital

Young women who suffer a heart attack have worse outcomes than men

image: Figure showing risk factors, and clinical presentation, management and outcomes for men and women

Image: 
European Heart Journal

Women aged 50 or younger who suffer a heart attack are more likely than men to die over the following 11 years, according to a new study published today (Wednesday) in the European Heart Journal [1].

The study found that, compared to men, women were less likely to undergo therapeutic invasive procedures after admission to hospital with a heart attack or to be treated with certain medical therapies upon discharge, such as aspirin, beta-blockers, ACE inhibitors and statins.

The researchers, led by Ron Blankstein, professor of medicine at Harvard Medical School and a preventive cardiologist at Brigham and Women's Hospital, Boston, USA, found no statistically significant differences between men and women for deaths while in hospital, or from heart-related deaths during an average of more than 11 years' follow-up. However, women had a 1.6-fold increased risk of dying from other causes during the follow-up period.

Prof. Blankstein said: "It's important to note that overall most heart attacks in people under the age of 50 occur in men. Only 19% of the people in this study were women. However, women who experience a heart attack at a young age often present with similar symptoms as men, are more likely to have diabetes, have lower socioeconomic status and ultimately are more likely to die in the longer term."

The researchers looked at 404 women and 1693 men who had a first heart attack (a myocardial infarction) between 2000 and 2016 and were treated at the Brigham and Women's Hospital and Massachusetts General Hospital in the US. During a myocardial infarction the blood supply to the heart is blocked suddenly, usually by a clot, and the lack of blood can seriously damage the heart muscle. Treatments can include coronary angiography, in which a catheter is inserted into a blood vessel to inject dye so that an X-ray image can show if any blood vessels are narrowed or blocked, and coronary revascularisation, in which blood flow is restored by inserting a stent to keep the blood vessel open or by bypassing the blocked segment with surgery.

The median age was 45, and 53% (1121) had ST-elevation myocardial infarction (STEMI), a type of heart attack in which there is a prolonged interruption to the blood supply caused by a total blockage of the coronary artery. Despite being a similar age, women were less likely than men to have STEMI (46.3% versus 55.2%), but more likely to have non-obstructive coronary disease. The most common symptom for both sexes was chest pain, which occurred in nearly 90% of patients, but women were more likely to have other symptoms as well, such as difficulty breathing, palpitations and fatigue.

Prof. Blankstein said: "Among patients who survived to hospital discharge, there was no significant difference in deaths from cardiovascular problems between men and women. Cardiovascular deaths occurred in 73 men and 21 women, 4.4% versus 5.3% respectively, over a median follow-up time of 11.2 years. However, when excluding deaths that occurred in hospital, there were 157 deaths in men and 54 death in women from all causes during the follow-up period: 9.5% versus 13.5% respectively, which is a significant difference, and a greater proportion of women died from causes other than cardiovascular problems, 8.4% versus 5.4% respectively, 30 women and 68 men. After adjusting for factors that could affect the results, this represents a 1.6-fold increased risk of death from any cause in women."

Women were less likely to undergo invasive coronary angiography (93.5% versus 96.7%) or coronary revascularisation (82.1% versus 92.6%). They were less likely to be discharged with aspirin (92.2% versus 95%), beta-blockers (86.6% versus 90.3%), angiotensin-converting enzyme inhibitors (ACE inhibitors) or angiotensin receptor blockers (53.4% versus 63.7%) and statins (82.4% versus 88.4%).

The study is the first to examine outcomes following heart attack in young men and women over such a long follow-up period. It shows that even after adjusting for differences in risk factors and treatments, women are more likely to die from any cause in the longer term. The researchers are unsure why this could be. Despite finding no significant difference in the overall number of risk factors, they wonder whether some factors, such as smoking, diabetes and psychosocial risk factors might have stronger adverse effects on women than on men, which overcome the protective effect of the oestrogen hormone in women.

Prof. Blankstein added: "As fewer women had STEMI and more had non-obstructive myocardial infarction, they are less likely to undergo coronary revascularisation or to be given medications such as dual anti-platelet therapy, which is essential after invasive heart procedures. Also, the absence of obstructive coronary artery disease may raise uncertainty regarding the diagnosis and whether such individuals truly had a myocardial infarction or have elevated enzymes due to other causes.

"While further studies will be required to evaluate the underlying reasons for these differences, clinicians need to evaluate and, if possible treat, all modifiable risk factors that may affect deaths from both cardiovascular and non-cardiovascular events. This could lead to improved prevention, ideally before, but in some cases, after a heart attack. We plan further research to assess underlying sex-specific risk factors that may account for the higher risk to women in this group, and which may help us understand why they had a heart attack at a young age."

In an accompanying editorial [2], Dr Marysia Tweet, assistant professor of medicine at the Mayo Clinic College of Medicine and Science, Minnesota, USA, writes that "it is essential to aggressively address traditional cardiovascular risk factors in young AMI [acute myocardial infarction] patients, especially among young women with AMI and a high burden of comorbidities. Assessing clinical risk and implementing primary prevention is imperative, and non-traditional risk factors require attention, although not always addressed". Examples include a history of pre-eclampsia, gestational diabetes and ovary removal, and she points out that depression was twice as common among women in this study compared to men; "young women with depression are six times more likely to have coronary heart disease than women without depression".

She concludes: "This study . . . demonstrates the continued need - and obligation - to study and improve the incidence and mortality trajectory of cardiovascular disease in the young, especially women. We can each work towards this goal by increasing awareness of heart disease and 'heart healthy' lifestyles within our communities; engaging with local policy makers; promoting primary or secondary prevention efforts within our clinical practices; designing studies that account for sex differences; facilitating recruitment of women into clinical trials; requesting sex-based data when reviewing manuscripts; and reporting sex differences in published research."

Limitations of the research include: the researchers were unable to account for some potential factors that might be associated with patient outcomes or management, such as patient preferences or psychosocial factors; there were no data about whether patients continued to take their prescribed medications, or on sex-specific risk factors, such as problems relating to pregnancy; the small number of women in the study may have affected the results; and deaths before reaching hospital were not counted.

Credit: 
European Society of Cardiology

The Great Barrier Reef has lost half its corals

image: The Great Barrier Reef has lost half its corals in the past three decades. As more complex coral structure is lost, so too are the habitats for fish.

Image: 
Andreas Dietzel.

A new study of the Great Barrier Reef shows populations of its small, medium and large corals have all declined in the past three decades.

Lead author Dr Andy Dietzel, from the ARC Centre of Excellence for Coral Reef Studies (CoralCoE), says that while there are centuries of studies on changes in the structure of human populations--or, in the natural world, tree populations--there is still no equivalent information on changes in coral populations.

"We measured changes in colony sizes because population studies are important for understanding demography and the corals' capacity to breed," Dr Dietzel said.

He and his co-authors assessed coral communities and their colony size along the length of the Great Barrier Reef between 1995 and 2017. Their results show a depletion of coral populations.

"We found the number of small, medium and large corals on the Great Barrier Reef has declined by more than 50 percent since the 1990s," said co-author Professor Terry Hughes, also from CoralCoE.

"The decline occurred in both shallow and deeper water, and across virtually all species--but especially in branching and table-shaped corals. These were the worst affected by record-breaking temperatures that triggered mass bleaching in 2016 and 2017," Prof Hughes said.

The branching and table-shaped corals provide the structures important for reef inhabitants such as fish. The loss of these corals means a loss of habitat, which in turn diminishes fish abundance and the productivity of coral reef fisheries.

Dr Dietzel says one of the major implications of coral size is its effect on survival and breeding.

"A vibrant coral population has millions of small, baby corals, as well as many large ones-- the big mamas who produce most of the larvae," he said.

"Our results show the ability of the Great Barrier Reef to recover--its resilience--is compromised compared to the past, because there are fewer babies, and fewer large breeding adults."

The authors of the study say better data on the demographic trends of corals is urgently needed.

"If we want to understand how coral populations are changing and whether or not they can recover between disturbances, we need more detailed demographic data: on recruitment, on reproduction and on colony size structure," Dr Dietzel said.

"We used to think the Great Barrier Reef is protected by its sheer size--but our results show that even the world's largest and relatively well-protected reef system is increasingly compromised and in decline," Prof Hughes said.

Climate change is driving an increase in the frequency of reef disturbances such as marine heatwaves. The study records steeper declines of coral colonies in the Northern and Central Great Barrier Reef after the mass coral bleaching events of 2016 and 2017. The southern part of the reef was also exposed to record-breaking temperatures in early 2020.

"There is no time to lose--we must sharply decrease greenhouse gas emissions ASAP," the authors conclude.

Credit: 
ARC Centre of Excellence for Coral Reef Studies

Study first to tally biomass from oceanic plastic debris using visualization method

image: Diatoms colonizing polypropylene after one week of incubation in the ocean. Note the green chloroplasts in the football-shaped diatom frustules (the glass cell walls, stained red). At such high coverage, diatoms can make plastics sink because of their dense glass cell walls. Cellular DNA appears blue due to the stain that intercalates between DNA base pairs. The blue dotted lines are filamentous bacteria consisting of chains of cells; each blue dot represents a single cell with a bundle of DNA at its center. The scale bar is 20 microns long.

Image: 
Shiye Zhao, Ph.D.

Trillions of plastic debris fragments are afloat at sea, creating the "perfect storm" for microbial colonization. Introduced more than 50 years ago, plastic substrates are a novel microbial habitat in the world's oceans. This "plastisphere" consists of a complex community of bacterial, archaeal and eukaryotic microorganisms and microscopic animals.

These unnatural additions to sea surface waters, and the large quantity of cells and biomass carried by plastic debris, have the potential to impact biodiversity, ecological functions and biogeochemical cycles within the ocean. Biofilm formation in the marine environment - the growth of a collective of one or more types of microorganisms on a surface - is a complex process involving many variables.

While several studies have surveyed microbial diversity and quantified specific members of these biofilm habitats, a new study is the first to holistically quantify total cell inventories under in situ conditions. The study differs fundamentally from earlier work in its use of relatively unbiased visualization methods to arrive at a quantitative estimate of biomass - the first of its kind.

Researchers from Florida Atlantic University's Harbor Branch Oceanographic Institute and Harriet L. Wilkes Honors College, in collaboration with Utrecht University, Netherlands, the University of Amsterdam, and The Royal Netherlands Institute for Sea Research (NIOZ), examined cell abundances, size, cellular carbon mass, and how photosynthetic cells differ on polymeric and glass substrates over time. They also investigated nanoparticle generation from plastics such as polystyrene, which is known to disintegrate into nanoparticles under sunlight and ultraviolet radiation, and how this might disrupt microalgae.

Results of the study, published in the ISME Journal, a monthly publication of the International Society for Microbial Ecology, are based on measurements of the average microbial biomass carrying capacity of different plastic polymers and, by extension, of plastic marine debris in the global ocean. Conservative estimates suggest that about 1 percent of microbial cells in the ocean surface microlayer inhabit plastic debris globally. This mass of cells would not exist if plastic debris were not in the ocean, and it therefore represents a disruption of the proportions of native flora in that habitat.
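
The global figure above comes from scaling a per-area carrying capacity up to the whole ocean surface. A back-of-envelope sketch of that kind of scaling is below; every input value is an illustrative placeholder, not data from the study:

```python
# Back-of-envelope sketch of a plastic-attached cell fraction estimate.
# All numbers below are hypothetical placeholders, NOT the study's data.

plastic_area_m2 = 1.0e11       # assumed total surface area of floating plastic debris
cells_per_m2 = 1.0e10          # assumed average biofilm carrying capacity per m^2
ocean_area_m2 = 3.6e14         # approximate global ocean surface area
microlayer_depth_m = 1.0e-3    # assumed surface-microlayer thickness (~1 mm)
cells_per_m3 = 1.0e12          # assumed free-living cell density in the microlayer

cells_on_plastic = plastic_area_m2 * cells_per_m2
cells_in_microlayer = ocean_area_m2 * microlayer_depth_m * cells_per_m3

# Fraction of surface-microlayer cells that live attached to plastic
fraction = cells_on_plastic / (cells_on_plastic + cells_in_microlayer)
print(f"plastic-attached fraction: {fraction:.2%}")
```

With different assumed inputs the fraction shifts accordingly; the study's actual estimate rests on its measured carrying capacities.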

"In the open ocean, nutrients are limiting. Just like we need to put fertilizer on a garden, microorganisms in the ocean are limited by nitrogen, iron or phosphorous depending upon where they are -- except in the open ocean, there is typically no fertilizer, so something has to die for another organism to live," said Tracy Mincer, Ph.D., lead author and an assistant professor of biology/bio-geochemistry at FAU's Harbor Branch and Wilkes Honors College. "With the advantage of a surface, which concentrates nutrients, organisms colonizing plastics in the ocean are taking up those limiting nutrients that normally would have been consumed or out-competed by free-living microbes. So essentially, these microbes on plastics are taking habitat space away and represent the beginning of a regime shift for these habitats."

Using confocal laser scanning microscopy with sophisticated imaging software, researchers directly obtained data ranging from cell counts, size and the characterization of microbial morphotypes to complete three-dimensional constructs. They tested a range of chemically distinct substrates that included polypropylene, polystyrene, polyethylene and glass. Polypropylene is used by the automotive industry, for consumer goods such as packaging, industrial applications and the furniture market; polystyrene is used to make clear products like food packing or laboratory equipment; and polyethylene is the most widely used plastic in the world ranging from products such as clear food wrap to shopping bags to detergent bottles.

Data from the confocal laser scanning microscopy showed that early biofilms displayed a high proportion of diatoms (unicellular eukaryotic microalgae that have cell walls made of glass). These diatoms could play a key role in the sinking of plastic debris. Unexpectedly, plastic substrates appeared to reduce the growth of photosynthetic cells after eight weeks compared to glass.

"The quantification of cell numbers and microbial biomass on plastic marine debris is crucial for understanding the implications of plastic marine debris on oceanic ecosystems," said Shiye Zhao, Ph.D., first author and a post-doctoral fellow at FAU's Harbor Branch. "Future efforts should focus on how this biomass fluctuates with season and latitude and its potential to perturb the flux of nutrients in the upper layers of the ocean."

Credit: 
Florida Atlantic University

Modeling organic-field effect transistors with a molecular resolution

image: In an OFET, the charge carriers move through the organic semiconductor within a "channel" located at the interface with a dielectric material. Here, the dielectric is represented by the gray grid. The figure illustrates the calculated impact of the roughness of the dielectric surface (showing a 50 nm × 50 nm area) on the averaged carrier occupation. The carrier occupation probabilities are represented in blue and indicate that the carriers essentially move within the "valleys" of the dielectric surface (adapted from Adv. Funct. Mater., 2018, 28, 1803096).

Image: 
©Science China Press

Field-effect transistors are key components of sensors, electrical circuits, or data storage devices. The transistors used to date have been mainly based on inorganic semiconductors such as silicon. More recently, organic materials have emerged, with semiconducting properties that have allowed the fabrication of organic field-effect transistors (OFETs). The use of organic components as the device active layer brings promising features such as easy processing and low cost. In addition to their device functionalities, OFETs have also developed into an important platform in the basic characterization of organic semiconductors, as they are now established as a useful tool to measure charge-carrier mobilities. Thus, providing a comprehensive description of OFET device performance becomes a key step in furthering the development of these devices and designing more efficient organic semiconductors. At the core of these investigations lie the device models, which provide the relationships between the measured current densities and the semiconducting properties of the organic materials. Needless to say, it is imperative that these OFET device models be accurate and reliable.

In an overview published in the Beijing-based National Science Review, scientists at the University of Arizona in the United States discuss recent advances in OFET device models that incorporate molecular-level parameters. In particular, they highlight the development of kinetic Monte Carlo-based device simulation methods and their successful application to the modeling of micrometer-sized OFETs. They also outline the paths required for further improvement of these molecular-level models for OFETs.

"In spite of the major differences in the charge-transport mechanisms of organic and inorganic semiconductors, it turns out that until recently the prevalent OFET device models were directly borrowed from those originally developed for FETs based on inorganic materials", these scientists state in their review article entitled "Developing Molecular-Level Models for Organic Field-Effect Transistors." They emphasize that: "Optimally, OFET device models should include factors such as the presence of discrete molecular levels, disorder, anisotropy, traps, grain boundaries, complex film morphology, and contact resistance. These factors are difficult to include as long as the organic semiconductor film is treated as a continuum medium. In other words, nano-scale, molecular-level details need to be incorporated into OFET device models."

In recent years, kinetic Monte Carlo-based methods have seen very substantial developments, which now allow efficient modeling of OFETs with molecular resolution. These new models have opened the way to a deeper understanding of OFET device physics and provide the ability to connect microscopic processes directly to macroscopic device performance. They have been successfully applied to describe fundamental aspects of OFETs such as the actual thickness of the effective channel and the impact of the dielectric surface morphology, as well as the issue of nonlinear current characteristics encountered more recently.
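
To give a flavor of the kinetic Monte Carlo approach, here is a minimal sketch of carrier hopping on a 1-D chain of molecular sites with Gaussian energetic disorder, using Miller-Abrahams rates. It is a toy illustration of the general technique, not the micrometer-scale device simulators discussed in the review; all parameter values are illustrative:

```python
import math
import random

random.seed(1)
kT = 0.025      # thermal energy in eV (~room temperature)
nu0 = 1.0e12    # attempt frequency in 1/s (assumed)
sigma = 0.1     # Gaussian disorder strength in eV (assumed)
n_sites = 200
energies = [random.gauss(0.0, sigma) for _ in range(n_sites)]

def rate(i, j):
    """Miller-Abrahams hopping rate from site i to site j."""
    dE = energies[j] - energies[i]
    return nu0 * (math.exp(-dE / kT) if dE > 0 else 1.0)

site, t = 0, 0.0
for _ in range(10000):
    neighbors = [n for n in (site - 1, site + 1) if 0 <= n < n_sites]
    rates = [rate(site, n) for n in neighbors]
    total = sum(rates)
    t += -math.log(random.random()) / total   # KMC waiting time
    r, acc = random.random() * total, 0.0     # pick next hop by rate
    for n, k in zip(neighbors, rates):
        acc += k
        if r <= acc:
            site = n
            break

print(f"final site {site}, elapsed time {t:.3e} s")
```

Real OFET simulators extend this idea to three dimensions with electrostatics, traps, grain boundaries and contact injection, which is what makes them so much more demanding than continuum models.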

The University of Arizona scientists forecast that: "Through such continuous developments, molecular-level OFET device models will become an increasingly useful platform in the investigation of OFET devices and serve as a complementary tool for routine data analysis".

Credit: 
Science China Press

Wearable IT devices: Dyeing process gives textiles electronic properties

image: Polymerized glove that can be used to digitally capture hand movements.

Image: 
Oliver Dietze

Computer scientists at Saarland University have shown how textiles with electronic functionality can be produced in a comparatively easy way, opening up new use cases.

"Our goal was to integrate interactive functionalities directly into the fibers of textiles instead of just attaching electronic components to them," says Jürgen Steimle, computer science professor at Saarland University. In his research group on human-computer interaction at Saarland Informatics Campus, he and his colleagues are investigating how computers and their operation can be integrated as seamlessly as possible into the physical world. This includes the use of electro-interactive materials.

Previous approaches to the production of these textiles are complicated and influence the haptics of the material. The new method makes it possible to convert textiles and garments into e-textiles, without affecting their original properties - they remain thin, stretchable and supple. This creates new options for quick and versatile experimentation with new forms of e-textiles and their integration into IT devices.

"Especially for devices worn on the body, it is important that they restrict movement as little as possible and at the same time can process high-resolution input signals", explains Paul Strohmeier, one of the initiators of the project and a scientist in Steimle's research group. To achieve this, the Saarbrücken researchers are using the in-situ polymerization process. Here, the electrical properties are "dyed" into the fabric: a textile is subjected to a chemical reaction in a water bath, known as polymerization, which makes it electrically conductive and sensitive to pressure and stretching, giving it so-called piezoresistive properties. By "dyeing" only certain areas of a textile or polymerizing individual threads, the computer scientists can produce customized e-textiles.

In their test runs, the researchers have produced gloves that can digitally capture hand movements, a zipper that transmits different electric currents depending on the degree of opening, and sports tapes that act as user interfaces that are attached to the body.

Also, materials other than textiles can be treated with the process. Audrey Briot, an artist from Paris, has created an evening gown with touch-sensitive feathers that generate sounds via a computer when touched. She polymerized the feathers using the Saarbrücken computer scientists' method. The dress was nominated for the STARTS Prize of the European Commission.

A scientific paper on the process entitled "PolySense: Augmenting Textiles with Electrical Functionality using In-Situ Polymerization" was written by the Human-Computer Interaction Research Group at the Saarland Informatics Campus at Saarland University. Participating researchers from Saarland University were Prof. Dr. Jürgen Steimle, Dr. Paul Strohmeier, Dr. Marc Teyssier and Dr. Bruno Fruchard. Also involved were Cedric Honnet (MIT Media Lab), Hannah Perner-Wilson (Kobakant) and Dr. Ana C. Baptista (CENIMAT/I3N, New University of Lisbon). The paper was published in 2020 at the world's largest conference in this field of research, the 'ACM Conference on Computer Human Interaction (CHI)'.

Credit: 
Saarland University

Physical activity in the morning could be most beneficial against cancer

One potential cause of cancer is circadian disruption - the misalignment of environmental cues (light, food intake, etc.) and our endogenous circadian rhythms. It is well established that regular physical activity throughout life can reduce cancer risk. This protective effect could be greatest when physical activity is done in the morning - this is the main result of a recent study coordinated by the Barcelona Institute for Global Health (ISGlobal), a centre supported by the "la Caixa" Foundation, together with the Department of Epidemiology at the Medical University of Vienna.

Most studies on circadian disruption and cancer risk have focused on night-shift work. Recent studies suggest that exposure to light at night and late food intake may play a role in the etiology of cancer. To date, however, it has remained unknown whether the timing of physical activity could influence cancer risk through circadian disruption.

To address this question, the researchers examined the effect of the timing of recreational physical activity on breast and prostate cancer risk in a population-based case-control study. They hypothesized that the protective effect of lifelong physical activity against cancer could be stronger when the activity is done in the morning. They based their hypothesis on the results of an experimental study showing that physical activity in the afternoon and evening can delay the production of melatonin, a hormone produced mainly during the night and with well-known oncostatic properties.

The analysis included 2,795 participants of the multicase-control (MCC-Spain) study in Spain. The researchers found that the protective effect of lifelong physical activity against breast and prostate cancer was stronger when the activity was regularly done in the morning (8-10 am). In men, the effect was similarly strong for evening activity (7-11 pm). Results were unchanged when the timing of the most strenuous physical activity was considered instead. Effects differed across chronotypes - the preference for sleeping and being active at a certain time of day. Early morning activity (8-10 am) seemed especially protective for late chronotypes, people who generally prefer to be active towards the evening.
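
In a case-control design like MCC-Spain, effects of this kind are usually expressed as odds ratios comparing exposure (here, morning activity) between cases and controls. The sketch below shows the standard calculation on a 2×2 table; the counts are made up purely for illustration and are not the study's data:

```python
import math

# Hypothetical 2x2 table: morning-activity exposure among cases and controls.
cases_exposed, cases_unexposed = 120, 180
controls_exposed, controls_unexposed = 200, 150

# Cross-product odds ratio; OR < 1 suggests a protective association.
odds_ratio = (cases_exposed * controls_unexposed) / (cases_unexposed * controls_exposed)

# 95% confidence interval via the log-odds standard error (Woolf's method)
se = math.sqrt(1/cases_exposed + 1/cases_unexposed
               + 1/controls_exposed + 1/controls_unexposed)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Real analyses additionally adjust for confounders (age, chronotype, lifestyle) with logistic regression rather than a raw 2×2 table.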

In their paper, which was published in the International Journal of Cancer, the epidemiologists discuss how physical activity may influence human circadian rhythms and suggest possible biological mechanisms (e.g. alteration of melatonin and sex hormone production, nutrient metabolism etc.).

Overall, the findings of this study indicate that "time of day of physical activity is an important aspect that may potentiate the protective effect of physical activity on cancer risk", commented Manolis Kogevinas, Scientific Director of the Severo Ochoa Distinction at ISGlobal and coordinator of the study. "These results, if confirmed, may improve current physical activity recommendations for cancer prevention. What is clear is that everyone can reduce their cancer risk simply by being moderately physically active for at least 150 minutes each week", he added.

Credit: 
Barcelona Institute for Global Health (ISGlobal)

Mental accounting is impacting sustainable behavior

Mental accounting is a concept that describes the mental processes we employ to organise our resource use. Human beings tend to create separate mental budget compartments where specific acts of consumption and payments are linked. This mechanism can be counter-productive when it comes to energy consumption and can have a negative impact on attempts to reduce carbon emissions. Psychologists from the University of Geneva (UNIGE), working in collaboration with the University of Applied Sciences and Arts in Western Switzerland (HES-SO Valais), have published a perspective in the highly influential journal Nature Energy. The article links theories and research on mental accounting to energy and sustainability behaviour, proposing concrete strategies to improve the impact of climate-control measures.

Mental accounting, a concept known to psychology researchers since the 1980s, describes how the human mind functions when performing acts of consumption. For instance, someone who has bought a cinema ticket in advance but who cannot find it on entering the cinema will typically not buy a second ticket: their film budget has already been spent! This example illustrates our tendency to mentally segment our budgets and link them to specific acts of consumption. «These basic cognitive mechanisms can help us better understand unsustainable behaviour. If they are taken into account, they could be used to fine-tune the way policy instruments are designed to fight climate change, improve prevention and promote sustainable behaviour», begins Tobias Brosch, professor in psychology of sustainable development at UNIGE's Faculty of Psychology and Educational Sciences and the Swiss Centre for Affective Sciences. «For this article, we used the currently ongoing discussions around the carbon tax to illustrate the impact of three mechanisms of mental accounting on behaviour and to propose ways to circumvent this impact».

Justifications, rebounds and labels

The spillover effect refers to the fact that we tend to justify one behaviour by another of our behaviours. «Someone who makes the effort to cycle to work every day will use this argument to justify, to himself or others, buying a plane ticket to go on holiday to the Seychelles. A possible intervention strategy to prevent this is to encourage people to create differentiated mental accounts using targeted messages», states the psychologist.

The rebound effect explains how actions can induce a negative energy balance when people fail to adapt their budgets to a new situation. For example, people who buy an energy-efficient car may feel inclined to use it more often, cancelling out potential energy savings. To tackle this phenomenon, the psychologists suggest informing people about the real energy costs of their new car so they can update their consumption budget.

Our minds create mental accounts with precise labels. The mental account that is opened when we receive a sum of money in a specific context determines what the money will be spent on. «A monetary gift received for a birthday will be labelled 'pleasure', and will most likely be spent on pleasurable experiences», says Professor Brosch by means of illustration. This can be problematic in the context of sustainable decision-making. For instance, the financial returns on solar panels installed at home appear only indirectly in the electricity bill and are not explicitly labelled as «energy saving». Accordingly, people will not necessarily think about reinvesting this money in new sustainable measures. «Clear labels are needed. In Switzerland, part of the carbon levy is returned to citizens via a reduction in health insurance costs. It would be better to label such an income 'Climate action revenue'», argues Tobias Brosch.

Take the right measures but don't forget your values

The analysis carried out by the psychologists proposes concrete measures aimed at the political sphere so that pro-climate initiatives can be improved by factoring in human behaviour. «We need to clearly show the price of energy, make the message salient, and demonstrate the impact of consumption on CO2-emissions through concrete feedback», summarises Ulf Hahnel, senior researcher at UNIGE and first author of the study.

The approaches developed in the perspective help conceptualise spending and diversify mental accounts so that individuals can better adapt their behaviours. But Hahnel warns: «Be careful to consider your values and not to fall into purely marketing-based initiatives. You cannot put sustainability labels on just any tax credit». «Bounded rationality, including mental accounting, can help introducing innovative policies for climate change mitigation in addition to price-oriented ones», adds Valentino Piana, senior economist at HES-SO, who contributed to the study. Professor Brosch concludes in the same tone: «Our work helps to understand behaviour, how humans make choices and take decisions. Our goal isn't to abolish free will, but to provide a behavioural toolbox. Policymakers can use this knowledge to develop strategies based not just on scientific evidence, but also on ethical considerations.»

Credit: 
Université de Genève

Quantum physics: Physicists successfully carry out controlled transport of stored light

image: For the experiment, atoms of rubidium-87 are first pre-cooled and then transported to the main test area, which is a custom-made vacuum chamber. There they are cooled to temperatures of just a few microkelvins.

Image: 
photo/©: Windpassinger group

A team of physicists led by Professor Patrick Windpassinger at Johannes Gutenberg University Mainz (JGU) has successfully transported light stored in a quantum memory over a distance of 1.2 millimeters. They have demonstrated that the controlled transport process and its dynamics have only little impact on the properties of the stored light. The researchers used ultra-cold rubidium-87 atoms as the storage medium for the light in order to achieve a high storage efficiency and a long lifetime.

"We stored the light by putting it in a suitcase so to speak, only that in our case the suitcase was made of a cloud of cold atoms. We moved this suitcase over a short distance and then took the light out again. This is very interesting not only for physics in general, but also for quantum communication, because light is not very easy to 'capture', and if you want to transport it elsewhere in a controlled manner, it usually ends up being lost," said Professor Patrick Windpassinger, explaining the complicated process.

The controlled manipulation and storage of quantum information as well as the ability to retrieve it are essential prerequisites for achieving advances in quantum communication and for performing corresponding computer operations in the quantum world. Optical quantum memories, which allow for the storage and on-demand retrieval of quantum information carried by light, are essential for scalable quantum communication networks. For instance, they can represent important building blocks of quantum repeaters or tools in linear quantum computing. In recent years, ensembles of atoms have proven to be media well suited for storing and retrieving optical quantum information. Using a technique known as electromagnetically induced transparency (EIT), incident light pulses can be trapped and coherently mapped to create a collective excitation of the storage atoms. Since the process is largely reversible, the light can then be retrieved again with high efficiency.
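
The slowing and trapping of the probe pulse via EIT is often summarized by the textbook group-velocity relation below; this is a standard result from the EIT literature, not a formula taken from the Mainz paper itself:

```latex
% Standard EIT group-velocity reduction (textbook form):
\[
  v_g \;=\; \frac{c}{1 + \dfrac{g^2 N}{|\Omega_c|^2}} ,
\]
% where $g$ is the atom-photon coupling strength, $N$ the number of atoms
% in the ensemble, and $\Omega_c$ the control-field Rabi frequency.
% Ramping the control field down ($\Omega_c \to 0$) drives $v_g \to 0$,
% mapping the pulse onto a collective atomic excitation (the stored light);
% ramping $\Omega_c$ back up reverses the mapping and retrieves the pulse.
```

Because the stored excitation lives in the atoms, moving the atomic cloud moves the "suitcase" containing the light, which is precisely what the conveyor-belt experiment exploits.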

The future objective is to develop a racetrack memory for light

In their recent publication, Professor Patrick Windpassinger and his colleagues describe the actively controlled transport of such stored light over distances larger than the size of the storage medium. Some time ago, they developed a technique that allows ensembles of cold atoms to be transported on an 'optical conveyor belt' produced by two laser beams. The advantage of this method is that a relatively large number of atoms can be transported and positioned with a high degree of accuracy, without significant atom loss and without unintentionally heating the atoms. The physicists have now succeeded in using this method to transport atomic clouds that serve as a light memory; the stored information can then be retrieved elsewhere. By refining this concept, it may become possible to develop novel quantum devices, such as a racetrack memory for light with separate reading and writing sections.

Credit: 
Johannes Gutenberg Universitaet Mainz