Culture

Study suggests increased risks for COVID-19 patients who smoke, vape

image: A study led by TTUHSC's Luca Cucullo, Ph.D., looked at the effect smoking and vaping may have on the cerebrovascular and neurological systems of COVID-19 patients.

Image: 
TTUHSC

As SARS-CoV-2, the virus that causes COVID-19, has unfurled its tentacles across the globe, the severe respiratory and pulmonary disorders associated with the infection have become well known. However, recent case studies have also strongly suggested the presence of cerebrovascular-neurological dysfunction in COVID-19 patients, including large-artery ischemic strokes that originate in one of the brain's larger blood-supplying arteries, such as the carotid.

Luca Cucullo, Ph.D., and other researchers from the Texas Tech University Health Sciences Center (TTUHSC) have for years studied the effects smoking and vaping have on the cerebrovascular and neurological systems. Their research, and that of others, has shown smokers of tobacco and vaping products are more vulnerable to viral and bacterial infection than are non-smokers.

Based on those findings and the recent COVID-19 patient case studies, Cucullo and TTUHSC graduate research assistant Sabrina Rahman Archie reviewed the role smoking and vaping may play in the cerebrovascular and neurological dysfunction of those who contract the virus. Their study, "Cerebrovascular and Neurological Dysfunction under the Threat of COVID-19: Is There a Comorbid Role for Smoking and Vaping?" was published May 30 in the International Journal of Molecular Sciences.

In his previous research, Cucullo demonstrated how tobacco smoke can impair a person's respiratory function. From there, it can affect the vascular system and eventually the brain. Because COVID-19 also attacks the respiratory and vascular systems, he and Archie wanted to see if there were any reported cases indicating the virus may also affect the brain and lead to the onset of long-term neurological disorders like ischemic strokes. They also looked for evidence showing smoking and vaping can otherwise worsen the outcomes for COVID-19 patients, which Cucullo said seems to be the case.

Archie said some case studies demonstrate that strokes do indeed occur in COVID-19 patients and that the rates appear to be increasing every day. In fact, one study of 214 patients found that 36.45% of them had neurological symptoms, further indicating the virus is able to affect the cerebral vascular system. But how does this happen?

There are approximately 13 blood coagulation factors in the human body that can be increased by hypoxia, a condition that occurs when the body is deprived of sufficient oxygen at the tissue level, as happens with smoking. Archie said COVID-19 also appears to raise some blood procoagulants, especially von Willebrand factor, a blood-clotting protein that binds and carries coagulation factor VIII and promotes platelet adhesion at the site of wounds.

"When the coagulant factor will be increased in our body, there will be a higher chance of clot formation," Archie explained. "Ultimately, it will be responsible for several vascular dysfunctions, for example, hemorrhagic or ischemic stroke."

Because COVID-19 and smoking or vaping each increases blood coagulation factors that may eventually affect the cerebral vascular system, Cucullo believes the stroke risk may be higher still for COVID-19 patients who smoke.

"COVID-19 seems to have this ability to increase the risk for blood coagulation, as does smoke," Cucullo added. "This may ultimately translate in higher risk for stroke."

Recent clinical study data also shows some of the damage caused by COVID-19, especially to the respiratory system, is permanent. Cucullo said the same data indicates that patients who recover from COVID-19 still have an elevated risk for stroke and that age and physical activity don't seem to be factors. Some of those with the highest risk factors for long-term problems related to COVID-19 are young adults in their 20s and 30s who were active and considered to be in their physical prime.

"After COVID-19, some of those can barely take few steps without having breathing issues, so the recovery, it's kind of formal recovering, but some of these long-term effects remain," he added.

In addition to impairing the immune and vascular systems and triggering cerebrovascular and neurological dysfunction, smoking and vaping often worsen the outcomes for patients who contract influenza or other respiratory or pulmonary diseases. Because COVID-19 appears to affect many of the same systems within the body, Cucullo said it would seem logical to think the health risks are increased for COVID-19 patients who smoke, but the virus is too new to know for certain.

"We don't even know whether COVID-19 can get into the brain because nobody has actually checked for it yet," Cucullo said. "I think it's very early for this kind of study; the prime clinical concern is either a vaccine or trying to alleviate the symptoms, in particular the respiratory symptoms, so they didn't even get that far. We are planning to do something from that point of view; this is something we will definitely research."

Credit: 
Texas Tech University Health Sciences Center

Junk DNA might be really, really useful for biocomputing

image: Simple repeats form novel DNA structures that change how DNA is read out. They are called flipons and act as ON-OFF switches for compiling genetic programs. The messages produced carry information via codons. The codonware specifies the proteins present in wetware.

Image: 
Alan Herbert

Dr. Alan Herbert of InsideOutBio asks: what if our genome were far smarter than everyone previously believed? What if the many DNA repeat elements lay the foundation for building a novel type of biocomputer? This approach would enable calculations performed with self-renewing wetware rather than stone-cold hardware, opening the door to logic circuits based on DNA that flips from one state to another, analogous to the way silicon switches "on" and "off".

How might this kind of DNA computer work? Simple repeats are called that because the DNA sequence repeats itself over and over again a number of times. The repeats actually are great for building DNA structures with useful properties. Instead of the everyday B-DNA described by Watson and Crick, repeats of certain DNA sequences can morph into some rather exotic higher-energy 3D structures. They can form left-handed duplexes, three-stranded triplexes, and four-stranded quadruplexes. The different DNA structures change how information is read out from DNA to make RNA.

Those repeat sequences that are able to switch from one DNA structure to another are called flipons. Collectively, Dr. Herbert refers to these flipons and their associated functionality as fliponware. Fliponware sets the stage for a cell to make the right wetware tools and run the right genetic program to get the job at hand done. It allows assembly of gene pieces into different RNA-based programs that a cell can use to overcome challenges from its environment.

Dr. Herbert provides examples of how flipons can be used to create genetic programs, describing ways they can be wired into the logic gates that computers need to function. The flipon logic gates can perform 'AND' or 'NOT' operations. Many can be combined to perform the Boolean operations essential to a universal computer as first described by Turing. Their use in the genome for logical operations resembles how Turing machines work, but instead of a Turing tape, the cell uses RNA to record the results. The series of processing steps decides whether the RNA message is stable or not and the genetic program to execute.

Many of the novel RNA assemblies are trashed without ever being used--they may not be needed, or their assembly is defective (equivalent to a FALSE result from a Turing machine). Those messages that persist become the codonware of the cell (logically the same as a TRUE result from a Turing machine). The resulting codonware then defines the cell's biology through the wetware it directs the production of.
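To make this ON-OFF logic concrete, here is a minimal toy sketch in Python. It is purely illustrative and not from Dr. Herbert's paper; the flipon names and the wiring of the gates are invented. It models each flipon as a Boolean switch and decides whether an RNA message persists (TRUE) or is trashed (FALSE):

```python
# Toy model of flipon logic, for illustration only.
# A flipon is modeled as a Boolean switch: True when the repeat has flipped
# into its alternative 3D structure, False in everyday B-DNA form.

def flipon_and(a: bool, b: bool) -> bool:
    """AND gate: the program proceeds only if both flipons are flipped."""
    return a and b

def flipon_not(a: bool) -> bool:
    """NOT gate: the program proceeds only if the flipon is NOT flipped."""
    return not a

def run_genetic_program(promoter_flipped: bool, silencer_flipped: bool) -> str:
    """Keep (TRUE) or trash (FALSE) the RNA message, Turing-style."""
    keep_message = flipon_and(promoter_flipped, flipon_not(silencer_flipped))
    return "RNA message persists (TRUE)" if keep_message else "RNA message trashed (FALSE)"

print(run_genetic_program(promoter_flipped=True, silencer_flipped=False))
# -> RNA message persists (TRUE)
```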

This form of computation differs from previously described DNA computers built without flipons. These other devices depend on whether or not pairs of DNA sequences match each other. If they do, the matches enable the next calculation step. Flipon computers differ in many ways from those using DNA matching. First, some flipons switch very fast (in milliseconds) because all they need to do is change from one 3D structure to another without the need to search for a matching DNA to pair with. Turning a flipon ON or OFF is possible in many different ways. For example, just stretching the DNA can cause flipons to morph, or they can flip due to a change in temperature or variations in salt concentration.

The combination of fliponware, wetware and codonware is analogous to program code, machine language and hardware in a silicon computer. Each of these bioware sets has memory to enable learning, and each uses error correction to prevent bad outcomes. While flipons are made from simple sequences, codonware is made from complex sequence mixes that tell the cell in great detail which wetware to make. In simple terms, fliponware instructs which tools to use for a particular job, codonware tells how to make the tools, and wetware does the job.

Dr. Herbert adds, "I expect that soon there will be many exciting and beneficial applications for fliponware. Simple sequences are not your grandparent's junk--instead they are like the rules of thumb that simplify life, acquired over many years of evolution."

Fliponware has several immediate applications:

1. Therapeutic applications (target drugs to flipon states that enable cancers or inflammatory diseases)
2. Biosensors (to detect environmental changes)
3. Persistent DNA memory (exploiting the extreme stability of quadruplexes after they form)
4. Cellular switches (change of output in response to a change of input)
5. Novel DNA nano-architectures (the 3D structure formed depends on flipon state)

Credit: 
InsideOutBio

How a few negative online reviews early on can hurt a restaurant

COLUMBUS, Ohio -- Just a few negative online restaurant reviews can determine early on how many reviews a restaurant receives long-term, a new study has found.

The study, published online earlier this month in the journal Papers in Applied Geography, also found that a neighborhood's median household income affected whether restaurants were rated at all.

"These online platforms advertise themselves as being unbiased, but we found that that is not the case," said Yasuyuki Motoyama, lead author of the paper and an assistant professor of city and regional planning at The Ohio State University.

"The way these platforms work, popular restaurants get even more popular, and restaurants with some initial low ratings can stagnate."

The study evaluated Yelp and Tripadvisor reviews of about 3,000 restaurants per website in Franklin County, Ohio. Franklin County, home to Columbus and Ohio State, also hosts the headquarters of more than 20 restaurant chains. Previous research has found that the food industry considers consumer preferences in the area to be a litmus test for the broader U.S. market.

The researchers collected reviews for restaurants published in May 2019, then analyzed those reviews by rating and geographic location. They considered the demographics of each neighborhood and noted its socioeconomic status based on household income.

The study found that restaurants with a smaller number of reviews on sites like Yelp and Tripadvisor had a higher likelihood of a low rating.

"The more reviews a restaurant received, the higher the average rating of the restaurant," said Kareem Usher, co-author of the paper and an assistant professor of city and regional planning at Ohio State. "But this has implications: If one of the first reviews a restaurant receives comes from a dissatisfied customer, and people check that later and think 'I don't want to go there' based on that one review, then there will be fewer reviews of that restaurant."

The opposite is true for restaurants that receive positive reviews or a large number of reviews: More people are likely to review those restaurants, improving the likelihood that a restaurant's average rating will be higher.

The study found that 17.6 percent of restaurants with only one to four reviews received a low rating on Yelp. But that decreased to 9.3 percent for those with between five and 10 reviews. On Tripadvisor, those with one to four reviews had a 5.6 percent probability of having a poor review, going down to 0.6 percent for those with five to 10 reviews.
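For readers who want the mechanics of that bucketed comparison, a minimal sketch might look like the following. The data, bucket boundaries and low-rating cutoff below are invented for illustration; the study's actual dataset and definitions are not reproduced here.

```python
# Hypothetical sketch: share of restaurants with a low average rating,
# grouped by how many reviews they have received. All data are invented.
restaurants = [
    {"n_reviews": 2, "avg_rating": 2.0},
    {"n_reviews": 4, "avg_rating": 4.5},
    {"n_reviews": 3, "avg_rating": 3.0},
    {"n_reviews": 7, "avg_rating": 4.0},
    {"n_reviews": 9, "avg_rating": 3.5},
]

def low_rating_share(data, lo, hi, low_cutoff=2.5):
    """Percentage of restaurants in the [lo, hi] review-count bucket
    whose average rating falls at or below low_cutoff."""
    bucket = [r for r in data if lo <= r["n_reviews"] <= hi]
    if not bucket:
        return 0.0
    n_low = sum(1 for r in bucket if r["avg_rating"] <= low_cutoff)
    return 100.0 * n_low / len(bucket)

print(f"1-4 reviews:  {low_rating_share(restaurants, 1, 4):.1f}% low-rated")
print(f"5-10 reviews: {low_rating_share(restaurants, 5, 10):.1f}% low-rated")
```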

Researchers also found that restaurants in several of the poorest neighborhoods in Franklin County tend not to be rated on the sites. However, the researchers did not find a direct link between a neighborhood's socioeconomics or racial makeup and the average rating of the restaurants there.

Motoyama cautioned that the study had some limits: It was conducted in one county, and future work could expand to other areas around the country. The high-level multivariate analysis could only use the Yelp data, because most of the key information was missing from Tripadvisor. The researchers also did not analyze the content of the reviews, which could offer additional clues about bias.

But, he said, the study does indicate that online review sites can have significant effects on a restaurant's success or failure - and suggests that, perhaps, the sites can set up policies that might be more fair.

"Maybe these online platforms can withhold reviews until a restaurant gets a certain number of reviews - say, 10 or more," he said. "That way if there are two or three customers who are very dissatisfied with a particular experience, they are not directing the restaurant's success or failure."

Credit: 
Ohio State University

Coronavirus antibodies fall dramatically in first 3 months after mild cases of COVID-19

Correction Note:

Due to a math miscalculation in the study, a previous version of this release contained an error in the rate at which COVID-19 antibodies decline after infection. The correct rate is 36 days, not 73 as previously reported, which is actually a more dramatic rate of decay. The change is reflected under the findings section of the release.

FINDINGS

A study by UCLA researchers shows that in people with mild cases of COVID-19, antibodies against SARS-CoV-2 -- the virus that causes the disease -- drop sharply over the first three months after infection, decreasing by roughly half every 36 days on average. If sustained at that rate, the antibodies would disappear within about a year.
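The arithmetic behind that projection is simple exponential decay. Here is a short sketch, assuming the decay stays constant at the reported 36-day half-life (an extrapolation the study itself hedges on):

```python
# Half-life decay sketch: fraction of the initial antibody level remaining
# after t days, assuming a constant 36-day half-life as the study reports.
def antibody_fraction(t_days: float, half_life: float = 36.0) -> float:
    return 0.5 ** (t_days / half_life)

for t in (36, 90, 180, 365):
    print(f"day {t:3d}: {100 * antibody_fraction(t):6.2f}% of initial level")
# day  36:  50.00%
# day  90:  17.68%
# day 180:   3.12%
# day 365:   0.09%  -> effectively gone within about a year
```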

BACKGROUND

Previous reports have suggested that antibodies against the novel coronavirus are short-lived, but the rate at which they decrease had not been carefully defined. This is the first study to rigorously estimate that rate.

METHOD

The researchers studied 20 women and 14 men who recovered from mild cases of COVID-19. Antibody tests were conducted at an average of 36 days and 82 days after the initial symptoms of infection.

IMPACT

The findings raise concerns about antibody-based "immunity passports," the potential for herd immunity and the reliability of antibody tests for estimating past infections. In addition, the findings may have implications for the durability of antibody-based vaccines.

Credit: 
University of California - Los Angeles Health Sciences

Weather-based decisions may reduce fungicide sprays on table beets

image: Effect of pydiflumetofen + difenoconazole on the severity of Cercospora leaf spot caused by Cercospora beticola in a small-plot, replicated trial conducted at Freeville in 2019.

Image: 
Sarah J. Pethybridge, Sandeep Sharma, Zachariah Hansen, Julie R. Kikkert, Daniel L. Olmstead, and Linda E. Hanson

Sarah J. Pethybridge, a plant pathologist at Cornell University, supplies New York vegetable growers with the information they need to control soilborne diseases and adopt effective management strategies. She crafts her research around conversations with table beet growers about productivity issues in the field. These growers consistently expressed frustration with maintaining healthy foliage.

Cercospora leaf spot (CLS), which appears as small gray to black-colored lesions on leaves, is the dominant disease affecting table beet foliage in New York. CLS epidemics occur annually and can lead to defoliation and significant crop loss as healthy leaves are important to facilitate harvest by top-pulling.

Growers control CLS with fungicides. Under current management practices, growers begin spraying fungicides once they see one lesion per leaf, then continue applying fungicides on a calendar basis.

Pethybridge and her team conducted a study to examine the efficacy of this treatment using three different fungicides and to assess the potential of a risk-based model to improve fungicide timing. They found that two of the fungicides significantly improved CLS control when applied before the pathogen was present, while the third fungicide worked best under the current treatment model. Moreover, applications of the first two fungicides were reduced from three to two using a weather-based risk model.

"This shows there is an opportunity to reduce spray frequency by scheduling on weather-based risk rather than calendar applications," Pethybridge explained. "But different fungicides vary in the optimal risk threshold. The action thresholds and risk thresholds need to be assessed for each fungicide to be used for optimal disease management, but there are opportunities for improving disease control and reducing fungicide use with these tactics."

Use of a weather-based decision support system to schedule fungicides for the control of CLS in table beet reduces unnecessary expense to the grower and unnecessary exposure of a fungal population to single-site modes of action posing a high risk of resistance development. For more information, read "Optimizing Cercospora Leaf Spot Control in Table Beet Using Action Thresholds and Disease Forecasting" in Plant Disease.

Credit: 
American Phytopathological Society

The Lancet Psychiatry: Study estimates impact of COVID-19 pandemic on UK mental health after first month of lockdown

Mental health declined substantially after the first month of COVID-19 lockdown, a survey of UK households published today in The Lancet Psychiatry journal suggests.

Among the 17,452 people who responded to the survey, the average level of mental distress increased in April 2020, compared to average scores before the pandemic (1.1 point increase in average mental distress score, from 11.5/36 points to 12.6/36 points). This increase in mental distress was 0.5 points higher than would be expected based on upward trends that have been observed over the past five years.

During late April 2020, more than one quarter of study participants reported a level of mental distress that is potentially clinically significant (27.3%), compared with one in five people before the pandemic (18.9%). However, the researchers stress their study is based on survey responses rather than clinical assessment and this does not mean that one in four people has a clinical mental illness.

The study reveals that some mental health inequalities that were present before the pandemic have widened. The increase in mental distress was greater among women than men (women: average adjusted increase of 0.92/36, men: 0.06/36), and in younger age groups than older people (18-24 year olds, average adjusted increase: 2.69 points; 70 and over, average increase: 0.17 points).

The findings also reveal new mental health inequalities after one month of lockdown, with people living with young children showing greater increases in mental distress than people from child-free homes.

The researchers say the increase in mental distress in April 2020 may represent a spike in emotional response that might have stabilised or reduced as people adjusted to the restrictions imposed on daily life. However, as the economic fallout from the pandemic progresses, when furloughs turn into redundancies and mortgage holidays time out, the researchers say mental health inequalities will likely widen and deepen, and must be monitored closely so that steps can be taken to mitigate a rise in mental illness in these groups.

Dr Matthias Pierce, a co-author from the University of Manchester, said: "This is the first peer-reviewed study to track changes in UK population mental health from before the COVID-19 pandemic and into the subsequent lockdown period. Previous studies have focused on specific groups, such as key workers, rather than a random sample of the whole population. And many have used non-validated measures of mental health, or lacked comparable pre-COVID-19 baseline data against which to measure change." [1]

The latest findings are based on a long-term study of more than 40,000 UK households that has been tracking the mental health of the nation at annual intervals for more than ten years, called the UK Household Longitudinal Study (UKHLS).

The researchers used the 12-item General Health Questionnaire, which is a validated tool for measuring levels of mental distress. Participants are asked to reflect on the previous two weeks and report how often they have experienced symptoms such as difficulties sleeping or concentrating, problems with decision making or feeling overwhelmed. Each question is rated between 0 and 3, giving a total potential score of 36, where higher scores represent higher levels of mental distress. The researchers defined a clinically-relevant threshold of mental distress as experiencing four or more different symptoms at a higher level than usual. This was used to calculate the proportion of respondents experiencing clinically-relevant levels of mental distress.
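To make the two scoring schemes concrete, here is a short sketch of the calculation described above. It is our own illustration; we assume an item answered 2 or 3 counts as a symptom experienced "more than usual", which is the conventional GHQ caseness mapping:

```python
# GHQ-12 scoring sketch. Each of the 12 items is answered 0-3.
# Likert scoring sums the raw answers (0-36; higher = more distress).
# Caseness scoring counts items rated above usual (assumed here: answer >= 2);
# the study used 4+ such items as its clinically relevant threshold.
def ghq12_scores(answers: list[int]) -> tuple[int, bool]:
    assert len(answers) == 12 and all(0 <= a <= 3 for a in answers)
    likert_total = sum(answers)
    n_symptoms = sum(1 for a in answers if a >= 2)
    return likert_total, n_symptoms >= 4

answers = [1, 2, 0, 3, 1, 2, 2, 0, 1, 0, 1, 0]       # one respondent (invented)
likert, clinically_relevant = ghq12_scores(answers)
print(likert, clinically_relevant)                    # -> 13 True
```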

Between 23 and 30 April 2020, one month after the UK lockdown was introduced, people who had responded to the most recent UKHLS surveys were invited to complete an online version of the questionnaire.

Some 17,452 people responded to the online survey out of a total study population of 42,330. All those included in the study were aged 16 and over in April 2020. Baseline data of mental health levels before the pandemic were available for comparison for 15,376 participants who were aged over 16 when they responded to either of the two previous UKHLS questionnaires.

The results highlight mental health inequalities that were present before the pandemic. Women had higher levels of mental distress than men on average (women: 13.6/36, men: 11.5/36). When comparing data from individuals from before and after lockdown and adjusting for prior trends, women also showed a greater increase in mental distress (women: average increase of 0.92/36, men: 0.06/36). Around one in three women had a score above the clinically-relevant threshold compared with one in five men (women: 33.3%, men: 20.4%). However, the researchers say mental distress in men may be more likely to manifest in ways not captured by the General Health Questionnaire, such as alcohol misuse, and further research is needed to examine this.

When scores from individual participants were compared before COVID-19 and in April 2020, the greatest increase in mental distress was seen in young people aged 18 to 24 (average increase of 2.69 points) and those aged 25 to 34 (average increase of 1.57 points), after adjustment for prior trends and other factors.

Other mental health inequalities that were present before the pandemic persisted, but did not widen. People living without a partner (13.8/36) and those with a pre-existing health condition that would make them more vulnerable to COVID-19 infection (13.7/36) had higher levels of mental distress than the average population (12.6/36). However, comparison of individuals' responses before and after lockdown did not reveal a larger than average increase in mental distress for these groups.

Mental distress was more common among people living in low income households (average score 13.9/36 in households of lowest income, compared with 12.0/36 for highest income homes). In addition, people who were unemployed before the lockdown had higher mental distress scores than those in employment (unemployed: 15.0/36, employed: 12.5/36).

However, the increase in mental distress relative to prior trends was greater among those who had jobs before the pandemic (average increase 0.63/36 points for employed vs -0.48/36 points for unemployed, after adjusting for other factors). This group includes those who may have been furloughed, lost their job or worried about losing their job, as well as those shifting to working from home. The researchers say this trend should be monitored closely as the effects of the pandemic on employment take hold and expected redundancies come to fruition.

Another inequality that emerged is that people living with young children aged 5 and under also showed a substantial increase in levels of mental distress scores (average increase of 1.45/36 points), compared with child-free households (average increase of 0.33/36 points).

There were too few responses to examine changes in mental health within ethnic groups from before and after the pandemic. However, average mental distress scores were higher for Asian people than for white British people (Asian: 13.7/36, white British: 12.5/36), but the study was not powered to detect a change for other ethnic groups.

Key workers had a similar average score of mental distress to the general population (keyworker: 12.7/36, general population: 12.6/36), but were more likely to have a score above the clinically relevant threshold (keyworker: 29.9%, general population: 27.3%).

Sally McManus, joint senior author, of City, University of London, UK, said: "The pandemic has brought people's differing life circumstances into stark contrast. We found that, overall, pre-existing inequalities in mental health for women and young people have widened. At the same time new inequalities have emerged, such as for those living with pre-school children. These findings should help inform social and educational policies aimed at mitigating the impact of the pandemic on the nation's mental health, so that we can try to avoid a rise in mental illness in the years to come." [1]

Before the pandemic, the UKHLS questionnaire was carried out in person or over the phone in an interview format. The April wave of the study was carried out entirely online. The researchers caution that this may affect the answers given and could introduce bias. However, the degree of change in mental distress they observed makes it likely that the changes are attributable to the virus and the events associated with the pandemic.

Professor Kathryn Abel, joint senior author, of the University of Manchester, UK, said: "While COVID-19 infection is a greater physical health risk to older people, our study suggests that young people's mental health is being disproportionately affected by efforts to stop transmission of the virus. We would recommend policies focused on women, young people and those with preschool aged children as a priority to prevent future mental illness." [1]

Credit: 
The Lancet

Asteroid shower on the Earth-Moon system 800 million years ago revealed by lunar craters

image: An asteroid shower on the Earth-Moon system

Image: 
Artist's illustration, Murayama/Osaka Univ.

A research team led by Osaka University investigated the formation ages of 59 lunar craters with a diameter of approximately 20 km using the Terrain Camera (TC) onboard the lunar orbiter spacecraft Kaguya.

Kaguya (formerly SELENE, for SELenological and ENgineering Explorer) is a lunar orbiter mission of the Japan Aerospace Exploration Agency (JAXA).

This group demonstrated that an asteroid 100 km in diameter was disrupted 800 million years ago (800 Ma) and that at least (4–5)×10^16 kg of meteoroids, approximately 30–60 times more than the Chicxulub impact, must have plunged into the Earth-Moon system. Their results were published in Nature Communications.

Because a thin layer of iridium (Ir) enrichment (iridium is a rare element in Earth's crust) dating to 65.5 Ma had been detected worldwide, it is thought that an asteroid 10–15 km in diameter hit the Earth and caused, or greatly contributed to, the Cretaceous mass extinction.

An asteroid of this size is thought to strike the Earth about once every 100 million years. It is known that impact craters formed on Earth before 600 Ma have been erased over the years by erosion, volcanism and other geologic processes. Thus, to learn about ancient meteoroid impacts on Earth, the team investigated the Moon, which experiences almost no erosion.

They investigated the formation-age distribution of 59 large craters with diameters greater than approximately 20 km by examining the density of 0.1–1 km-diameter craters in the ejecta of these 59 craters. One example is the Copernicus crater (93 km in diameter) and its surrounding craters (Figure 2). The density of 860 craters with diameters of 0.1–1 km (shown in green) was examined to derive the age of the Copernicus crater. As a result, 8 of the 59 craters were found to have formed simultaneously (17 under a spike model), a world first (Figure 3).

Considering crater scaling laws and collision probabilities with the Earth and Moon, at least (4–5)×10^16 kg of meteoroids, approximately 30–60 times more than the Chicxulub impact, must have struck the Earth immediately before the Cryogenian (720–635 Ma), an era of great environmental and biological changes.
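As a rough plausibility check on those numbers (our own back-of-envelope calculation with an assumed bulk density, not a figure from the paper), the delivered mass works out to a few per cent of a 100-km parent body:

```python
import math

# Back-of-envelope check with assumed values (not from the paper).
diameter = 100e3          # parent asteroid diameter in metres
density = 1500            # assumed C-type bulk density, kg/m^3
parent_mass = density * math.pi / 6 * diameter**3    # sphere mass: rho*(pi/6)*d^3

delivered = 4.5e16        # kg, midpoint of the (4-5)x10^16 kg estimate
print(f"parent mass     ~ {parent_mass:.1e} kg")          # ~7.9e17 kg
print(f"delivered share ~ {delivered / parent_mass:.0%}")  # ~6%
```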

In addition, given the disruption age and orbital elements of existing asteroid families, it is highly likely that the disruption of the parent body of the C-type asteroid Eulalia caused this asteroid shower. A C-type asteroid is a class expected to contain carbon, by analogy with the carbonaceous chondrite meteorites.

Because Eulalia's surface reflectance is similar to that of the near-Earth C-type asteroid Ryugu, Eulalia has drawn attention as the parent body of a C-type rubble pile, a celestial body consisting of numerous pieces of rock, near the Earth (Sugita et al. 2019).

Ryugu was probed by the asteroid explorer Hayabusa2, an asteroid sample-return mission operated by JAXA.

From these considerations, they concluded that sporadic meteorite bombardment due to the disruption of this asteroid 800 Ma caused the following:

-Some of the resulting fragments fell on terrestrial planets and the Sun,

-Others stayed in the asteroid belt as the Eulalia family, and

-Remnants underwent orbital evolution to become members of the near-Earth asteroid population.

This research suggests the following possibilities:

1. An asteroid shower may have brought a large amount of phosphorus (P) to the Earth, affecting the terrestrial surface environment,

2. A recent C-type asteroid shower may have contaminated the lunar surface with volatile elements,

3. The Eulalia family, the parent body of a near-Earth C-type asteroid, may have brought an asteroid shower to the Earth and the Moon.

Lead author Professor Terada says, "Our research results have provided a novel perspective on earth science and planetary science. They will yield a wide range of positive effects in various research fields."

Credit: 
Osaka University

Genetic variant may explain why some women don't need pain relief during childbirth

Women who do not need pain relief during childbirth may be carriers of a key genetic variant that acts as a natural epidural, say scientists at the University of Cambridge. In a study published today in the journal Cell Reports, the researchers explain how the variant limits the ability of nerve cells to send pain signals to the brain.

Childbirth is widely recognised as a painful experience. However, every woman's experience of labour and birth is unique, and the level of discomfort and pain experienced during labour varies substantially between women.

A collaboration between clinicians and scientists based at Addenbrooke's Hospital, part of Cambridge University Hospitals NHS Foundation Trust (CUH), and the University of Cambridge sought to investigate why some mothers report less pain during labour.

A group of women was recruited and characterised by the team led by Dr Michael Lee from the University's Division of Anaesthesia. All the women had carried their first-born to full term and did not request any pain relief during an uncomplicated vaginal delivery. Dr Lee and colleagues carried out a number of tests on the women, including applying heat and pressure to their arms and getting them to plunge their hands into icy water.

Compared to a control group of women who experienced similar births but were given pain relief, the test group showed higher pain thresholds for heat, cold and mechanical pressure, consistent with their not requesting pain relief during childbirth. The researchers found no differences in the emotional and cognitive abilities of the two groups, suggesting an intrinsic difference in the test group's ability to detect pain.

"It is unusual for women to not request gas and air, or epidural for pain relief during labour, particularly when delivering for the first time," said Dr Lee, joint first author. "When we tested these women, it was clear their pain threshold was generally much higher than it was for other women."

Next, senior co-author Professor Geoff Woods and his colleagues at the Cambridge Institute for Medical Research sequenced the genetic code of both groups of women and found that those in the test group had a higher-than-expected prevalence of a rare variant of the gene KCNG4. It is estimated that approximately 1 in 100 women carries this variant.

KCNG4 provides the code for the production of a protein that forms part of a 'gate' controlling the electric signal that flows along our nerve cells. As joint first author Dr Van Lu showed, the rare variant reduces the sensitivity of this gatekeeper to the electric signals that open the gate and switch nerves on.

This was confirmed in a study involving mice led by Dr Ewan St. John Smith from the Department of Pharmacology, who showed that the threshold at which the 'defective' gates open, and hence the nerve cell switches 'on', is higher - which may explain why women with this rare gene variant experience less pain during childbirth.

Dr St. John Smith, senior co-author, explained: "The genetic variant that we found in women who feel less pain during childbirth leads to a 'defect' in the formation of the switch on the nerve cells. In fact, this defect acts like a natural epidural. It means it takes a much greater signal - in other words, stronger contractions during labour - to switch it on. This makes it less likely that pain signals can reach the brain."
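A toy model of that "higher switching threshold" idea is sketched below. It is purely illustrative; the threshold values and stimulus units are arbitrary inventions, not measurements from the study:

```python
# Toy nerve-threshold model, illustrative only.
def nerve_fires(stimulus: float, threshold: float) -> bool:
    """The nerve switches 'on' only when the signal reaches its threshold."""
    return stimulus >= threshold

COMMON_THRESHOLD = 1.0    # arbitrary units
VARIANT_THRESHOLD = 1.6   # rare KCNG4 variant: the gate is harder to open

for stimulus in (0.8, 1.2, 1.8):  # e.g. increasingly strong contractions
    print(f"stimulus {stimulus}: common={nerve_fires(stimulus, COMMON_THRESHOLD)}, "
          f"variant={nerve_fires(stimulus, VARIANT_THRESHOLD)}")
# Only the strongest stimulus (1.8) switches the variant carrier's nerve on.
```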

"Not only have we identified a genetic variant in a new player underlying different pain sensitivities," added senior co-author Professor Frank Reimann, "but we hope this can open avenues to the development of new drugs to manage pain."

"This approach of studying individuals who show unexpected extremes of pain experience also may find wider application in other contexts, helping us understand how we experience pain and develop new drugs to treat it," said Professor David Menon, senior co-author.

Credit: 
University of Cambridge

Photon-based processing units enable more complex machine learning

image: The photonic tensor core performs vector-matrix multiplications by utilizing the efficient interaction of light at different wavelengths with multistate photonic phase change memories.

Image: 
Mario Miscuglio

WASHINGTON, July 21, 2020 -- Machine learning performed by neural networks is a popular approach to developing artificial intelligence, as researchers aim to replicate brain functionalities for a variety of applications.

A paper in the journal Applied Physics Reviews, by AIP Publishing, proposes a new approach to perform computations required by a neural network, using light instead of electricity. In this approach, a photonic tensor core performs multiplications of matrices in parallel, improving speed and efficiency of current deep learning paradigms.

In machine learning, neural networks are trained to learn to perform decision-making and classification on unseen data. Once a neural network is trained on data, it can produce an inference to recognize and classify objects and patterns and find a signature within the data.

The photonic TPU stores and processes data in parallel, featuring an electro-optical interconnect, which allows the optical memory to be efficiently read and written and the photonic TPU to interface with other architectures.

"We found that integrated photonic platforms that integrate efficient optical memory can obtain the same operations as a tensor processing unit, but they consume a fraction of the power and have higher throughput and, when opportunely trained, can be used for performing inference at the speed of light," said Mario Miscuglio, one of the authors.

Most neural networks comprise multiple layers of interconnected neurons that aim to mimic the human brain. An efficient way to represent these networks is as a composite function that multiplies matrices and vectors together. This representation allows parallel operations to be performed through architectures specialized in vectorized operations such as matrix multiplication.
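A minimal sketch of that representation follows, in standard NumPy rather than on photonic hardware: each layer reduces to a matrix-vector multiplication, which is exactly the primitive a tensor core, electronic or photonic, accelerates. The layer sizes are arbitrary.

```python
import numpy as np

# A two-layer network as a composite of matrix-vector products (illustrative).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 8))   # first-layer weights
W2 = rng.normal(size=(4, 16))   # second-layer weights
x = rng.normal(size=8)          # input vector

relu = lambda v: np.maximum(v, 0.0)   # elementwise nonlinearity
hidden = relu(W1 @ x)   # 16x8 matrix times 8-vector
output = W2 @ hidden    # 4x16 matrix times 16-vector
print(output.shape)     # -> (4,)
```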

However, the more intelligent the task and the higher accuracy of the prediction desired, the more complex the network becomes. Such networks demand larger amounts of data for computation and more power to process that data.

Current digital processors suitable for deep learning, such as graphics processing units or tensor processing units, are limited in performing more complex operations with greater accuracy by the power required to do so and by the slow transmission of electronic data between the processor and the memory.

The researchers showed that the performance of their TPU could be 2-3 orders of magnitude higher than that of an electrical TPU. Photons may also be an ideal match for computing node-distributed networks and engines performing intelligent tasks with high throughput at the edge of a network, such as in 5G systems. At network edges, data signals may already exist in the form of photons from surveillance cameras, optical sensors and other sources.

"Photonic specialized processors can save a tremendous amount of energy, improve response time and reduce data center traffic," said Miscuglio.

For the end user, that means data is processed much faster, because a large portion of the data is preprocessed, meaning only a portion of the data needs to be sent to the cloud or data center.

Credit: 
American Institute of Physics

Scientists observe learning processes online in the brain

Stimulating the fingertip rhythmically for a sustained period of time markedly improves the touch sensitivity of that finger. A research team led by Associate Professor Dr. Hubert Dinse at Ruhr-Universität Bochum (RUB) analysed the impact of this process on the brain. Using electroencephalography (EEG), the scientists recorded neuronal activity in brain areas associated with tactile processing. They were able to observe changes in activity over time - possibly illustrating a learning process. The team reported their findings in Frontiers in Human Neuroscience on 30 June 2020.

Learning through repetition

In daily life, people learn through practice and repetition, which is possible by a brain process called neuronal plasticity. A prominent example of a cellular basis for such plasticity processes is called long-term potentiation, the ability of neurons to increase communication efficiency with other neurons they are connected to.

In analogy to the rhythmic neuronal activity underlying long-term potentiation, the RUB team has developed a method of stimulation-based learning in which senses, such as sight or touch, are rhythmically stimulated. A well-studied example is electrical stimulation of the fingertips, which - if administered at the correct frequency - has been shown to increase tactile sensitivity in the stimulated fingertip. Studies show that this fingertip stimulation leads to significant plasticity processes in the somatosensory cortex. However, it has not yet been proven whether long-term potentiation is the foundation of these processes.

Two groups, two experiments

The RUB neuroscientists studied stimulation-based learning in volunteers with EEG recordings. Their goal was to assess neuronal activity, as well as its development, during this learning process. To this end, they carried out two experiments with two groups of subjects. The first experiment served as a control study to confirm that the method used here - air stimulation administered through an inflatable membrane - has the same positive effect on touch sensitivity as the established method of electrical stimulation. The electrical version could not be used in this case, as it distorts EEG recordings with electrical artefacts.

In the second experiment, volunteers received the aforementioned painless air-stimulation on their fingertip for 40 minutes. At the same time, the activity in the somatosensory cortex of the test subjects was measured with EEG. The scientists concentrated on the area of the brain associated with sensory processing of the hand.

Nerve cells adapt their activity

"Using electroencephalic measurements of brain activity, we were able to show that large cell ensembles adapt their activity to the frequency of the stimulation during the active stimulation phases. This reaction remains stable over 20 minutes, without any signs of habituation, which is very similar to cellular long-term potentiation," explains Dr. Marion Brickwedde, first author of the study.

Furthermore, the scientists were also able to observe how neuronal responses to the stimulation changed over time. They found that the shape of event-related potentials, which represent stimulus processing in the brain, was changing. The event-related desynchronization of the alpha rhythm, a typical response to tactile stimuli, was also reduced after 20 minutes.

"These processes may represent excitability changes in tactile brain areas, that is, a directly observable learning process. It is not yet possible to draw the definite conclusion that the here-applied finger stimulation actually triggers long-term potentiation in the human sensorimotor cortex. But taking into account previous findings which show, for example, the dependence of both processes on the same neuronal receptor type, the accumulating evidence speaks volumes", explains Marion Brickwedde.

Credit: 
Ruhr-University Bochum

Recycling Japanese liquor leftovers as animal feed produces happier pigs and tastier pork

Tastier pork comes from pigs that eat the barley left over after making the Japanese liquor shochu. A team of professional brewers and academic farmers reports that nutrients in the leftover fermented barley may reduce the animals' stress, resulting in better-tasting sirloin and fillets.

"Kyushu, in Western Japan is well-known historically for making shochu and for its many pig farms. We hope collaborative research projects like ours can directly benefit the local community and global environment," said Yasuhisa Ano, the first author of the research paper published in Food Chemistry. Ano is affiliated with the Kirin Central Research Institute at Kirin Holdings Co., Ltd.

Currently, the mash of leftovers that remains after distilling out the alcohol is considered industrial waste and is often disposed of in ways that create more climate-changing carbon dioxide. Feeding distillation leftovers to farm animals can improve the animals' quality of life, lower farmers' and brewers' costs, appeal to discerning foodies, and benefit the environment by reducing food waste.

Japanese shochu can be made from barley, potatoes, rice or other starches first decomposed with mold, then fermented with yeast, and finally distilled to an alcohol content usually greater than 20 percent. Incidentally, Japanese sake is a fermented drink always made from rice with an alcohol content typically around 15 percent.

Leftovers lower stress

Researchers at the University of Tokyo fed six pigs a standard diet supplemented with shochu distillation remnants, the dried mixture of barley, mold and yeast left over after distilling out the shochu. Pigs fed shochu remnants from age 3 to 6 months had higher amounts of antibodies called IgA in their saliva, indicating that shochu remnants kept the pigs healthier than the standard diet. Additionally, pigs fed shochu remnants had lower stress levels than pigs fed the normal diet supplemented with fresh barley, as measured by the amount of cortisol, a common stress hormone, in their saliva.

Other studies have linked healthier responses to stress to two protein building blocks called leucine and histidine peptides, which barley shochu contains in abundance.

The UTokyo research team performed additional tests in mice to study the effect of barley shochu distillation remnants on stress. Mice that ate the distillation remnants just once directly before a stressful event returned to normal behavior faster than other mice. The mice who ate the shochu remnants also had normal levels of dopamine in their brains after the stressful event, indicating a better response to stress.

Diet of leftovers makes tastier pork

Researchers suspected that the lower stress and better health throughout the pigs' lives created higher quality meat, so they asked flavor experts from Kirin to perform a blind taste test.

According to the experts' palates, both sirloin and fillet cuts of pork from the shochu remnant-fed pigs were higher quality than meat from pigs that ate the standard diet: better umami, tenderness, juiciness and flavor.

"We saw no difference in the pigs' weight gain between the two diets and the pigs were slaughtered at the standard six months of age, meaning any difference in the quality of meat was not because of a difference in quantity of fat," said Associate Professor Junyou Li from the University of Tokyo, a co-author of the research publication.

That higher quality taste was likely due to chemical differences in the meat. Fat from the higher-quality meat melted at lower temperatures, which creates the delicious melt-in-your-mouth texture. That fat was also made up of a higher percentage of oleic acid, an unsaturated fatty acid that other studies have linked to healthier cholesterol levels.

"We hope that identifying these benefits for the animals and creating a premium tasting product for consumers will increase farmers' motivation to try a new diet for their pigs," said Professor Masayoshi Kuwahara, director of the University of Tokyo Animal Resource Science Center and last author of the research publication.

Credit: 
University of Tokyo

Women's burden increases in COVID-19 era

Ros Wong, from Flinders University's Climate and Sustainability Policy Research (CASPR) group, was part of a team conducting research across four countries to understand the extent to which COVID-19 restrictions affect women and men differently.

"Women have had to endure additional burdens associated with both paid and unpaid work, often without consideration or the alleviation of other life responsibilities," says Ms Wong, who has completed her PhD at Flinders University this year.

"Women were also tasked with the ongoing organisation of their homes and families under pandemic conditions."

Ms Wong conducted interviews with women from Sri Lanka, Malaysia, Vietnam and Australia that highlight intersections between COVID-19 and gendered burdens, particularly in frontline work, unpaid care work and community activities.

"Our analysis during the early months of the pandemic indicates that women's burdens are escalating. We estimate that women will endure a worsening of their burdens until the pandemic is well under control, and for a long time after."

Ms Wong is critical of public policy and health efforts for not sufficiently acknowledging the associations between gender and disease outbreaks.

She says the study's results will be fundamental to understanding the broader impact both during the crisis and during societal recovery.

"It is critical that public policy and health efforts are proactive in devising transformative approaches that address women's subordinate position in the context of this disease," she says. "In our analysis, we consistently identified that women's burdens across all spheres were not only heavier, but also more dangerous."

Under COVID-19 restrictions in Sri Lanka, women working in frontline health care roles said they faced discrimination in supermarkets when buying groceries, were threatened with eviction, and were refused access to public transport.

In Malaysia, only the male head of the household was allowed to shop. Combined with only one person being allowed in a car, this meant that many women were confined to the house unless employed as a frontline worker. However, after a few weeks, this restriction was relaxed, mainly because men struggled to shop effectively and buy basic necessities required for a family.

News media in Vietnam portrayed COVID-19 restrictions as providing the perfect opportunity for women to relax, enjoy a chance for renewed intimacy and to spoil their men - this despite women having to wait in long queues to purchase food and their increased caring duties.

In Australia, childcare centres and schools remained open so that the many health and essential workers with children could continue working. However, this placed an extra burden on women as they negotiated school runs, extra housework and caring duties while still employed in essential work.

"COVID-19 restrictions for many women demonstrated once again that women continue to be disadvantaged during natural disasters, war and global pandemics," says Ms Wong.

Credit: 
Flinders University

A survey on optical memory and optical RAM technologies

image: a master-slave scheme, b feedback loop scheme, c injection-locking technique and d phase-change material (PCM) properties in the case of GST compounds.

Image: 
by Alexoudi, T., Kanellos, G.T. & Pleros, N.

Over the past decades, "storing light" has appeared as a rather controversial statement, given that a photon's inherent nature hinders its spatial confinement. The first research efforts in demonstrating optical memory functionality started as a fascinating experimental exercise and two decades later the remarkable achievements of integrated optical memories and optical random access memories (RAMs) introduced a new roadmap for light-based information storage that can offer fast access times, high bandwidth and seamless cooperation with optical interconnect lines.

In a new paper published in Light: Science & Applications, a team of three Greek researchers - Dr. Theoni Alexoudi and Prof. Nikos Pleros from the Department of Informatics of the Aristotle University of Thessaloniki in Greece, together with their co-worker Prof. George T. Kanellos from the University of Bristol in the UK - evaluated the progress witnessed in the optical memory domain over the past 25 years. Their article provides a thorough analysis of state-of-the-art integrated optical memory technologies and optical RAMs, shedding light on the physical mechanisms behind demonstrated optical memory devices. In the same analysis, optical memory implementations are classified and evaluated via their performance metrics, highlighting the benefits of different optical technologies. The authors provide a comprehensive guide for the transition from elementary optical memory units towards advanced memory functionalities such as optical RAM operation, and report recent achievements in this direction. Finally, they present an analysis of the next steps optical memory technologies must take to offer a viable and practical alternative memory roadmap.

These scientists summarize some of the key-findings of their review study:

"Optical memories have gradually penetrated into multiple application sectors that include processing, routing, and computing however, modern memory applications call for advanced memory schemes with random access functionality on top of the simple storage mechanism."

"Optical memories have witnessed an impressive progress in terms of footprint. Their footprint reduced by 12-orders-of-magnitude going from m2 to μm2 during the last 20 years while at the same time electronic counterparts were reduced only by 3 orders-of-magnitude." they added.

"Optical memory integration roadmap has to be shaped around a high-yield and low-cost fabrication technology allowing for dense optical memory architectures to arrive at scales, complexities and cost-efficiencies similar to those of their electronic counterparts." the scientist forecast.

Credit: 
Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS

Mindfulness training helps men manage anger

The last few months have been particularly difficult for people living in a violent relationship.

But a few glimmers of hope are finally emerging from the coronavirus nightmare.

"For a lot of people, the shutdown has been an extreme situation with a lot of stress. Those of us who work with people on anger management have felt really concerned about what might be going on within the four walls of their homes," says Merete Berg Nesset.

For many years Nesset has worked on treating angry people who beat, yell and threaten. Now she is on the flip side, working on a doctorate at the Norwegian University of Science and Technology on the same topic.

COVID-19 has taken a toll. People have lost their jobs. No one is quite sure what will happen with the economy. Many people are feeling uncertain about the future.

"We know that financial difficulties, unemployment and psychological challenges are linked to aggression and violence. The level of stress clearly increases further when parents also become responsible for teaching their children at home. Situations that are already difficult have escalated for a lot of people who have conflicts from before or a prior mental health problem, because there are fewer opportunities to get away," says Nesset.

But there is hope.

Nesset has just published a study showing that treatment can work very well. What she did was to divide 125 men who applied for help with anger management into two groups.

One group received cognitive-behavioural group therapy using what is called the Brøset model.

The other group participated in a stress management course based on mindfulness. Partners in both groups participated through several surveys conducted before, during and after treatment.

The results following treatment were equally good for both groups:

Prior to treatment, 60 per cent of the men had committed sexual violence against their relationship partners - that is, they had demanded sex or used threats to obtain sex from a partner. Almost no one reported such violent episodes after treatment.

Prior to treatment, 85 per cent of the men reported physical violence. A large percentage had committed violence that resulted in harm to their partner. After treatment, this percentage dropped to ten per cent.

Prior to treatment, 87 per cent of participants reported psychological or emotional violence, such as threats and derogatory comments. This number declined by 25 per cent but was not as dramatic a drop as for the other types of violence. Nesset says it takes a long time to experience feeling safe.

"There was a high level of both sexual and physical violence before treatment began. It was more than we'd imagined beforehand. When we checked what the partners experienced, we got a slightly different picture of what was actually going on. We know that a lot of angry men hit their partners, but we were surprised that so many committed sexual assaults. At this point the agreement between the husband and partner was low - that is, the partner reported more cases than the man did," says Nesset.

The backdrop for the study was to check whether treating anger problems using the Brøset model has an effect. In many studies, the control group receives a placebo, or no treatment.

"Unfortunately, about 25 per cent of all killings in Norway are partner killings. Because domestic violence is a public health problem with major health consequences for those exposed to the violence, we found it unethical not to offer treatment. So what we studied was the effectiveness of two types of treatment. Both worked," says Nesset.

One treatment involved eight group sessions of a type of mindfulness training called MBSR, which stands for mindfulness-based stress reduction. The course was led by psychologist Nina Flor Thunold, who at the time worked at St. Olavs Hospital, Østmarka division, in a district east of Trondheim.

The course was not designed specifically for anger management but for illness in general, and the content was defined in advance - regardless of why any individual was in the course.

The second treatment involved 15 sessions of cognitive-behavioural group therapy. The program was developed at St. Olavs Hospital and is called the Brøset model. The therapy has different stages, with the first phase being to stop the violence. According to Nesset, you can do that without understanding why you become violent.

After this phase you explore patterns of violence and map the situations that trigger violence for you, what thoughts and feelings arise and what actions repeat themselves.

"Some people who are violent are offended easily. During treatment, participants find out what makes them feel offended, what thoughts and feelings they should pay particular attention to, and we create action plans for how the they can handle negative emotions without using violence. A lot of the treatment is about understanding yourself," says Nesset.

She says the decline in violence was greater than she had anticipated.

"I didn't expect the decline to be so big. It's really promising that the treatment works," says Nesset.

To clarify: in the past, smaller studies have compared people on a waiting list for treatment with people already receiving treatment. Those who received treatment experienced a greater reduction in violence than those on the waiting list.

Treatment that uses the Brøset model is offered throughout Norway. Each year, about 400 men get help to become a better version of themselves. Those who need help will receive individual support until a group course is available.

Credit: 
Norwegian University of Science and Technology

Genetic testing could improve screening for osteoporosis

An international team of scientists has developed a novel genetic measure that could dramatically improve how doctors assess the risk of sustaining a fracture due to osteoporosis or fragility.

A full genome profile can be generated for approximately £35-40 per patient, a cost that is comparable to or lower than the cost of an X-ray to measure bone mineral density.

By generating a single genomic profile, researchers can also identify multiple risk factors for diseases like cancer, cardiovascular disease and osteoporosis.

Embedding genetic testing into routine clinical practice could improve the efficiency and cut costs of screening for common diseases such as osteoporosis, according to new research.

An international team of scientists, including researchers from the University of Sheffield, has developed a novel genetic measure that could dramatically improve how doctors assess the risk of sustaining a fracture due to osteoporosis or fragility.

The new study published in the journal PLOS Medicine demonstrates how more extensive applications of genomic screening might be used to improve the delivery of healthcare.

Researchers tested whether a risk score gathered from information across a panel of over 20,000 genes could be used as a substitute for a measure of bone strength called heel quantitative ultrasound speed of sound (SOS).

The risk score, termed gSOS, was developed using the UK Biobank which provided SOS measurements for 341,449 individuals. The international research team then applied gSOS alongside the Sheffield-developed FRAX tool, which evaluates the fracture risk of patients based on individual models that integrate clinical risk factors as well as bone mineral density, to determine its impact on the need for actual measurements of bone strength which are usually carried out in hospital by X-ray.

The study estimated that the application of gSOS could reduce the number of FRAX tests and bone mineral density-based FRAX tests by 37 per cent and 41 per cent, respectively, while maintaining a high sensitivity and specificity to identify individuals who should be recommended for intervention.
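The screening logic is a triage: a sufficiently extreme genetic score settles the decision on its own, and only intermediate scores are referred for an actual bone-density measurement. A hypothetical sketch follows; the thresholds are invented for illustration, whereas the study derived its cutoffs to preserve sensitivity and specificity:

```python
# Hypothetical gSOS triage sketch; thresholds are invented, not the study's.
def needs_bmd_measurement(gsos_z: float, low: float = -1.0, high: float = 1.0) -> bool:
    """Refer for a BMD-based FRAX assessment only if the genetic score
    is too intermediate to settle the decision by itself."""
    return low < gsos_z < high

cohort = [-2.1, -0.4, 0.3, 1.8, 0.9]   # standardized gSOS scores (invented)
referred = [z for z in cohort if needs_bmd_measurement(z)]
print(f"{len(referred)} of {len(cohort)} patients referred for BMD testing")
# -> 3 of 5; the clearly high- and low-risk patients skip the scan
```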

A full genome profile can be generated for approximately £35-40 per patient, a cost that is comparable to or lower than the cost of an X-ray to measure bone mineral density.

Eugene McCloskey, Professor in Adult Bone Diseases at the University of Sheffield and Director of the Medical Research Council Versus Arthritis Centre for Integrated Research in Musculoskeletal Ageing, said: "Fractures can have severe consequences, including hospitalisation, prolonged rehabilitation, loss of independence and even death.

"As the population ages, the urgency of improving preventive measures becomes all the more intense. Bone strength, a key component underlying fracture risk, is highly heritable (up to 85 per cent determined by our genes), and is therefore a strong candidate for assessment through genetic screening.

"While the impact of this research is not immediate as it requires each individual's genome to be available for calculation of their gSOS, it is of great importance for the future of medical practice."

Lead researcher, Dr Brent Richards, a geneticist at the Lady Davis Institute's Centre for Clinical Epidemiology and Professor of Medicine, Human Genetics, and Epidemiology and Biostatistics at McGill University, said: "By generating a single genomic profile, we can identify multiple risk factors for diseases like cancer, cardiovascular disease and osteoporosis.

"Importantly, we could reduce the number of specific tests to which we need to subject our patients if we knew whether they have the genetic markers predisposing them to particular conditions.

"A simple investment in genotyping would give us a more refined understanding of who should be screened, allowing us to concentrate on individuals at higher risk."

Credit: 
University of Sheffield