Minimum legal age for cannabis use should be 19, study suggests

The optimal minimum legal age for non-medical cannabis use is 19 years of age, according to a study published in BMC Public Health.

A team of researchers at the Memorial University of Newfoundland, Canada, investigated how Canadians who started using cannabis at different young ages differed across important later-life outcomes (educational attainment, cigarette smoking, and self-reported general and mental health).

Dr Hai Nguyen, lead author of the study, said: "Prior to legalisation, the medical community recommended a minimum legal age of 21 or 25 for non-medical cannabis use in Canada. This recommendation was based on scientific evidence around the potential adverse impacts of cannabis on cognitive development. However, policymakers feared a high minimum legal age may lead to large underground markets, with those under the legal age continuing to use cannabis illegally. Ultimately, a lower legal age of 18 or 19 was decided across provinces; however, there remain ongoing debate and calls to raise the legal age to 21."

To determine whether the age at which people begin to use cannabis may have an impact on later-life outcomes, the authors analysed data from the nationally representative Canadian Tobacco Use Monitoring Surveys (CTUMS) and Canadian Tobacco, Alcohol and Drugs Surveys (CTADS) conducted between 2004 and 2015, which annually interview up to 20,000 individuals aged 15 years and older.

The authors found different optimal minimum legal ages depending on the outcome of interest. For smoking, respondents who first used cannabis aged 19-20 were less likely to smoke cigarettes later in life than those who first used cannabis aged 18, but no significant difference was found in those who started using cannabis at an older age, indicating an optimum legal age of 19.

The number of respondents reporting a high level of completed education was 16% higher among those who first used cannabis between the ages of 21 and 24, relative to those who first used it before age 18, suggesting an optimal minimum legal age of 21.

General health was significantly better among those who started using cannabis aged 18, relative to those who started before age 18, but no significant difference was found among those who started at an older age, suggesting a minimum legal age of 18. However, mental health was significantly better among those who first used cannabis aged 19-20 than among those who started before age 18, suggesting a minimum legal age of 19.

Dr Nguyen said: "The lower level of completed education reported in those who first used cannabis at an earlier age may reflect poor neurological development or a higher 'drop-out' rate from further education. It is also possible that those who initiate cannabis use early may use it as a gateway for further illicit drug use, resulting in poorer health in later life, which may explain the poor general or mental health scores recorded in the study."

Dr Nguyen said: "Taking into account all measured outcomes, our results indicate that, contrary to the Canadian federal government's recommendation of 18 and the medical community's support for 21 or 25, 19 is the optimal minimum legal age for non-medical cannabis use. Keeping the legal age below 21 may strike a balance between potential increases in underground markets and illegal use, and avoiding the adverse outcomes associated with starting to use cannabis at an earlier age."

The authors caution that as the study used self-reported data, respondents may not have accurately recalled the age at which they first used cannabis. Data was also collected prior to legalization of non-medical cannabis in Canada and the authors were unable to predict the impact of cannabis use after legalization. They suggest that further research is needed to establish potential causal effects between the age at which cannabis is first used and the outcomes measured in the study, and could also focus on additional outcomes such as driving behaviours and street drug use.

Credit: 
BMC (BioMed Central)

The Lancet: COVID-19 may be linked to rare inflammatory disorder in young children, first detailed reports on 10 patients from Italy suggest

Detailed analysis from the epicentre of the Italian COVID-19 outbreak describes an increase in cases of a rare Kawasaki-like disease in young children, adding to reports of similar cases from New York, USA, and South East England, UK. The syndrome is rare, and experts stress that children remain minimally affected by SARS-CoV-2 infection overall.

Doctors in the Bergamo province of Italy [1] have described a series of ten cases of young children with symptoms similar to a rare inflammatory disease called Kawasaki Disease appearing since the COVID-19 pandemic arose in the Lombardy region of Northern Italy, in a report published today in The Lancet.

Only 19 children had been diagnosed with the condition in that area in the five years up to the middle of February 2020, but there were 10 cases between 18 February and 20 April 2020. The latest reports could represent a 30-fold increase in the number of cases, although researchers caution that it is difficult to draw firm conclusions with such small numbers.

Eight of the 10 children brought to hospital after 18 February 2020 tested positive for SARS-CoV-2 in an antibody test. All of the children in the study survived, but those who became ill during the pandemic displayed more serious symptoms than those diagnosed in the previous five years.

Kawasaki Disease is a rare condition that typically affects children under the age of five. It causes blood vessels to become inflamed and swollen. The typical symptoms include fever and rash, red eyes, dry or cracked lips or mouth, redness on the palms of the hands and soles of the feet, and swollen glands. Typically, around a quarter of children affected experience cardiac complications, but the condition is rarely fatal if treated appropriately in hospital. It is not known what triggers the condition but it is thought to be an abnormal immune overreaction to an infection.

Dr Lucio Verdoni, author of the report from the Hospital Papa Giovanni XXIII in Bergamo, Italy, said: "We noticed an increase in the number of children being referred to our hospital with an inflammatory condition similar to Kawasaki Disease around the time the SARS-CoV-2 outbreak was taking hold in our region. Although this complication remains very rare, our study provides further evidence on how the virus may be affecting children. Parents should follow local medical advice and seek medical attention immediately if their child is unwell. Most children will make a complete recovery if they receive appropriate hospital care." [2]

The study authors carried out a retrospective review of patient notes from all 29 children admitted to their paediatric unit with symptoms of Kawasaki Disease from 1 January 2015 to 20 April 2020.

Before the COVID-19 outbreak, the hospital treated around one case of Kawasaki Disease every three months. Between 18 February and 20 April 2020, 10 children were treated for symptoms of the disease. The increase could not be explained by an increase in hospital admissions, as the number of patients admitted during that time period was sixfold lower than before the virus was first reported in the area.

Children who presented at hospital with symptoms after 18 February 2020 were older on average (mean age 7.5 years) than the group diagnosed in the previous five years (mean age 3 years). They also appeared to experience more severe symptoms than past cases, with more than half (60%, 6/10 cases) having heart complications, compared with just 10% of those treated before the pandemic (2/19 cases). Half of the children (5/10) had signs of toxic shock syndrome, whereas none of the children treated before February 2020 had this complication. All patients before and after the pandemic received immunoglobulin treatment, but 80% of children during the outbreak (8/10) required additional treatment with steroids, compared with 16% of those in the historical group (3/19).

Two of the patients treated after 18 February 2020 (2/10) tested negative for SARS-CoV-2 on an antibody test. The researchers say the test used is not 100% accurate (95% sensitivity and 85-90% specificity), suggesting these could be false negative results. In addition, one of the patients had recently been treated with a high dose of immunoglobulin, a standard treatment for Kawasaki Disease, which could have masked any antibodies to the virus.
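The likelihood of such false negatives follows directly from the reported test accuracy. A minimal sketch of the arithmetic, where the prevalence figure is a purely hypothetical prior (not a value from the study):

```python
# Sketch: how imperfect test accuracy produces false negatives.
# Sensitivity and specificity are the figures quoted in the report;
# the prevalence below is a hypothetical assumption for illustration.
sensitivity = 0.95      # P(test positive | infected)
specificity = 0.875     # midpoint of the reported 85-90% range
prevalence = 0.8        # hypothetical prior that a Kawasaki-like case is SARS-CoV-2 positive

p_false_negative = 1 - sensitivity   # P(test negative | infected)

# Bayes' rule: probability that a patient who tests negative is actually infected
p_neg = prevalence * p_false_negative + (1 - prevalence) * specificity
p_infected_given_neg = prevalence * p_false_negative / p_neg

print(round(p_false_negative, 3))      # 0.05
print(round(p_infected_given_neg, 3))  # 0.186
```

Under these toy numbers, roughly one in five negative results would still come from an infected patient, which is why the researchers treat the two negative tests with caution.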

Taken together, the authors say that their findings represent an association between an outbreak of SARS-CoV-2 virus and an inflammatory condition similar to Kawasaki Disease in the Bergamo province of Italy. The researchers say the COVID-related cases should be classified as 'Kawasaki-like Disease', as the symptoms were different and more severe in patients treated after March 2020. However, they caution that their report is based on only a small number of cases, and larger studies will be required to confirm the association. They also warn that other countries affected by the COVID-19 pandemic might expect to see a similar rise in cases of this Kawasaki-like disease.

Dr Lorenzo D'Antiga, lead author of the study from the Hospital Papa Giovanni XXIII in Bergamo, Italy, said: "We are starting to see case reports of children presenting at hospital with signs of Kawasaki Disease in other areas hit hard by the COVID-19 pandemic, including New York and South East England [3, 4]. Our study provides the first clear evidence of a link between SARS-CoV-2 infection and this inflammatory condition, and we hope it will help doctors around the world as we try to get to grips with this unknown virus." [2]

Dr Annalisa Gervasoni, another author of the study and a Paediatric Specialist at the Hospital Papa Giovanni XXIII in Bergamo, Italy, said: "In our experience, only a very small proportion of children infected with SARS-CoV-2 develop symptoms of Kawasaki Disease. However, it is important to understand the consequences of the virus in children, particularly as countries around the world grapple with plans to start relaxing social distancing policies." [2]

Writing in a linked Comment, Professor Russell Viner, President of the Royal College of Paediatrics and Child Health and Professor of Adolescent Health, UCL Great Ormond Street Institute of Child Health, UK, (who was not involved in the study), said: "Although the Article suggests a possible emerging inflammatory syndrome associated with COVID-19, it is crucial to reiterate--for parents and health-care workers alike--that children remain minimally affected by SARS-CoV-2 infection overall. Understanding this inflammatory phenomenon in children might provide vital information about immune responses to SARS-CoV-2 and possible correlates of immune protection that might have relevance both for adults and children. In particular, if this is an antibody-mediated phenomenon, there might be implications for vaccine studies, and might also explain why some children become very ill with COVID-19, while the majority are unaffected or asymptomatic."

Credit: 
The Lancet

Researchers invent technology to remedy 3D printing's 'weak spot'

Allowing users to create objects from simple toys to custom prosthetic parts, plastics are a popular 3D printing material. But these printed parts are mechanically weak -- a flaw caused by the imperfect bonding between the individual printed layers that make up the 3D part.

Researchers at Texas A&M University, in collaboration with scientists at the company Essentium, Inc., have now developed the technology needed to overcome 3D printing's "weak spot." By integrating plasma science and carbon nanotube technology into standard 3D printing, the researchers welded adjacent printed layers more effectively, increasing the overall reliability of the final part.

"Finding a way to remedy the inadequate bonding between printed layers has been an ongoing quest in the 3D printing field," said Micah Green, associate professor in the Artie McFerrin Department of Chemical Engineering. "We have now developed a sophisticated technology that can bolster welding between these layers all while printing the 3D part."

Their findings were published in the February issue of the journal Nano Letters.

Plastics are commonly used for extrusion 3D printing, known technically as fused-deposition modeling. In this technique, molten plastic is squeezed out of a nozzle that prints parts layer by layer. As the printed layers cool, they fuse to one another to create the final 3D part.

However, studies show that these layers join imperfectly; printed parts are weaker than identical parts made by injection molding where melted plastics simply assume the shape of a preset mold upon cooling. To join these interfaces more thoroughly, additional heating is required, but heating printed parts using something akin to an oven has a major drawback.

"If you put something in an oven, it's going to heat everything, so a 3D-printed part can warp and melt, losing its shape," Green said. "What we really needed was some way to heat only the interfaces between printed layers and not the whole part."

To promote inter-layer bonding, the team turned to carbon nanotubes. Since these carbon particles heat in response to electrical currents, the researchers coated the surface of each printed layer with these nanomaterials. Similar to the heating effect of microwaves on food, the team found that these carbon nanotube coatings can be heated using electric currents, allowing the printed layers to bond together.
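The heating described above is resistive (Joule) heating: the energy deposited scales with the square of the current. A back-of-the-envelope sketch, in which every number is a hypothetical illustration rather than a value from the study:

```python
# Toy illustration of resistive (Joule) heating of a thin nanotube coating.
# All values are hypothetical; none come from the study itself.
current = 0.05        # A, current carried through the coating
resistance = 200.0    # ohms, resistance of the nanotube layer
seconds = 2.0         # duration of heating

heat_joules = current**2 * resistance * seconds   # Q = I^2 * R * t

mass_kg = 1e-5           # mass of the thin interface layer being heated
specific_heat = 1500.0   # J/(kg*K), typical order of magnitude for a polymer

delta_T = heat_joules / (mass_kg * specific_heat)  # temperature rise in kelvin
print(round(heat_joules, 3), round(delta_T, 1))    # 1.0 66.7
```

Because only the thin coated interface carries the current, even a modest energy input produces a large local temperature rise, which is the point of the approach: the interfaces fuse while the bulk of the part stays cool.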

To apply electricity as the object is being printed, the currents must overcome a tiny space of air between the printhead and the 3D part. One option to bridge this air gap is to use metal electrodes that directly touch the printed part, but Green said this contact can introduce inadvertent damage to the part.

The team collaborated with David Staack, associate professor in the J. Mike Walker '66 Department of Mechanical Engineering, to generate a beam of charged air particles, or plasma, that could carry an electrical charge to the surface of the printed part. This technique allowed electric currents to pass through the printed part, heating the nanotubes and welding the layers together.

With the plasma technology and the carbon nanotube-coated thermoplastic material in place, Texas A&M and Essentium researchers added both these components to conventional 3D printers. When the researchers tested the strength of 3D printed parts using their new technology, they found that their strength was comparable to injection-molded parts.

"The holy grail of 3D printing has been to get the strength of the 3D-printed part to match that of a molded part," Green said. "In this study, we have successfully used localized heating to strengthen 3D-printed parts so that their mechanical properties now rival those of molded parts. With our technology, users can now print a custom part, like an individually tailored prosthetic, and this heat-treated part will be much stronger than before."

Credit: 
Texas A&M University

New study could help better predict rainfall during El Niño

image: Composites of CPC precipitation (millimeters/day) broken down by MJO phase from 1979 to 2017 for active MJO days in November-April during all El Niño (positive ENSO) days. The columns show (left) MJO-only rain, (center) ENSO-only rain, and (right) MJO + ENSO rain.

Image: 
Marybeth Arcodia

MIAMI--Researchers at the University of Miami (UM) Rosenstiel School of Marine and Atmospheric Science have uncovered a new connection between tropical weather events and U.S. rainfall during El Niño years. The results can help explain why California received significantly less rainfall than predicted during the 2015 El Niño event while massive flooding occurred in the Mississippi River basin.

UM Rosenstiel School graduate student Marybeth Arcodia analyzed 39 years of weather data from the National Centers for Environmental Prediction-National Center for Atmospheric Research Reanalysis Project to understand how the Madden-Julian Oscillation (MJO), a phenomenon of sub-seasonal atmospheric variability in the tropical Indo-Pacific Oceans, leads to pressure and rainfall anomalies over the North Pacific and North America.

Results from the study show that when both an El Niño Southern Oscillation (ENSO) and MJO event are occurring simultaneously, the rainfall pattern typically seen from ENSO can be considerably altered for a few days to weeks due to interference from the MJO.

The researchers found that ENSO modifies the teleconnection signals associated with the Madden-Julian Oscillation over the United States while simultaneously interfering with the MJO signals themselves, resulting in significantly enhanced or masked rainfall anomalies in the U.S.

"Although the source of changes in rainfall patterns is coming from thousands of miles away in the tropical Indian and Pacific oceans, our study shows just how connected the tropics and the United States can be," said Arcodia, the study's lead author and a UM Rosenstiel School Ph.D. student. "If we have a better grasp on how the climate system works and how different aspects of our atmosphere work together, we can have more confidence in our weather and climate forecasts."

The results from this study offer potential to help better understand Earth's weather and climate system.

Credit: 
University of Miami Rosenstiel School of Marine, Atmospheric, and Earth Science

Malaria vaccine: Could this 'ingredient' be the secret to success?

This ingredient is an essential component of the malaria parasite, a protein known as RPL6, which makes the parasite "visible" to a type of immune cells, the T cells, in the liver. The researchers added the protein to their existing vaccine strategy, known as 'prime and trap', and tested their discovery in mice, finding the combination offered complete protection against malaria.

Malaria parasites are deposited on the skin when an infected mosquito feeds. They quickly make their way to the liver, where they develop for several days before progressing to the bloodstream - a dangerous step in which they multiply and run rampant. At this stage, symptoms can include fever, multi-organ failure and potentially death.

The research, published today in Cell Host & Microbe, builds upon the 2016 discovery that demonstrated the existence of T cells that are resident in the liver and can efficiently protect against malaria, which led to the development of the 'prime and trap' vaccination strategy. This vaccine works in two stages. The first, a 'priming' stage, sets the immune response in motion, boosting the army of malaria-specific T cells in the body and helping to attract them to the liver. The second 'trapping' stage pulls an abundance of these T cells into the liver and then converts these cells into liver-resident T cells to permanently guard the liver from malaria infection.

The research team - co-led by University of Melbourne Professor Bill Heath and Dr Daniel Fernandez-Ruiz from the Doherty Institute (a joint venture between the University of Melbourne and The Royal Melbourne Hospital) and Associate Professor Irina Caminschi from the Monash Biomedicine Discovery Institute - believe this breakthrough advances our understanding of the requirements for protective immunity to malaria; a significant question that has been baffling scientists for decades.

Dr Fernandez-Ruiz said identifying specific antigens that can be used in a vaccine has been an enormous challenge in malaria research.

"With more than 200 million cases of malaria each year and almost half a million deaths, many of them children under the age of five, the need for a malaria vaccine is urgent," said Dr Fernandez-Ruiz.

"Our discovery is a significant step forward in our efforts to produce an effective and efficient vaccine, which completely protects mice against infection by malaria parasites."

Associate Professor Irina Caminschi said: "By uncloaking the parasite and engineering an army of immune cells ready to kill parasite-infected cells, we effectively nipped the disease in the bud.

"This study has opened the door for further research, such as identifying equivalent malaria antigens that are protective for humans, taking us one step closer to developing an effective malaria vaccine," she concluded.

Credit: 
University of Melbourne

Adolescence is ruff for dogs too

image: Dr Lucy Asher and her dog Martha

Image: 
Glen Asher-Gordon

New research led by scientists from Newcastle University and the University of Nottingham has shown that typical teenage behaviour doesn't just occur in young humans - it happens in dogs too.

The study, headed by Dr Lucy Asher from Newcastle University, is the first to find evidence of adolescent behaviour in dogs.

The researchers found dogs were more likely to ignore commands given by their caregiver and were harder to train at the age of eight months, when they are going through puberty. This behaviour was more pronounced in dogs which had an insecure attachment to their owner.

But Dr Asher, a Senior Lecturer in Precision Animal Science, in the University's School of Natural and Environmental Sciences, warns adolescence can be a vulnerable time for dogs as many are taken to shelters for rehoming at this age.

"This is a very important time in a dog's life," she explains. "This is when dogs are often rehomed because they are no longer a cute little puppy and suddenly, their owners find they are more challenging and they can no longer control them or train them. But as with human teenage children, owners need to be aware that their dog is going through a phase and it will pass."

The team, which also included researchers from the University of Edinburgh, looked at a group of 69 dogs to investigate behaviour in adolescence. They monitored obedience in the Labradors, Golden Retrievers and cross breeds of the two at the ages of five months - before adolescence - and eight months - during adolescence.

Dogs took longer to respond to the 'sit' command during adolescence, but only when the command was given by their caregiver, not a stranger. The odds of repeatedly not responding to the sit command from the caregiver were higher at eight months compared to five months. However, the response to the 'sit' command improved for a stranger between the five and eight month tests.

Further evidence was found when the team looked at a larger group of 285 Labradors, Golden Retrievers, German Shepherds and cross breeds of them. Owners and a trainer less familiar with each dog filled in a questionnaire looking at 'trainability'. It asked them to rate statements such as: 'Refuses to obey commands, which in the past it was proven it has learned' and 'Responds immediately to the recall command when off lead'.

Caregivers gave lower scores of 'trainability' to dogs around adolescence, compared to when they were aged five months or 12 months. However, trainers again reported an increase in trainability between the ages of five and eight months.

The experts also found that, in common with humans, female dogs with insecure attachments to their caregivers (characterised by higher levels of attention seeking and anxiety when separated from them) were more likely to reach puberty early. This provides the first evidence of a cross-species impact of relationship quality on reproductive timing, highlighting another parallel with parent-child relationships.

Dr Naomi Harvey, co-author of the research from the University of Nottingham's School of Veterinary Medicine and Science and the charity Dogs Trust, says that whilst the results of this study may not come as a surprise to many dog owners, it has important consequences.

"Many dog owners and professionals have long known or suspected that dog behaviour can become more difficult when they go through puberty," says Dr Harvey. "But until now there has been no empirical record of this. Our results show that the behaviour changes seen in dogs closely parallel those of parent-child relationships, as dog-owner conflict is specific to the dog's primary caregiver, and just as with human teenagers, this is a passing phase."

"It's very important that owners don't punish their dogs for disobedience or start to pull away from them emotionally at this time" added Dr Asher. "This would be likely to make any problem behaviour worse, as it does in human teens".

Credit: 
Newcastle University

COVID-19's silent spread: Princeton study explores role of symptomless transmission

image: Researchers at Princeton University looked at the evolutionary strategies that pathogens employ to spread through a population and found that symptomless transmission, a tactic employed by the virus that causes COVID-19, can be a successful strategy for spread through the population.

Image: 
Chadi Saad-Roy, Princeton University

COVID-19's rapid spread throughout the world has been fueled in part by the virus' ability to be transmitted by people who are not showing symptoms of infection.

Now, a study by researchers at Princeton has found that this silent phase of transmission can be a successful evolutionary strategy for pathogens such as viruses like the one that causes COVID-19. The study was published May 8 in the journal Proceedings of the National Academy of Sciences.

The study examined the pros and cons of silent transmission on the pathogen's long-term survival. Does transmission without symptoms enable the pathogen to infect greater numbers of people? Or does the lack of symptoms eventually lessen transmission and reduce the pathogen's long-term survival?

The answer could inform how public health experts plan control measures such as quarantines, testing and contact tracing.

"An asymptomatic stage for various reasons could provide certain benefits to the pathogen," said Bryan Grenfell, Princeton's Kathryn Briger and Sarah Fenton Professor of Ecology and Evolutionary Biology and Public Affairs, Woodrow Wilson School. "With the COVID-19 crisis, the importance of this asymptomatic phase has become extremely relevant."

Like more complex organisms, viruses can evolve by natural selection. New variants are generated by mutation and if these changes benefit pathogen transmission, then that strain of the virus will spread.

Species with strategies that contribute to their success will survive, while species with strategies that don't promote transmission -- such as killing the host before the virus can transmit to new susceptible individuals -- will eventually die out.

"Viral evolution involves a tradeoff between increasing the rate of transmission and maintaining the host as a base of transmission," said Simon Levin, Princeton's James S. McDonnell Distinguished University Professor in Ecology and Evolutionary Biology. "Species that navigate this tradeoff more effectively than others will come to displace those others in the population."

Levin said it is useful to think about disease from the viewpoint of a game between the pathogen and the host. "These are host-parasite interactions," Levin said, "and thinking about them from an evolutionary perspective is something we, along with many other scientists, have been interested in for a long time."

As the COVID-19 pandemic shows, a silent infection has certain short-term advantages. It makes control strategies -- such as identification, quarantine and contact tracing -- difficult to implement. Infectious people who lack symptoms tend to go about their lives, coming in contact with many susceptible people. In contrast, a person who develops a fever and cough may be more likely to self-isolate by, for example, staying home from work.

However, there are also drawbacks: asymptomatic people may generate fewer infectious particles, and thus fewer will escape from the infected person in, for example, a violent sneeze or forceful cough. Overall transmission could therefore be reduced over time.

The researchers used disease modeling to explore the tradeoffs between these scenarios.

They undertook the study long before the novel coronavirus burst on the scene. In fact, graduate student Chadi Saad-Roy started the study in May 2019, initially to consider influenza, which also has significant asymptomatic infection.

"I wondered why asymptomatic flu would arise in evolution," Saad-Roy said, "and so as a team we formulated a simple model to try to understand why evolution would favor such behavior."

To conduct the research, Saad-Roy worked with Grenfell, Levin and Ned Wingreen, Princeton's Howard A. Prior Professor in the Life Sciences and a professor of molecular biology and the Lewis-Sigler Institute for Integrative Genomics.

Pathogens can exhibit a variety of behaviors that contribute to their spread.

Some viruses, such as HIV, spread before symptoms are identified. Other viruses transmit around the time that symptoms appear. For example, the now-eradicated virus that caused smallpox tended to generate significant symptoms by the time transmission started.

Most pathogens probably employ a combination of silent and symptomatic strategies.

To study the effect of symptomless transmission, the team made modifications to a standard mathematical model of how a disease spreads through a population. The model breaks the population up into compartments representing susceptible, infected and recovered individuals.

In the team's version of the model, the researchers further broke the "infected" compartment into two stages. In the first infected stage, the researchers could vary the level of symptoms so that some individuals will have no symptoms, others will have some symptoms and others will have significant symptoms. In the second infected stage, the individuals are fully symptomatic. The team focused not just on the effect of symptom variation on disease spread, but also on the evolutionary consequences of exhibiting varying levels of symptoms in the first stage.
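The compartmental structure described above can be sketched as a simple set of difference equations. This is a minimal toy version with two infected stages, both transmitting; all rate parameters below are hypothetical illustrations, not values from the study:

```python
# Minimal sketch of a two-stage S-I1-I2-R compartmental model, integrated
# with a forward Euler loop. Parameters are hypothetical, for illustration only.
def run_model(beta1=0.3, beta2=0.2, g1=0.2, g2=0.1, days=200, dt=0.1):
    # S: susceptible, I1: first (possibly symptomless) infected stage,
    # I2: fully symptomatic second stage, R: recovered (fractions of population)
    S, I1, I2, R = 0.999, 0.001, 0.0, 0.0
    for _ in range(int(days / dt)):
        new_inf = (beta1 * I1 + beta2 * I2) * S   # both stages can transmit
        dS = -new_inf
        dI1 = new_inf - g1 * I1                   # progression to symptomatic stage
        dI2 = g1 * I1 - g2 * I2                   # recovery
        dR = g2 * I2
        S += dS * dt; I1 += dI1 * dt; I2 += dI2 * dt; R += dR * dt
    return S, I1, I2, R

S, I1, I2, R = run_model()
print(round(R, 3))  # final epidemic size under these toy parameters
```

Varying `beta1` relative to `beta2` is the knob that corresponds to the team's question: how much transmission happens in the first, potentially symptomless stage versus the fully symptomatic one.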

The team found that successful strategies could emerge whether the first stage of infection was completely symptomless, fully symptomatic, or somewhere in between. They also found that the degree of symptoms in that first stage, from none to maximum, could be altered by small changes in disease control strategies.

The implication of this last part of the analysis is that disease control strategies could, over long time periods, influence which strategy a pathogen deploys, and thus have impacts on the course of an epidemic.

Saad-Roy also found that the model generalizes many of the epidemiological models being employed to understand diseases. "It's a general framework to explain a broader range of epidemiological models," he said.

He said he is not surprised that the virus that causes COVID-19 employs symptomless spread. "Based on our model," he said, "it's a natural evolutionary end point for certain diseases."

Credit: 
Princeton University

Not all psychopaths are violent; a new study may explain why some are 'successful' instead

RICHMOND, Va. (May 12, 2020) -- Psychopathy is widely recognized as a risk factor for violent behavior, but many psychopathic individuals refrain from antisocial or criminal acts. Understanding what leads these psychopaths to be "successful" has been a mystery.

A new study conducted by researchers at Virginia Commonwealth University sheds light on the mechanisms underlying the formation of this "successful" phenotype.

"Psychopathic individuals are very prone to engaging in antisocial behaviors but what our findings suggest is that some may actually be better able to inhibit these impulses than others," said lead author Emily Lasko, a doctoral candidate in the Department of Psychology in the College of Humanities and Sciences. "Although we don't know exactly what precipitates this increase in conscientious impulse control over time, we do know that this does occur for individuals high in certain psychopathy traits who have been relatively more 'successful' than their peers."

The study, "What Makes a 'Successful' Psychopath? Longitudinal Trajectories of Offenders' Antisocial Behavior and Impulse Control as a Function of Psychopathy," will be published in a forthcoming issue of the journal Personality Disorders: Theory, Research, and Treatment.

When describing certain psychopathic individuals as "successful" versus "unsuccessful," the researchers are referring to life trajectories or outcomes. A "successful" psychopath, for example, might be a CEO or lawyer high in psychopathic traits, whereas an "unsuccessful" psychopath might have those same traits but is incarcerated.

The study tests a compensatory model of "successful" psychopathy, which theorizes that relatively "successful" psychopathic individuals develop greater conscientious traits that serve to inhibit their heightened antisocial impulses.

"The compensatory model posits that people higher in certain psychopathic traits (such as grandiosity and manipulation) are able to compensate for and overcome, to some extent, their antisocial impulses via increases in trait conscientiousness, specifically impulse control," Lasko said.

To test this model, the researchers studied data collected on 1,354 serious juvenile offenders who were adjudicated in court systems in Arizona and Pennsylvania.

"Although these participants are not objectively 'successful,' this was an ideal sample to test our hypotheses for two main reasons," the researchers write. "First, adolescents are in a prime developmental phase for the improvement of impulse control, allowing us the longitudinal variability we would need to test our compensatory model. Second, offenders are prone to antisocial acts, by definition, and their rates of recidivism provided a real-world index of 'successful' versus 'unsuccessful' psychopathy phenotypes."

The study found that higher initial psychopathy was associated with steeper increases in general inhibitory control and the inhibition of aggression over time. That effect was magnified among "successful" offenders, or those who reoffended less.

Its findings lend support to the compensatory model of "successful" psychopathy, Lasko said.

"Our findings support a novel model of psychopathy that we propose, which runs contradictory to the other existing models of psychopathy in that it focuses more on the strengths or 'surpluses' associated with psychopathy rather than just deficits," she said. "Psychopathy is not a personality trait simply composed of deficits -- there are many forms that it can take."

Lasko is a researcher in VCU's Social Psychology and Neuroscience Lab, which seeks to understand why people try to harm one another. David Chester, Ph.D., director of the lab and an assistant professor of psychology, is co-author of the study.

The study's findings could be useful in clinical and forensic settings, Lasko said, particularly for developing effective prevention and early intervention strategies, as they could help identify strengths psychopathic individuals possess that might deter future antisocial behavior.

Credit: 
Virginia Commonwealth University

Is pulmonary rehab after hospitalization for COPD associated with better survival?

What The Study Did: Claims data for nearly 200,000 Medicare patients were used to examine the association between starting pulmonary rehabilitation within 90 days of being hospitalized for chronic obstructive pulmonary disease (COPD) and survival after one year. Pulmonary rehabilitation involves exercise training and self-management education.

Authors: Peter K. Lindenauer, M.D., M.Sc., of the University of Massachusetts Medical School-Baystate, Springfield, Massachusetts, is the corresponding author.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/ 

(doi:10.1001/jama.2020.4437)

Editor's Note: The article includes conflict of interest and funding/support disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, conflicts of interest and financial disclosures, and funding and support.

Credit: 
JAMA Network

Learning what's dangerous is costly, but social animals have a way of lowering the price

What would you do if the person standing next to you suddenly screamed and ran away? Would you be able to carry on calmly with what you were doing, or would you panic? Unless you're James Bond, you'd most likely go for the second option: panic.

But now imagine another scenario: while out on the street, the person walking in front of you suddenly freezes: she stops moving and becomes perfectly still. What would you do?

"Here the answer becomes more tricky", says Marta Moita, head of the Behavioral Neuroscience Lab at the Champalimaud Centre for the Unknown in Lisbon, Portugal. "Even though freezing is one of the three basic instinctive defense behaviors [along with fight and flight], animals don't instinctively know that when others freeze, they are actually responding to a threat."

For social animals such as ourselves, being able to tell if a group member senses a threat can be a matter of life and death. How does this learning happen? To find an answer to this question, Moita and her team engaged in a series of studies. Their most recent findings are presented in two scientific articles, one published today (May 12th) in the journal PLOS Biology and another published a few months ago in the journal Current Biology. Together, their results reveal a mechanism by which animals acquire fear of freezing and outline the neural circuitry that underlies the expression of that fear.

"No pain, no gain"

How is it that some fear responses are innate, while others must be learned? The answer is not fully known, but a good guess would be that because the world is ever-changing, animals have to be able to flexibly adapt to their environment.

For instance, when an animal freezes, it essentially stops moving. But is lack of motion necessarily a sign of danger? "The answer is no", says Moita. "There are situations where an animal stops moving that are perfectly benign; it might be grooming or observing something. But then, this harmless cue can transform into a sign of danger. We wanted to find out how it happens."

In the study published in the journal Current Biology, Moita and her team tested various experimental scenarios with rats. They found out that first and foremost, the animal has to go through a process that is called "auto-conditioning", meaning that the learning does not happen by observing others, but through first-hand experience. And more than that, it can only happen if specific criteria are fulfilled. "We were a bit surprised by the results, because it turns out that the learning mechanism is quite strict", says Andreia Cruz, the first author of the study.

The team discovered that for a rat to adopt freezing as a social cue, it has to go through a learning experience that consists of two key components: pain and immobility. Either one without the other is not enough.

"For instance, animals that experience a mild foot shock [which is a painful event] and then freeze as a result, learn to recognize freezing in other group members as a threat. But when we prevented the subsequent freezing response by removing the rat from the experimental box immediately after the foot shock, the learning didn't happen", Cruz explains.

It may seem harsh, but in fact, as Moita points out, this manner of learning is an enormously beneficial way for animals to avoid danger. "The rat underwent a single painful experience [a mild foot shock] that taught it that freezing is a response to a negative event. As a consequence, now it doesn't need to learn first-hand the full range of scenarios that can cause painful experiences. Instead, it just needs to be attentive to how its group members behave."

Hearing and fearing silence

Creating an association between freezing and danger means that new neural connections were formed in the brain. But before diving into the neural circuits, there was still an important question that needed to be addressed: which brain areas might be involved in the expression of this newly learned fear?

"Learning happens by associating cognitive elements that were previously unrelated", Moita explains. "For instance, in the famous Pavlov experiment, dogs learned that the sound of a bell meant that they were about to receive food. So two previously unrelated things - bell sound and food - became associated in the brain."

Moita points out that several cognitive elements may be associated with this newly acquired defensive response, among them a special kind of auditory cue: silence.

The team previously discovered that rats who learned to use freezing as an alarm cue were actually detecting the sudden onset of silence. "When a rat freezes, it stops moving, which effectively means that it stops generating sound", Moita explains. "We found that this transition from sound to silence can become a social cue by which rats recognize that another group member is freezing."

Following this line of thought, the team focused on the brain's fear-learning center and the auditory system. Their results - describing a new neural map that spans these structures - were published today in the journal PLOS Biology.

A newly discovered neural map

The first question that comes to mind is: how can the auditory system hear silence? Moita explains that to answer this question, you have to think about it in reverse. "We believe that it's not silence per se that the brain is detecting, it's actually the cessation of sound."

The auditory system is made up of many thousands of neurons, each of which has a "personal preference" for certain features of auditory information. For example, some neurons respond to high-frequency sounds, others to the onset of sound. And then, there are "offset neurons" that respond to the cessation of sound. Those are the neurons the team suspects to be the ones that detect silence.

"Offset neurons are abundant in a particular area within a brain region called the auditory thalamus. When we blocked the activity of this area, animals that had adopted freezing as a social cue, and would normally respond to the sudden onset of silence, did not", explains Ana Perreira, the first author of the study.

Importantly, this same auditory region connects to the lateral amygdala - a brain area crucial for learning to respond to threatening sounds. Could it also be involved in fearing silence? The team discovered that the answer is "yes". "Our results show that the lateral amygdala is not only important for associating sound and danger, but also silence and danger", says Perreira.

The team used these results, together with others obtained in this study, to generate a map of how the brain expresses fear of freezing. "The pathway we identified expands the network that processes auditory cues in the context of danger", says Moita. "More broadly, our work sets the stage to further our understanding of how sensory stimuli and their behavioral relevance are encoded in the brain", she concludes.

Credit: 
Champalimaud Centre for the Unknown

Why visual perception is a decision process

According to the theory of predictive coding, the brain "explains away" prediction errors at early processing stages, so that they never reach perception. Neuroscientists at Ruhr-Universität Bochum (RUB), together with colleagues at the University of Freiburg, show that this is not strictly the case. Instead, they show that prediction errors can occasionally appear as visual illusions when viewing rapid image sequences. Thus, rather than being explained away, prediction errors in fact remain accessible at the final processing stages that form perception. Previous theories of predictive coding therefore need to be revised. The study was reported in PLOS ONE on 4 May 2020.

Our visual system starts making predictions within a few milliseconds

To fixate objects in the outside world, our eyes perform more than one hundred thousand rapid movements per day, called saccades. However, as soon as our eyes rest for about 100 milliseconds, the brain starts making predictions. Differences between previous and current image contents are then forwarded to subsequent processing stages as prediction errors. The advantage of dealing with differences instead of complete image information is obvious: similar to video compression techniques, the data volume is drastically reduced. Another advantage turns up, literally, only at second sight: statistically, there is a high probability that the next saccade lands on locations where the differences to previous image contents are largest. Thus, calculating potential changes of image content as differences to previous contents prepares the visual system early on for new input.
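The difference-forwarding strategy described above can be sketched with a toy example. The frames, their sizes, and the single-pixel change below are illustrative assumptions for the sake of the analogy, not data from the study:

```python
import numpy as np

# Illustrative sketch: why forwarding differences ("prediction errors")
# is cheaper than forwarding full images, much like delta encoding in
# video compression. All values here are made up for illustration.

rng = np.random.default_rng(0)
previous_frame = rng.integers(0, 256, size=(8, 8))  # image before the saccade
current_frame = previous_frame.copy()
current_frame[2, 3] += 10                           # a single pixel changed

# Full transmission would send every pixel; difference coding sends only
# the non-zero deltas between fixations.
delta = current_frame - previous_frame
changed = np.argwhere(delta != 0)

print("pixels in full frame:", current_frame.size)   # 64
print("non-zero differences:", len(changed))         # 1

# The largest difference also marks the statistically likely next saccade target.
target = tuple(int(i) for i in np.unravel_index(np.abs(delta).argmax(), delta.shape))
print("largest change at:", target)                  # (2, 3)
```

In this caricature, transmitting 1 delta instead of 64 pixels captures both claimed advantages: reduced data volume, and a ready-made pointer to where the next saccade is likely to land.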

To test whether the brain indeed uses such a strategy, the authors presented rapid sequences of two images to human volunteers. In the first image, two gratings were superimposed; in the second image, only one of the gratings was present. The task was to report the orientation of the last seen single grating. In most cases, the participants correctly reported the orientation of the grating that was present, as expected. Surprisingly, however, in some cases an orientation was perceived that was exactly orthogonal to the present one. That is, participants sometimes saw the difference between the previously superimposed gratings and the present single grating. "Seeing the difference instead of the real current input is a visual illusion that can be interpreted as directly seeing the prediction error," says Robert Staadt from the Institute of Neural Computation at the RUB, first author of the study.

Avoiding the pigeonhole benefits flexibility

"Within the framework of the predictive coding theory, prediction errors are mostly conceived in the context of higher cognitive functions that are coupled to conscious expectations. However, we demonstrate that prediction errors also play a role in the context of highly dynamic perceptual events that take place within fractions of a second," explains Dr. Dirk Jancke, head of the Optical Imaging Group at the Institute of Neural Computation. The present study reveals that the visual system simultaneously keeps up information about past, current, and possible future image contents. Such a strategy allows for both stability and flexibility when viewing rapid image sequences. "Altogether, our results support hypotheses that consider perception as a result of a decision process," says Jancke. Hence, prediction errors should not be sorted out too early, as they might become relevant for subsequent events.

Visual perception as a decision process

In future studies, the scientists will scrutinize the sets of parameters that drive the perceptual illusion most effectively. Besides straightforward physical parameters like stimulus duration, brightness, and contrast, other, more elusive factors characterizing psychological features might be involved. The authors' long-term goal is the development of practical visual tests for the early diagnosis of cognitive disorders connected to rapid perceptual decision processes.

Credit: 
Ruhr-University Bochum

Measuring methane from space

image: Methane bubbles form underneath ice in lakes in Alaska. These bubbles can be seen from space thus yielding information on methane emissions.

Image: 
Melanie Engram

The greenhouse gas methane is a crucial factor in climate change, both in the Arctic and globally. Yet methane emissions in the far north are difficult to measure across large regions. In the past, researchers usually scaled up selective point measurements to generate landscape-scale methane flux estimates. Now, a group of researchers from Alaska and Germany reports for the first time on remote sensing methods that can observe thousands of lakes and thus allow more precise estimates of methane emissions. The study, in which several researchers from the GFZ German Research Centre for Geosciences were involved, is published in the journal Nature Climate Change. According to the results, the total emissions estimated so far must be revised downwards.

Led by first author Melanie Engram (University of Alaska in Fairbanks, USA), the team investigated more than 5,000 lakes in Alaska using data from a radar satellite. Thus, the researchers estimated methane flux from the influence of gas bubbles on the ice-water interface on frozen lakes. They compared these flux estimates with numerous direct measurements on the ground and with the results of a joint airborne measurement campaign of the Alfred Wegener Institute (AWI) and GFZ. "Although the remote sensing method only recorded one of three possible emission mechanisms, namely release by rising methane bubbles (and not release by diffusion or transport through plants), this recorded path is often the most effective," comments co-author Torsten Sachs by e-mail from the research vessel "Polarstern", where he is currently conducting airborne measurements as part of the year-long MOSAiC drift expedition.

The satellite data and the models based on them show that the bottom-up estimates of methane emissions from lakes in the Arctic have so far been too high. Moreover, a number of other important findings were obtained for the determination of natural greenhouse gas emissions in the Arctic. While it was confirmed that small lakes emit more methane than large ones, the study also showed that low-emitting large lakes play an important role for regional flux estimates. Most methane escapes from so-called "thermokarst lakes", which form when ice-rich permafrost thaws, in the organic-rich sediments of interior Alaska.

Credit: 
GFZ GeoForschungsZentrum Potsdam, Helmholtz Centre

Lighting the path for cells

Highly complex organisms can arise from a single cell, which is one of the true miracles of nature. Substances known as morphogens have an important role in this development, namely by signalling to cells where they should go and what they should do. These signal molecules guide biological processes such as the formation of body axes or the wiring of the brain. To investigate such processes in more detail, researchers have to be able to position the signal molecules among living cells in three-dimensional space. This was made possible by a new method developed by Nicolas Broguiere and his colleagues in the research group headed by Marcy Zenobi-Wong. Their work is being published today in the journal Advanced Materials.

Drawing with light

"Our approach makes it possible to distribute bioactive molecules in a hydrogel with a high degree of precision," says Zenobi-Wong, Professor of Tissue Engineering and Biofabrication in the Department of Health Sciences and Technology at ETH Zürich. When living cells are encapsulated in the hydrogel, they can detect these biochemical signals. One such signal, nerve growth factor, determines the direction in which nerve fibres grow. In a method called two-photon patterning, the researchers used a laser to draw a 3D pattern of this molecule in the hydrogel.

"Wherever the light is focused in the material, it triggers a chemical reaction that anchors the nerve growth factor to the hydrogel," Broguiere explains. "We carefully optimised the design of the photosensitive hydrogel so that the signal molecules attach only in the areas exposed to the laser - and nowhere else." Their new approach can create "paintings" of morphogens with details one thousand times smaller than a millimetre - the size of a single nerve fibre. The researchers could then observe through a microscope how the neurons follow the mapped-out pattern. "With this new method, we can now guide neurons effectively in 3D, using their own biochemical language," Broguiere says.

When nerve fibres tear

Many biologists have long dreamt of instructing cells to grow in a particular direction. The new approach developed by the ETH research group brings them one step closer to fulfilling that dream. Zenobi-Wong and Broguiere believe their innovation also offers potential benefits for medicine - for example, if a nerve is severed during an accident, the reconnection happens haphazardly and full function is not restored. "I don't want to give the impression that we're ready to start treating patients with this method," Zenobi-Wong says, "but in the future, a refined version of our approach could help show neurons the right path directly in the body, thereby improving recovery from neural injuries."

Credit: 
ETH Zurich

Dock and harbor: A novel mechanism for controlling genes

image: Top: The PML body (a green spherical structure) and genomic DNA (yellow and brown balls).

Bottom: The image on the left depicts a graphical representation of ALaP, which facilitated isolation of the PML body-chromatin complex based on the light signals. The image on the right depicts the YS300-bound PML body preventing DNMT3A from accessing nearby genes.

Image: 
Kanazawa University

The genetic information within our cells is what makes humans unique. The cell nucleus has a complex structure that harbors this genetic information. The main component of the nucleus is chromatin, an intercalated pool of genes and proteins. Promyelocytic leukemia (PML) bodies are structures found closely associated with chromatin, suggesting that they may be involved in genetic function. However, the exact nature of the relationship between PML bodies and genes is unknown. Research conducted by a team at the National Institutes of Natural Sciences (NINS), led by Yusuke Miyanari, who recently joined NanoLSI at Kanazawa University, shows how PML bodies can modulate certain genes and the potential implications of these actions.

In order to visualize and track the exact location of PML bodies on the chromatin, the team developed a method known as APEX-mediated chromatin labeling and purification (ALaP). A fluorescent dye was first coupled with the PML bodies such that the light emitted by this dye could help trace the bodies. Subsequently, the PML body-chromatin complex could be isolated and the genes within it sequenced and identified.

This technique was tested in cells of mice and resulted in successful extraction of the complex without any structural damage. The chromatin region anchored to the PML bodies within this complex was then identified as YS300, a short area of the Y chromosome. What's more, a cluster of genes in the vicinity of YS300 was also found to be impacted--some genes were suppressed and some were activated. PML bodies were thus somehow controlling the activity of these neighboring genes, which prompted the team to try and understand how.

Gene activity is closely tied to DNA methylation, a chemical modification that typically silences genes. However, the state of the genes in the cluster suggested that some had been shielded from this process. A closer examination of the entire complex revealed that PML bodies were docked onto YS300 in a manner that prevented DNMT3A, a core regulator of DNA methylation, from accessing the nearby genes. PML bodies were therefore physically restricting DNMT3A, thereby protecting these genes from methylation-induced silencing.

"Our study sheds light on a newly discovered role of PML bodies in the regulation of the cluster genes and revealed a novel mechanism to regulate gene expression by 3D nuclear organization," summarize the researchers. PML bodies are heavily involved in nervous system development, stress responses, and cancer suppression. Their newly found role can help researchers understand whether and how gene regulation is involved in these cellular processes. Additionally, PML bodies could be exploited as a potential switch to control the activity of certain genes.

Credit: 
Kanazawa University

The economic gap also affects the consumption of screens by children

Mobile devices have been a mainstream presence, in great variety, in Spanish households for years, regardless of social and economic circumstances. Several studies focus on parental mediation of children's consumption of smart screens, but there is a lack of scientific evidence concerning how the level of training and the professional profile of mothers and fathers affect children's digital media consumption.

A study by Mònika Jiménez and Pilar Medina, researchers with the Department of Communication at UPF, and Mireia Muntaña, a researcher at the Department of Information Sciences at the UOC, analyses the influence of families' social and educational level on the consumption of smart screen contents. The study was published in the advanced online edition of the journal Comunicar on 15 April and the results were obtained in the framework of the project MediaCorp "Media representation of the unhealthy body image. Development of a prevention tool in children aged 5 to 8 years: I like my body", funded by the Ministry of Economy, Industry and Competitiveness.

"This study provides data on the role of parents in areas such as the construction of children's body image, or their involvement in children's media consumption patterns", says Mònika Jiménez, co-author of the study together with Pilar Medina, co-principal investigator of MediaCorp.

The research, published in Comunicar, examines parents' level of education as well as their professional category. It uses a quantitative methodology based on a sample of 792 children (363 boys and 429 girls) aged 5 to 9 years in three Spanish cities: 196 in Barcelona, 320 in Madrid, and 276 in Seville. It analyses their television, mobile phone, tablet, computer and video game consumption.

The results suggest that the lower the mother's level of studies and professional status, the greater the children's consumption of contents via mobile devices. "We would not like to criminalize mothers with limited financial resources, but the results show that the economic gap also affects the consumption of screens by children," said Pilar Medina, co-author of the research. The study highlights the importance of considering parents' educational and professional levels as an opportunity to better understand the consumption of smart screens and to design family strategies that foster critical thinking and digital media education.

Credit: 
Universitat Pompeu Fabra - Barcelona