Tech

Going big with genome editing in tiny worms

image: Because the tiny Caenorhabditis elegans worms have specialized cell types and developmental processes, they are excellent models for investigating human gene regulation processes.

Image: 
Rajewsky Lab, MDC

Understanding the effects of specific mutations in gene regulatory regions - the sections of DNA and RNA that turn genes on and off - is important for unraveling how the genome works, as well as how normal development and disease unfold. But studying a large variety of mutations in these regulatory regions in a systematic way is a monumental task. While progress has been made in cell lines and yeast, few studies have been done in live animals, especially in large populations.

Experimental and computational biologists at the Max Delbrück Center for Molecular Medicine in the Helmholtz Association (MDC) have teamed up to establish an approach to induce thousands of different mutations in up to 1 million microscopic worms and analyze the resulting effects on the worms' physical traits and functions.

"With cell lines, you are missing development processes, many cell-types, as well as interaction between cell types that all affect gene regulation," says Jonathan Froehlich, a Ph.D. candidate in MDC's Systems Biology of Gene Regulatory Elements Lab in the Berlin Institute for Medical Systems Biology (BIMSB) and co-first paper author. "We can now really test these regulatory sequences in the environment where they are important and observe the consequences on the organism."

Worms meet CRISPR-Cas9

The tiny Caenorhabditis elegans worms are an excellent proxy for investigating gene regulation processes in humans. "We are so similar to them," Froehlich says. "They have specialized tissues, they have muscles, they have nerves, they have skin, they have a gut, a reproductive system. For gene regulation studies, it's important that you have specialized cell types and development."

To efficiently induce a variety of mutations in a large C. elegans population, the researchers turned to the gene editing tool, CRISPR-Cas9. They identified up to 10 sections of DNA to be cut by the Cas9 enzyme, which is guided to those spots by RNA. But the researchers didn't send in any other instructions, leaving the organisms to repair the DNA breaks through natural mechanisms. This leads to a variety of mutations in the form of deletions or insertions of genetic code, which are called "indels."

Rolling the dice

Often in the realm of genome editing, scientists want to be very precise to see how one mutation will affect a system. Not so in this experimental setup, which aims to look at a variety of mutations all at once.

"One part is controlled, the part where we design the guide RNAs and tell the Cas9 nuclease where to go, but the outcome of this is semi-random," says Froehlich. "You will have many different types of outcomes and we can see what the effect is on the animal."

Notably, the researchers only needed to manipulate a parent generation of C. elegans. They added the Cas9 system to the parents; when the worms were exposed to heat for two hours the enzyme went to work cutting the DNA in reproductive germline cells. Then the hermaphrodite worms reproduced, resulting in thousands of offspring containing a variety of mutations. No need to modify the genomes of worms one by one.

crispr-DART

To identify the resulting mutations in hundreds of thousands of worms, the team used a wide variety of genomic sequencing techniques, producing a huge volume of data. To efficiently analyze it, they teamed up with MDC's Bioinformatics & Omics Data Science Platform.

Dr. Bora Uyar, a bioinformatics scientist, first looked for existing tools that could help answer the key questions: Was the Cas9 system activated? Were the targeted areas of DNA cut? Which sequences are important for genome function? "I tried the tools that existed and none were designed to address these problems with such a wide variety of data types and large number of mutations, and produce the interactive data visualizations we ultimately wanted," Uyar says.

So, he set to work designing a new software package, called crispr-DART - short for Downstream Analysis and Reporting Tool. It is an homage to the parallel editing approach, which is not 100% controlled and so doesn't always lead to mutations in the target areas. "That's why I call it crispr-DART, you are throwing some arrows in the genome and the tool tells you if you are actually successful or not," Uyar says.

The software, which is publicly available, can handle a variety of different sequencing data types - long reads, short reads, single-end and paired-end reads, DNA and RNA. The system quickly processes the samples, even as new types of information are added to the mix, helping identify interesting findings, such as the efficiency of the protocol and how mutations compare to controls.
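
As an illustration of the kind of bookkeeping such a tool performs, the sketch below estimates editing efficiency at a single target window by counting aligned reads that carry an insertion or deletion there. It is not crispr-DART itself; the BAM path, contig name and coordinates are placeholders, and the pysam library must be installed.

```python
# Minimal sketch (not crispr-DART): fraction of reads carrying an indel
# that overlaps a hypothetical Cas9 target window. Requires `pip install pysam`.
import pysam

BAM = "worms.sorted.bam"                          # placeholder alignment file
CONTIG, START, END = "chrII", 100_000, 100_050    # hypothetical cut-site window

def indel_overlaps_window(read, start, end):
    """Walk the CIGAR string and report whether an insertion or deletion
    falls inside [start, end) on the reference."""
    ref_pos = read.reference_start
    for op, length in read.cigartuples or []:
        if op == 1:                          # insertion: anchored at ref_pos
            if start <= ref_pos < end:
                return True
        elif op == 2:                        # deletion: consumes reference bases
            if ref_pos < end and ref_pos + length > start:
                return True
            ref_pos += length
        elif op in (0, 3, 7, 8):             # M, N, =, X consume reference bases
            ref_pos += length
    return False

with pysam.AlignmentFile(BAM, "rb") as bam:
    total = edited = 0
    for read in bam.fetch(CONTIG, START, END):
        if read.is_unmapped or read.is_secondary or read.is_supplementary:
            continue
        total += 1
        edited += indel_overlaps_window(read, START, END)

print(f"reads: {total}, with indel in window: {edited}, "
      f"efficiency ~ {edited / max(total, 1):.1%}")
```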

"crispr-DART follows the principles we use in our other pipelines, where reproducibility, usability and informative reporting are very important components," says Dr. Altuna Akalin, who heads MDC's Bioinformatics & Omics Data Science Platform.

Surprise finding

Using the new protocol, the team was able to connect several mutations in regulatory regions to specific physiological effects. They also made an unexpected finding. Two microRNA binding sites in a gene called lin-41 have long been thought to work together to control gene expression. With their parallel editing system, the team induced mutations in one or the other site, and then in both sites together. As long as one site was intact, the worms developed normally. But if both sites were mutated, gene expression continued unregulated, the worms did not develop normally and died.

"This demonstrates nicely how this system can be used to study gene regulation during development," says Professor Nikolaus Rajewsky, Scientific Director of MDC's Berlin Institute for Medical Systems Biology (BIMSB), who oversaw the project. "We look forward to applying this parallel genomic editing approach to more questions."

Credit: 
Max Delbrück Center for Molecular Medicine in the Helmholtz Association

Skoltech studies collective behavior of nanosatellites

image: A tetrahedral orbital formation of four CubeSats that exchange data and interpolate their measurements to build local maps of the geomagnetic field in real time.

Image: 
Skoltech

Scientists from the Skoltech Space Center (SSC) have developed nanosatellite interaction algorithms for scientific measurements using a tetrahedral orbital formation of CubeSats that exchange data and apply interpolation algorithms to create local maps of physical measurements in real time. The study presents an example of geomagnetic field measurement, which shows that these data can be used by other satellites for attitude control and, therefore, provided on a data-as-a-service basis. The research was published in the journal Advances in Space Research.

SSC is the research lead within the Nanosatellites Swarm project ("Roy MKA") performed by a consortium of several Russian universities and included in the ISS experimental program led by RSC Energia. "Roy MKA" aims to deploy autonomous groups of CubeSats and verify their swarm behavior.

For one of the "Roy MKA" experiments, SSC researchers suggested a tetrahedral formation, which provides an ability to measure the geomagnetic field at any point on orbit. The system is fully autonomous, which means that satellites can process and update measurement data on board and predict magnetic field values by interpolation.

"We use the Kriging interpolation which helps to select the magnetic field values in accordance with its characteristics (autocorrelation). Since the magnetic field is three-dimensional, we have to use a tetrahedron, the simplest three-dimensional simplex with three points on a plane. Thus, we have chosen a 4-satellite formation as the smallest possible configuration for the task. Our project may be the first to create such a configuration of nanosatellites," lead author and Skoltech PhD student Anton Afanasev explains.

The study demonstrated that the Kriging interpolator is a universal tool when it comes to processing data from small satellites. Satellites exchange data about their positions and measurements to create a self-organizing system capable of demonstrating collective behavior and performing tasks common for the constellation, which constitutes the key goal of the "Roy MKA" project.

"An important practical outcome of this study is that it can improve the performance of the attitude control and station-keeping systems that use magnetometers (magnetic field sensors). Notably, improved attitude control can also be used by other spacecraft that happen to be in close proximity to the constellation of the satellites that exchange magnetic field data and make them more accurate using the Kriging algorithms. Processing swarm measurements can evolve into a GPS-type service, which will enable distributing magnetic field values rather than object velocities and coordinates," Anton adds.

The new method can be used to build large constellations at a lower total cost by using less expensive sensors.

Credit: 
Skolkovo Institute of Science and Technology (Skoltech)

Study shows powered prosthetic ankles can restore a wide range of functions for amputees

image: This image shows the study participant performing a squatting activity with different prosthetic devices. When the participant used his daily prosthesis (left picture), he had a limited range of motion and had to bend his back significantly to lift objects off the ground. When using a neurally controlled prosthetic ankle (right picture), he was able to voluntarily control the prosthetic ankle's joint force and angle and keep a healthy posture while lifting weight from the ground.

Image: 
Aaron Fleming

A recent case study from North Carolina State University and the University of North Carolina at Chapel Hill demonstrates that, with training, neural control of a powered prosthetic ankle can restore a wide range of abilities, including standing on very challenging surfaces and squatting. The researchers are currently working with a larger group of study participants to see how broadly applicable the findings may be.

"This case study shows that it is possible to use these neural control technologies, in which devices respond to electrical signals from a patient's muscles, to help patients using robotic prosthetic ankles move more naturally and intuitively," says Helen Huang, corresponding author of the study. Huang is the Jackson Family Distinguished Professor in the Joint Department of Biomedical Engineering at NC State and UNC.

"This work demonstrates that these technologies can give patients the ability to do more than we previously thought possible," says Aaron Fleming, first author of the study and a Ph.D. candidate in the joint biomedical engineering department.

Most of the existing research on robotic prosthetic ankles has focused solely on walking using autonomous control. Autonomous control, in this context, means that while the person wearing the prosthesis decides whether to walk or stand still, the fine movements involved in those activities happen automatically - rather than because of anything the wearer is doing.

Huang, Fleming and their collaborators wanted to know what would happen if an amputee, working with a physical therapist, trained with a neurally controlled powered prosthetic ankle on activities that are challenging with typical prostheses. Would it be possible for amputees to regain a fuller range of control in the many daily motions that people make with their ankles in addition to walking?

The powered prosthesis in this study reads electrical signals from two residual calf muscles. Those calf muscles are responsible for controlling ankle motion. The prosthetic technology uses a control paradigm developed by the researchers to convert electrical signals from those muscles into commands that control the movement of the prosthesis.
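
The study's specific control paradigm is not spelled out here, but a common baseline for this kind of neural control is proportional myoelectric control: rectify and low-pass filter the EMG from each residual muscle to get an activation envelope, then map the difference between the two envelopes to an ankle torque command. The sketch below uses synthetic EMG and arbitrary gains; it is not the researchers' controller.

```python
# Minimal proportional-myoelectric sketch (not the study's controller):
# EMG from two residual calf muscles -> smoothed envelopes -> ankle torque.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                           # Hz, assumed EMG sampling rate
t = np.arange(0, 2.0, 1.0 / fs)

# Synthetic EMG: noise bursts standing in for plantarflexor / dorsiflexor activity
rng = np.random.default_rng(0)
emg_plantar = rng.normal(0, 1, t.size) * (0.2 + 0.8 * (t > 1.0))
emg_dorsi   = rng.normal(0, 1, t.size) * (0.8 - 0.6 * (t > 1.0))

def envelope(emg, cutoff=4.0):
    """Full-wave rectify, then 4 Hz low-pass to get a smooth activation level."""
    b, a = butter(2, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, np.abs(emg))

act_pf, act_df = envelope(emg_plantar), envelope(emg_dorsi)

GAIN = 40.0                            # N*m per unit activation, invented
torque_cmd = GAIN * (act_pf - act_df)  # + plantarflexion, - dorsiflexion
print(f"peak commanded torque: {torque_cmd.max():.1f} N*m")
```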

The researchers worked with a study participant who had lost one leg about halfway between the knee and the ankle. The participant was fitted with the powered prosthetic ankle and did an initial evaluation. The patient then had five training sessions with a physical therapist, each lasting about two hours, over the course of two-and-a-half weeks. After the training was completed, the participant did a second evaluation.

After training, the study participant was able to do a variety of tasks that had been difficult before, such as going from sitting to standing without any external assistance or squatting to pick something up off the ground without compensating for the movement with other body parts. But one of the most pronounced differences was the study participant's stability, whether standing or moving. This was reflected in both empirical evaluations - such as testing the patient's stability when standing on foam - and in the patient's level of confidence in his own stability.

"The concept of mimicking natural control of the ankle is very straightforward," Huang says. "But implementation of this concept is more complicated. It requires training people to use residual muscles to drive new prosthetic technologies. The results in this case study were dramatic. This is just one study, but it shows us what is feasible."

"There is also a profound emotional impact when people use powered prosthetic devices that are controlled by reading the electrical signals that their bodies are making," Fleming says. "It is much more similar to the way people move intuitively, and that can make a big difference in how people respond to using a prosthesis at all."

The researchers are already having more people go through the training paradigm and are expanding their testing to assess the results of that training.

Before making this technology more widely available, the researchers also want to engage in real-world beta testing - with people using the robotic prosthesis during their daily routines.

"As with any prosthetic device for lower limbs, you have to make sure the device is consistent and reliable, so that it doesn't fail when people are using it," Huang says.

"Powered prostheses that exist now are very expensive and are not covered by insurance," Fleming says. "So there are issues related to access to these technologies. By attempting to restore normal control of these type of activities, this technology stands to really improve quality of life and community participation for individuals with amputation. This would make these expensive devices more likely to be covered by insurance in the future if it means improving the overall health of the individual."

Credit: 
North Carolina State University

New research shows how immune response to TB differs in babies

The immune response to tuberculosis (TB) differs in adults and newborn babies due to the way immune cells use energy to kick into gear in a bid to kill the bacteria. This fresh discovery - just published in leading journal, Frontiers in Immunology - offers hope for improving treatments for what remains a deadly disease.

TB is still one of the biggest infectious killers in the world, and babies are more likely than adults to contract the infection and to have it spread outside of the lungs. Thanks to the work of scientists in Professor Joseph Keane's TB Immunology lab, based in the Trinity Translational Medicine Institute at St James's Hospital, we now have a better idea why.

Dr Cilian Ó Maoldomhnaigh, a paediatrician who undertook this research as part of a PhD project, explains:

"We found fundamental differences in how a baby's immune cells respond to TB compared to those of an adult and hope that this will eventually lead to new ways to target this infection.

"Babies' immune cells don't have the capacity to change their energy profile in the same way adults do after being exposed to bacteria. This shift in cellular energetics is crucial to mounting pro-inflammatory responses that defend the host against infection. We also found that babies produce less TNF, which is a key inflammatory mediator needed to fight TB."

"We would like to thank all of the parents who consented for us to take placental blood post-birth, from which we collected the baby immune cells, and to our funders, the National Children's Research Centre and the Royal City of Dublin Hospital Trust, for making this research possible."

The work also highlights that a special shift in the way cells use energy, called the Warburg effect, occurs in human immune cells.

Though this has been documented in animal models previously, it is the first time that it has been shown in human cells, and the timing and type of the energy shifts are different from those described in animal cells.

By understanding how the immune system uses energy, scientists and doctors are aiming to develop better treatments to help support the immune response to infection.

Senior author on the paper, Dr Sharee Basdeo, is excited about the impact this work may have. She said:

"Our work indicates that the function and energy profile of human immune cells in response to infection may change during childhood development, and with ageing.

"Understanding the differences in the profiles of key immune cells over a human lifetime will enable us to design immuno-supportive therapies to help protect vulnerable populations, such as the very old and the very young, against infections."

Credit: 
Trinity College Dublin

Atom interferometry demonstrated in space for the first time

image: An example of an interference pattern produced by the atom interferometer

Image: 
photo/©: Maike Lachmann, IQO

Extremely precise measurements are possible using atom interferometers that employ the wave character of atoms for this purpose. They can thus be used, for example, to measure the gravitational field of the Earth or to detect gravitational waves. A team of scientists from Germany has now managed to successfully perform atom interferometry in space for the first time - on board a sounding rocket. "We have established the technological basis for atom interferometry on board a sounding rocket and demonstrated that such experiments are not only possible on Earth, but also in space," said Professor Patrick Windpassinger of the Institute of Physics at Johannes Gutenberg University Mainz (JGU), whose team was involved in the investigation. The results of their analyses have been published in Nature Communications.

A team of researchers from various universities and research centers led by Leibniz University Hannover launched the MAIUS-1 mission in January 2017. This has since become the first rocket mission on which a Bose-Einstein condensate has been generated in space. This special state of matter occurs when atoms - in this case atoms of rubidium - are cooled to a temperature close to absolute zero, or minus 273 degrees Celsius. "For us, this ultracold ensemble represented a very promising starting point for atom interferometry," explained Windpassinger. Temperature is one of the determining factors, because measurements can be carried out more accurately and for longer periods at lower temperatures.

Atom interferometry: Generating atomic interference by spatial separation and subsequent superposition of atoms

During the experiments, the gas of rubidium atoms was split using laser light and subsequently superimposed again. Depending on the forces acting on the atoms along their different paths, different interference patterns are produced, which in turn can be used to measure the forces influencing the atoms, such as gravity.
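
For orientation, the phase accumulated by a standard Mach-Zehnder light-pulse atom interferometer under a uniform acceleration (the textbook expression, not necessarily the exact MAIUS pulse sequence) is

```latex
\Delta\phi = \vec{k}_{\mathrm{eff}} \cdot \vec{g}\, T^{2},
\qquad
P_{\mathrm{detect}} = \tfrac{1}{2}\left[1 + \cos\Delta\phi\right],
```

where \(\vec{k}_{\mathrm{eff}}\) is the effective wave vector of the beam-splitting light pulses, \(T\) is the time between pulses, and \(P_{\mathrm{detect}}\) is the fraction of atoms detected in one output port. The quadratic dependence on \(T\) is why the long free-fall times available in space are so attractive for precision measurements.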

Laying the groundwork for precision measurements

The study first demonstrated the coherence, or interference capability, of the Bose-Einstein condensate as a fundamentally required property of the atomic ensemble. To this end, the atoms in the interferometer were only partially superimposed by means of varying the light sequence, which, in the case of coherence, led to the generation of a spatial intensity modulation. The research team has thus demonstrated the viability of the concept, which may lead to further experiments targeting the measurement of the Earth's gravitational field, the detection of gravitational waves, and a test of Einstein's equivalence principle.

Even more measurements will be possible when MAIUS-2 and MAIUS-3 are launched

In the near future, the team wants to go further and investigate the feasibility of high-precision atom interferometry to test Einstein's principle of equivalence. Two more rocket launches, MAIUS-2 and MAIUS-3, are planned for 2022 and 2023, and on these missions the team also intends to use potassium atoms, in addition to rubidium atoms, to produce interference patterns. By comparing the free fall acceleration of the two types of atoms, a test of the equivalence principle with previously unattainable precision can be facilitated. "Undertaking this kind of experiment would be a future objective on satellites or the International Space Station ISS, possibly within BECCAL, the Bose Einstein Condensate and Cold Atom Laboratory, which is currently in the planning phase. In this case, the achievable accuracy would not be constrained by the limited free-fall time aboard a rocket," explained Dr. André Wenzlawski, a member of Windpassinger's research group at JGU, who is directly involved in the launch missions.

The experiment is one example of the highly active research field of quantum technologies, which also includes developments in the fields of quantum communication, quantum sensors, and quantum computing.

The MAIUS-1 sounding rocket mission was implemented as a joint project involving Leibniz University Hannover, the University of Bremen, Johannes Gutenberg University Mainz, Universität Hamburg, Humboldt-Universität zu Berlin, the Ferdinand-Braun-Institut in Berlin, and the German Aerospace Center (DLR). Financing for the project was arranged by the Space Administration of the German Aerospace Center and funds were provided by the German Federal Ministry for Economic Affairs and Energy on the basis of a resolution of the German Bundestag.

Credit: 
Johannes Gutenberg Universitaet Mainz

More exposure to political TV ads heightens anxiety

ITHACA, N.Y. - We've all seen them: political ads on television that promise doom and gloom if Candidate X is elected, and promise that all your problems will be solved if you choose Candidate Y. And Candidate Y, of course, approves this message.

Beyond attempting to move a large swath of the population to vote one way or another, the seemingly constant bombardment of negativity in the name of our democratic process is anxiety-inducing, researchers have found.

"Many of my friends and family members wind up quite stressed out, for lack of a better word, during each election season," said Jeff Niederdeppe, professor in the Department of Communication in the College of Agriculture and Life Sciences, "and I've seen this pattern repeat itself across the last several election cycles."

Niederdeppe is lead author of "Exposure to Televised Political Campaign Advertisements Aired in the United States 2015-2016 Election Cycle and Psychological Distress," published April 3 in Social Science & Medicine.

Also contributing were Rosemary Avery, professor of policy analysis and management in the College of Human Ecology, and Jiawei Liu, a postdoctoral researcher in the Department of Communication, along with colleagues from the University of Minnesota School of Public Health, Wesleyan University and the Johns Hopkins Bloomberg School of Public Health.

Their research, Niederdeppe said, uncovered evidence that exposure to one form of political messaging - televised campaign ads - was associated with increased odds of a person being diagnosed with anxiety by a doctor.

For the study, Niederdeppe's team purchased and conducted secondary analysis on two large national datasets:

Kantar/CMAG's database on TV airings for campaign ads appearing between Jan. 1, 2015 and Election Day 2016; and
data from five waves of the Simmons National Consumer Survey on TV viewing patterns and consumer behavior completed between Nov. 10, 2015, and March 7, 2017.
The latter survey, used to gauge consumer preferences, also included a detailed section on health ailments and engagement with doctors about those concerns.

The Simmons survey asked respondents, "Have you been told by a doctor or other health care professional in the past year that you have ..." followed by a series of conditions that respondents could check as applicable - anxiety, depression and insomnia, as well as a negative control condition (cancer), included to help confirm that any association with ad exposure was specific to mental health rather than a spurious artifact.

The study found a consistent positive association between the volume of campaign advertising exposure and reported diagnoses of anxiety among U.S. adults, suggesting that elections themselves may contribute to individual-level mental health issues.
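
The published analysis is more involved (survey weights, controls, negative-control outcomes), but the core estimate - the change in odds of an anxiety diagnosis per unit of ad exposure - can be sketched with a plain logistic regression. The data and variable names below are synthetic and invented, not the study's specification.

```python
# Synthetic-data sketch of the core model: odds of a reported anxiety diagnosis
# as a function of estimated campaign-ad exposure plus simple controls.
# Not the paper's actual specification. Requires pandas and statsmodels.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "ad_exposure": rng.gamma(2.0, 50.0, n),   # estimated ads seen, invented scale
    "age": rng.integers(18, 90, n),
    "tv_hours": rng.gamma(2.0, 1.5, n),       # general TV-viewing control
})
# Simulate a weak positive effect of exposure on diagnosis odds
logit_p = -2.5 + 0.004 * df.ad_exposure + 0.1 * df.tv_hours
df["anxiety_dx"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("anxiety_dx ~ ad_exposure + age + tv_hours", data=df).fit(disp=0)
print(np.exp(model.params["ad_exposure"]))    # odds ratio per additional ad
```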

One of the aims of the study was assessing whether associations between campaign ad exposure and mental health outcomes varied by political party of the respondent, the party of the candidate featured in the ad, or the office under consideration.

The nature of the 2016 presidential campaign made that particularly relevant, Niederdeppe said.

"We included that analysis [of the presidential vs. non-presidential ads] as a check, essentially, to say, 'Is this a Trump vs. Clinton effect?'" he said. "Donald Trump and Hillary Clinton had a uniquely divisive rhetoric and campaign and so one possible explanation would say, 'Hey, is this just an outlier, because it's 2016?'"

Political ads in general have gotten increasingly negative, Niederdeppe said, because research has shown that people pay attention to and remember those types of ads more than positive messages.

Political ads also tend to catch people "in environments where they're not looking for them - a commercial break embedded within other programming," he said. "You don't watch 'Jeopardy!' with the purpose of seeking out political information. But there it is."

Niederdeppe hopes this research can be part of a larger conversation on the effect of campaign advertising on public health.

"If these results play out in future election cycles, there's an immediate sort of public health readiness kind of element to this," he said. "If you know that these political ads are going to potentially increase the number of people who need treatment for anxiety, say, then you can prepare as a sort of public health infrastructure by offering by broadening treatment or having treatment plans ready for how you might deal with this particular form of anxiety."

He said it could also inform the existence and tone of campaign ads themselves.

"I would never say you have to ban all political ads," he said. "But should you consider the broader health implications of political messaging? When you're considering the right way to go about regulating this in the future, I think the answer is yes, that should be one factor."

Credit: 
Cornell University

Closer to human -- Mouse model more accurately reproduces fatty liver disease

Human non-alcoholic fatty liver disease (NAFLD) is a little-understood condition that significantly increases the risk of inflammation, fibrosis and liver cancer and can ultimately require a liver transplant.

"NAFLD has been difficult to study mainly because we had no good animal model," said corresponding author Dr. Karl-Dimiter Bissig, who was at Baylor during the development of this project and is now at Duke University.

The disease has both genetic and nutritional components, which have been hard to understand in human studies, and murine models until now had not accurately reflected typical characteristics of human livers with the disease.

Part mouse, part human

"Our goal was to have a mouse model that would allow us to study the disorder and test potential treatments," said co-first author Dr. Beatrice Bissig-Choisat, assistant professor at Duke University. "Applying our lab's yearslong expertise developing chimeric mouse models, those that combine both human and murine cells, we developed mice with livers that were part human and part murine."

The team fed a high-fat diet to the chimeric mice for 12 weeks, then they looked at the livers under the microscope and also studied their metabolic functions and gene expression, comparing them with those of normal mice and of humans with NAFLD.

"We were surprised by the striking differences we observed under the microscope," Bissig said. "In the same liver, the human liver cells were filled with fat, a typical characteristic of the human disease, while the mouse liver cells remained normal."

Next, the researchers analyzed the products of metabolism, in particular the metabolism of fats, of the human liver cells in the mouse model and identified signatures of clinical NAFLD.

"For instance, when mice that received human liver cells fed on a high-fat diet, they started to show features of cholesterol metabolism that looked more like what a patient shows than what other previous animal models showed," said co-first author Dr. Michele Alves-Bezerra, instructor of molecular physiology and biophysics at Baylor. "We made the same observation regarding genes that are regulated after the high-fat diet. All the analyses pointed at cholesterol metabolism being changed in this model in a way that closely replicates what we see in humans."

The researchers also investigated whether the gene expression profiles of the human liver cells in the chimeric model supported the microscopy and metabolic findings.

"We discovered that, compared to the normal mouse liver cells in our model, the fat-laden human liver cells had higher levels of gene transcripts for enzymes involved in cholesterol synthesis," said co-author Dr. Neil McKenna, associate professor of molecular and cellular biology and member of the Dan L Duncan Comprehensive Cancer Center at Baylor. "We wanted to see whether this was also the case in human NAFLD livers."

The team used the web-based platform called the Signaling Pathways Project to create a NAFLD consensome, which surveys previously published clinical studies to identify transcripts whose expression is consistently different between NAFLD livers and healthy ones.
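
The Signaling Pathways Project computes its consensomes with its own statistics, but the underlying idea - ranking transcripts by how consistently they move in the same direction across independent studies - can be illustrated with a simple cross-study aggregation. The study names, genes and fold-change values below are made up for illustration.

```python
# Toy cross-study consensus ranking (not the SPP consensome algorithm):
# rank transcripts within each study, then average ranks across studies so
# consistently up-regulated transcripts float to the top.
import pandas as pd

# log2 fold changes (NAFLD vs. healthy liver) from three hypothetical studies
fold_changes = pd.DataFrame(
    {
        "study_A": {"HMGCR": 1.2, "SQLE": 0.9, "ALB": -0.1, "CYP7A1": -0.8},
        "study_B": {"HMGCR": 0.8, "SQLE": 1.1, "ALB": 0.0, "CYP7A1": -0.5},
        "study_C": {"HMGCR": 1.0, "SQLE": 0.7, "ALB": 0.2, "CYP7A1": -0.9},
    }
)

ranks = fold_changes.rank(ascending=False)      # 1 = most up-regulated per study
consensus = ranks.mean(axis=1).sort_values()    # lower mean rank = more consistent
print(consensus)
# Cholesterol-synthesis genes (HMGCR, SQLE) end up at the top in this toy example.
```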

"Using the NAFLD consensome we discovered that, indeed, compared to normal livers, NAFLD livers have consistently higher levels of cholesterol synthesis enzyme transcripts," McKenna said. "This is additional confirmation of the clinical accuracy of our NAFLD model."

Together, the microscopy, metabolic and gene transcription evidence support that the chimeric model closely replicates clinical NAFLD. With this model, researchers have an opportunity to advance the understanding and treatment of this serious condition for which there is no effective therapy.

Not quite human

"Another important contribution of this work is that it clearly shows that human and murine cells can be quite different in their responses to factors such as diet, and we have to be careful when interpreting mouse studies of human conditions," Bissig said.

"Here we have a model in which human liver cells respond like in humans. We propose that this model can be used to better understand NAFLD and to identify effective therapies."

Credit: 
Baylor College of Medicine

Study cements age and location of hotly debated skull from early human Homo erectus

image: One of two new hominin specimens, a partial pelvis, found at the East Turkana site in Kenya.

Image: 
A. Hammond/© AMNH

A new study verifies the age and origin of one of the oldest specimens of Homo erectus--a very successful early human who roamed the world for nearly 2 million years. In doing so, the researchers also found two new specimens at the site--likely the earliest pieces of the Homo erectus skeleton yet discovered. Details are published today in the journal Nature Communications.

"Homo erectus is the first hominin that we know about that has a body plan more like our own and seemed to be on its way to being more human-like," said Ashley Hammond, an assistant curator in the American Museum of Natural History's Division of Anthropology and the lead author of the new study. "It had longer lower limbs than upper limbs, a torso shaped more like ours, a larger cranial capacity than earlier hominins, and is associated with a tool industry--it's a faster, smarter hominin than Australopithecus and earliest Homo."

In 1974, scientists at the East Turkana site in Kenya found one of the oldest pieces of evidence for H. erectus: a small skull fragment that dates to 1.9 million years. The East Turkana specimen is only surpassed in age by a 2-million-year-old skull specimen in South Africa. But there was pushback within the field, with some researchers arguing that the East Turkana specimen could have come from a younger fossil deposit and was possibly moved by water or wind to the spot where it was found. To pinpoint the locality, the researchers relied on archival materials and geological surveys.

"It was 100 percent detective work," said Dan Palcu, a geoscientist at the University of São Paulo and Utrecht University who coordinated the geological work. "Imagine the reinvestigation of a 'cold case' in a detective movie. We had to go through hundreds of pages from old reports and published research, reassessing the initial evidence and searching for new clues. We also had to use satellite data and aerial imagery to find out where the fossils were discovered, recreate the 'scene,' and place it in a larger context to find the right clues for determining the age of the fossils."

Although located in a different East Turkana collection area than initially reported, the skull specimen was found in a location with no evidence of a younger fossil outcrop from which it could have been washed in. This supports the original age assigned to the fossil.

Within 50 meters of this reconstructed location, the researchers found two new hominin specimens: a partial pelvis and a foot bone. Although the researchers say they could be from the same individual, there's no way to prove that after the fossils have been separated for so long. But they might be the earliest postcrania--"below the head"--specimens yet discovered for H. erectus.

The scientists also collected fossilized teeth from other kinds of vertebrates, mostly mammals, from the area. From the enamel, they collected and analyzed isotope data to paint a better picture of the environment in which the H. erectus individual lived.

"Our new carbon isotope data from fossil enamel tell us that the mammals found in association with the Homo fossils in the area were all grazing on grasses," said Kevin Uno, a paleoecologist at Columbia University's Lamont-Doherty Earth Observatory. "The enamel oxygen isotope data suggest it was a relatively arid habitat based on comparisons to other enamel data from this area."

The work suggests that this early H. erectus was found in a paleoenvironment that included primarily grazers that prefer open environments to forest areas and was near a stable body of water, as documented by freshwater sponges preserved in the rocks.

Key to the field work driving this study were the students and staff from the Koobi Fora Field School, which provides undergraduate and graduate students with on-the-ground experience in paleoanthropology. The school is run through a collaboration between The George Washington University and the National Museums of Kenya, with instructors from institutions around North America, Europe, and Africa. "This kind of renewed collaboration not only sheds new light on verifying the age and origin of Homo erectus but also promotes the National Museums of Kenya's heritage stewardship in research and training," said Emmanuel Ndiema, the head of archaeology at the National Museums of Kenya.

Credit: 
American Museum of Natural History

Why are there relatively few aftershocks for certain Cascadia earthquakes?

In the Cascadia subduction zone, medium and large-sized "intraslab" earthquakes, which take place at greater than crustal depths within the subducting plate, will likely produce only a few detectable aftershocks, according to a new study.

The findings could have implications for forecasting aftershock seismic hazard in the Pacific Northwest, say Joan Gomberg of the U.S. Geological Survey and Paul Bodin of the University of Washington in Seattle, in their paper published in the Bulletin of the Seismological Society of America.

Researchers now calculate aftershock forecasts in the region based in part on data from subduction zones around the world. But Cascadia intraslab earthquakes produce fewer aftershocks than their counterparts in other subduction zones: aftershock rates in Cascadia are less than half the global average, Gomberg and Bodin concluded.

They also suggest that aftershock rates for Cascadia earthquakes generally appear consistent with a "clock-advance" model, in which the mainshock causes tectonically loaded fault patches to slip earlier than they would have under the normal background seismicity of the region.
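
A back-of-the-envelope way to read the clock-advance idea: if aftershocks are mostly background events whose occurrence times are pulled earlier, then the count of earthquakes observed after a mainshock should be comparable to what the background rate alone would predict. The sketch below is a toy comparison with invented numbers, not the statistical test used in the paper.

```python
# Toy clock-advance comparison (not the paper's analysis): compare the count
# of events after a mainshock with the count expected from the regional
# background rate alone. All numbers are invented.
background_rate = 0.05      # M>=2.5 events per day in the source volume
window_days = 30            # aftershock counting window
observed_aftershocks = 2    # events detected in that window

expected_background = background_rate * window_days
ratio = observed_aftershocks / expected_background
print(f"expected from background: {expected_background:.1f}, "
      f"observed: {observed_aftershocks}, ratio: {ratio:.2f}")
# A ratio near 1 is consistent with a clock-advance picture, in which the
# mainshock mostly re-times events the background would have produced anyway.
```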

Gomberg and Bodin decided to study the phenomenon further after recent intraslab earthquakes in Mexico and Alaska produced robust aftershock sequences. "This was startling because the lore in Cascadia was that intraslab earthquakes had puny aftershock sequences," Gomberg explained, noting that in Cascadia three magnitude 6.5 to 6.8 intraslab earthquakes in 1949, 1965 and 2001 produced few to no aftershocks.

"Additionally, the USGS has begun to generate quantitatively estimated aftershock forecasts based initially on global patterns" she added, "and given these contrasting experiences, it seemed time to generate some objective numbers to base Cascadia's forecasts on."

The researchers analyzed earthquake catalogs produced by the Geological Survey of Canada and the Pacific Northwest Seismic Network from January 1985 to January 2018. Mainshocks that took place in the upper plate produced the most aftershocks, they found. Aftershock productivity was lowest for intraslab earthquakes in the Puget Lowlands portion of the subduction zone (which contains the Seattle metropolitan area), while aftershock rates were variable at the northern end of the zone near Vancouver Island and within the expected range for the southern end near Cape Mendocino.

The tectonic environment at each end of the subduction zone could help explain why aftershock production is higher there, the researchers said. Multiple plate boundaries meet in these areas, which could "concentrate stress, so more faults exist and are closer to failure than in other areas," they noted.

The reasons why Cascadia aftershock production is so low compared to global rates are still unclear, but "one strong possibility would seem to be that temperature for the deeper slab earthquakes is a dominant controlling parameter," said Bodin, noting that "the young, hot Juan de Fuca plate is being jammed beneath North America" in Cascadia.

The deeper the earthquake, the higher the temperatures, and the researchers did find that aftershock productivity decreases with depth, Bodin explained. "However, this is not so different than southern Mexico, where, as we noted, recent intraslab mainshocks have supported vigorous aftershock sequences."

Gomberg and Bodin said their analysis was limited by the fact that seismicity rates in Cascadia are generally low and there are sparse data to constrain the location and depth of most earthquakes in the region. Methods that help researchers detect and locate smaller earthquakes could provide a better sense of overall aftershock rates and the physical processes that control them, they suggested.

Credit: 
Seismological Society of America

Amoeba biology reveals potential treatment target for lung disease

image: Illustration of cilia and surface hydration among normal airway cells and those affected by cigarette smoke.

Image: 
Corinne Sandone, Johns Hopkins Medicine

In a series of experiments that began with amoebas -- single-celled organisms that extend podlike appendages to move around -- Johns Hopkins Medicine scientists say they have identified a genetic pathway that could be activated to help sweep out mucus from the lungs of people with chronic obstructive pulmonary disease, a widespread lung ailment.

"Physician-scientists and fundamental biologists worked together to understand a problem at the root of a major human illness, and the problem, as often happens, relates to the core biology of cells," says Doug Robinson, Ph.D., professor of cell biology, pharmacology and molecular sciences, medicine (pulmonary division), oncology, and chemical and biomedical engineering at the Johns Hopkins University School of Medicine.

Chronic obstructive pulmonary disease (COPD) is the fourth leading cause of death in the U.S., affecting more than 15 million adults, according to the U.S. Centers for Disease Control and Prevention. The disease causes the lungs to fill up with mucus and phlegm, and people with COPD experience chronic cough, wheezing and difficulty breathing. Cigarette smoking is the main cause in as many as three-quarters of COPD cases, and there is no cure or effective treatment available despite decades of research.

In a report on their new work, published Feb. 25 in the Journal of Cell Science, the researchers say they took a new approach to understanding the biology of the disorder by focusing on an organism with a much simpler biological structure than human cells to identify genes that might protect against the damaging chemicals in cigarette smoke.

Robinson and his collaborator, Ramana Sidhaye, M.D., also a professor of medicine in the Division of Pulmonology at Johns Hopkins, with their former lab member Corrine Kliment, M.D., Ph.D., counted on the knowledge that as species evolved, genetic pathways were frequently retained across the animal kingdom.

Enter the soil-dwelling amoeba Dictyostelium discoideum, which has long been studied to understand cell movement and communication. The scientists pumped lab-grade cigarette smoke through a tube and bubbled it into the liquid nutrients bathing the amoeba. Then, the scientists used engineered amoeba to identify genes that could provide protection against the smoke.

Looking at the genes that provided protection, creating "survivor" cells, one family of genes stood out among the rest: adenine nucleotide translocase (ANT). Proteins made by this group of genes are found in the membrane, or surface, of a cell's energy powerhouse structure, known as mitochondria. Typically, mitochondria help make the fuel that cells use to survive. When an ANT gene is highly active, cells get better at making fuel, protecting them from the smoke.

Kliment, Robinson and the team suspected they also help amoeba overcome the damaging effects of cigarette smoke.

To better understand how ANT genes behave in humans, the scientists studied tissue samples of cells lining the lungs taken from 28 people with COPD who were treated at the University of Pittsburgh and compared the lung cells' genetic activity with cells from 20 people with normal lung function.

The scientists found that COPD patients had about 20% less genetic expression of the ANT2 gene than those with normal lung function. They also found that mice exposed to smoke lose ANT2 gene expression.

Next, Robinson, Kliment and their research team sought to discover how ANT2 might provide protection from cigarette smoke chemicals and, in the process, discovered something completely unexpected.

Cells lining the lungs use fingerlike projections called cilia to sweep mucus and other particles out of the lungs. In mammals, including people, the scientists found that the ANT2 gene produces proteins that localize in and around the cilia that work to release tiny amounts of the cell's fuel into a watery substance next to the cell. The fuel enhances the ability of the cilia to "beat" rhythmically and regularly to sweep away the mucus.

"In COPD patients, mucus becomes too thick to sweep out of the lungs," says Robinson.

The Johns Hopkins Medicine team found that, compared with human lung cells with normal ANT2 function, cilia in human lung cells lacking ANT2 beat 35% less effectively when exposed to smoke. In addition, the watery liquid next to the cell was about half the height of normal cells, suggesting the liquid was denser, which can also contribute to lower beat rates.

When the scientists genetically engineered the lung cells to have an overactive ANT2 gene and exposed them to smoke, the cells' cilia beat with the same intensity as normal cells not exposed to smoke. The watery layer next to these cells was about 2.5 times taller than that of cells lacking ANT2.

"Cells are good at repurposing cellular processes across species, and in our experiments, we found that mammals have repurposed the ANT gene to help deliver cellular cues to build the appropriate hydration layer in airways," says Robinson. "Who would have thought that a mitochondrial protein could also live at the cell surface and be responsible for helping airway cilia beat and move?"

Robinson says that further research may yield discoveries to develop gene therapy or drugs to add ANT2 function back into lung-lining cells as a potential treatment for COPD.

Credit: 
Johns Hopkins Medicine

Researchers develop new method for putting quantum correlations to the test

Physicists from Swansea University are part of an international research collaboration which has identified a new technique for testing the quality of quantum correlations.

Quantum computers run their algorithms on large quantum systems of many parts, called qubits, by creating quantum correlations across all of them. It is important to verify that the actual computation procedures lead to quantum correlations of desired quality.

However, carrying out these checks is resource-intensive as the number of tests required grows exponentially with the number of qubits involved.

Researchers from the College of Science, working with colleagues from Spain and Germany, have now proposed a new technique that helps to overcome this problem by significantly reducing the number of measurements while increasing the resilience against noise.

Their method offers a solution to the problem of certifying correlations in large systems and is explained in a new paper which has just been published in PRX Quantum, a prestigious journal from the American Physical Society.

Research fellow Dr Farid Shahandeh, the lead scientist of this research, said: "To achieve this we combine two processes. Firstly, consider a juicer - it extracts the essence of the fruit by squeezing it into a small space. Similarly, in many cases quantum correlations in large systems can also be concentrated in smaller parts of the system. The 'squeezing' is done by measurements on the rest of the system called the localization process.

"Suppose the juicer directly converts the fruit into juice boxes without any labels. We don't know what is inside -- it could be apple juice, orange juice, or just water. One way to tell would be to open the box and taste it. The quantum comparison of this is to measure a suitable quantity that tells us whether quantum correlations exist within a system or not.

"This process is called witnessing and we call the combination of the two approaches conditional witnessing."

In their research the physicists prove that their method is efficient and generically tolerates higher levels of noise in experiments. To demonstrate its efficiency, they also compared their approach with previous techniques in a class of quantum processors that use ions.
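
As a toy illustration of "localize, then witness" (not the paper's construction), the sketch below takes a three-qubit GHZ state, measures the third qubit in the X basis to concentrate the correlations onto the first two qubits, and then evaluates a standard projector-based entanglement witness on the remaining pair; a negative expectation value certifies entanglement.

```python
# Toy 'conditional witnessing' sketch (not the PRX Quantum protocol):
# localize GHZ correlations onto two qubits by measuring the third qubit in
# the X basis, then evaluate the witness W = I/2 - |Phi+><Phi+| on the pair.
import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)

def kron(*ops):
    out = np.array([1.0])
    for op in ops:
        out = np.kron(out, op)
    return out

# |GHZ> = (|000> + |111>)/sqrt(2), stored as a (2, 2, 2) tensor
ghz = ((kron(ket0, ket0, ket0) + kron(ket1, ket1, ket1)) / np.sqrt(2)).reshape(2, 2, 2)

# Localization step: project qubit 3 onto |+> and renormalize the remaining pair
pair = np.tensordot(ghz, plus, axes=([2], [0])).reshape(4)
pair = pair / np.linalg.norm(pair)
rho_pair = np.outer(pair, pair.conj())

# Projector witness for the Bell state |Phi+> = (|00> + |11>)/sqrt(2)
phi_plus = (kron(ket0, ket0) + kron(ket1, ket1)) / np.sqrt(2)
W = 0.5 * np.eye(4) - np.outer(phi_plus, phi_plus.conj())

print(np.real(np.trace(W @ rho_pair)))   # -0.5 here: entanglement certified
```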

Dr Shahandeh, the recipient of a Royal Commission for the Exhibition of 1851 research fellowship, added: "This is of crucial importance in current technology where the addition of each qubit unavoidably amplifies the complexity of quantum states and experimental imperfections."

Credit: 
Swansea University

Study warns of 'oxygen false positives' in search for signs of life on other planets

image: By varying the initial inventory of volatile elements in a model of the geochemical evolution of rocky planets, researchers obtained a wide range of outcomes, including several scenarios in which a lifeless rocky planet around a sun-like star could evolve to have oxygen in its atmosphere.

Image: 
Illustration by J. Krissansen-Totton

In the search for life on other planets, the presence of oxygen in a planet's atmosphere is one potential sign of biological activity that might be detected by future telescopes. A new study, however, describes several scenarios in which a lifeless rocky planet around a sun-like star could evolve to have oxygen in its atmosphere.

The new findings, published April 13 in AGU Advances, highlight the need for next-generation telescopes that are capable of characterizing planetary environments and searching for multiple lines of evidence for life in addition to detecting oxygen.

"This is useful because it shows there are ways to get oxygen in the atmosphere without life, but there are other observations you can make to help distinguish these false positives from the real deal," said first author Joshua Krissansen-Totton, a Sagan Fellow in the Department of Astronomy and Astrophysics at UC Santa Cruz. "For each scenario, we try to say what your telescope would need to be able to do to distinguish this from biological oxygen."

In the coming decades, perhaps by the late 2030s, astronomers hope to have a telescope capable of taking images and spectra of potentially Earth-like planets around sun-like stars. Coauthor Jonathan Fortney, professor of astronomy and astrophysics and director of UCSC's Other Worlds Laboratory, said the idea would be to target planets similar enough to Earth that life might have emerged on them and characterize their atmospheres.

"There has a been a lot of discussion about whether detection of oxygen is 'enough' of a sign of life," he said. "This work really argues for needing to know the context of your detection. What other molecules are found in addition to oxygen, or not found, and what does that tell you about the planet's evolution?"

This means astronomers will want a telescope that is sensitive to a broad range of wavelengths in order to detect different types of molecules in a planet's atmosphere.

The researchers based their findings on a detailed, end-to-end computational model of the evolution of rocky planets, starting from their molten origins and extending through billions of years of cooling and geochemical cycling. By varying the initial inventory of volatile elements in their model planets, the researchers obtained a surprisingly wide range of outcomes.

Oxygen can start to build up in a planet's atmosphere when high-energy ultraviolet light splits water molecules in the upper atmosphere into hydrogen and oxygen. The lightweight hydrogen preferentially escapes into space, leaving the oxygen behind. Other processes can remove oxygen from the atmosphere. Carbon monoxide and hydrogen released by outgassing from molten rock, for example, will react with oxygen, and weathering of rock also mops up oxygen. These are just a few of the processes the researchers incorporated into their model of the geochemical evolution of a rocky planet.
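
The published model is a detailed, coupled interior-atmosphere calculation; the toy balance below only illustrates the bookkeeping described in this paragraph: oxygen is produced at the rate hydrogen escapes and consumed by outgassed reductants and crustal weathering. All rate constants and units are invented.

```python
# Toy atmospheric-oxygen budget (not the paper's model): source from hydrogen
# escape, sinks from outgassed reductants and crustal weathering.
# Units and rate constants are arbitrary and purely illustrative.
import numpy as np

def integrate_o2(h_escape, outgassing, weathering_coeff, years=4e9, dt=1e6):
    """Forward-Euler integration of dO2/dt = source - sinks (clamped at zero)."""
    steps = int(years / dt)
    o2 = 0.0
    history = np.empty(steps)
    for i in range(steps):
        source = h_escape                          # O2 left behind by H escape
        sinks = outgassing + weathering_coeff * o2
        o2 = max(o2 + (source - sinks) * dt, 0.0)
        history[i] = o2
    return history

# Earth-like volatile inventory: sinks keep pace and no O2 accumulates without life
earth_like = integrate_o2(h_escape=1e-6, outgassing=2e-6, weathering_coeff=1e-9)
# Deep-ocean case: geological sinks shut down, so abiotic O2 builds up
waterworld = integrate_o2(h_escape=1e-6, outgassing=0.0, weathering_coeff=0.0)

print(f"Earth-like (sinks active):      final O2 ~ {earth_like[-1]:.2g}")
print(f"Deep-ocean case (sinks shut off): final O2 ~ {waterworld[-1]:.2g}")
```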

"If you run the model for Earth, with what we think was the initial inventory of volatiles, you reliably get the same outcome every time--without life you don't get oxygen in the atmosphere," Krissansen-Totton said. "But we also found multiple scenarios where you can get oxygen without life."

For example, a planet that is otherwise like Earth but starts off with more water will end up with very deep oceans, putting immense pressure on the crust. This effectively shuts down geological activity, including all of the processes such as melting or weathering of rocks that would remove oxygen from the atmosphere.

In the opposite case, where the planet starts off with a relatively small amount of water, the magma surface of the initially molten planet can freeze quickly while the water remains in the atmosphere. This "steam atmosphere" puts enough water in the upper atmosphere to allow accumulation of oxygen as the water breaks up and hydrogen escapes.

"The typical sequence is that the magma surface solidifies simultaneously with water condensing out into oceans on the surface," Krissansen-Totton said. "On Earth, once water condensed on the surface, escape rates were low. But if you retain a steam atmosphere after the molten surface has solidified, there's a window of about a million years when oxygen can build up because there are high water concentrations in the upper atmosphere and no molten surface to consume the oxygen produced by hydrogen escape."

A third scenario that can lead to oxygen in the atmosphere involves a planet that is otherwise like Earth but starts off with a higher ratio of carbon dioxide to water. This leads to a runaway greenhouse effect, making it too hot for water to ever condense out of the atmosphere onto the surface of the planet.

"In this Venus-like scenario, all the volatiles start off in the atmosphere and few are left behind in the mantle to be outgassed and mop up oxygen," Krissansen-Totton said.

He noted that previous studies have focused on atmospheric processes, whereas the model used in this study explores the geochemical and thermal evolution of the planet's mantle and crust, as well as the interactions between the crust and atmosphere.

"It's not computationally intensive, but there are a lot of moving parts and interconnected processes," he said.

Credit: 
University of California - Santa Cruz

Giant electronic conductivity change driven by artificial switch of crystal dimensionality

image: The direct 3D-2D structural phase transition was induced in (Pb1-xSnx)Se alloy epitaxial films by using a nonequilibrium growth technique. A reversible giant electronic property change was attained at x ~ 0.5, originating from the abrupt band structure switch from a gapless Dirac-like state to a semiconducting state.

Image: 
Tokyo Tech

The electronic properties of solid materials are highly dependent on crystal structures and their dimensionalities (i.e., whether the crystals have predominantly 2D or 3D structures). As Professor Takayoshi Katase of Tokyo Institute of Technology notes, this fact has an important corollary: "If the crystal structure dimensionality can be switched reversibly in the same material, a drastic property change may be controllable." This insight led Prof. Katase and his research team at Tokyo Institute of Technology, in partnership with collaborators at Osaka University and National Institute for Materials Science, to embark on research into the possibility of switching the crystal structure dimensionality of a lead-tin-selenide alloy semiconductor. Their results appear in a paper published in a recent issue of the peer-reviewed journal Science Advances.

The lead-tin-selenide alloy (Pb1-xSnx)Se is an appropriate focus for such research because the lead ions (Pb2+) and tin ions (Sn2+) favor distinct crystal dimensionalities. Specifically, pure lead selenide (PbSe) has a 3D crystal structure, whereas pure tin selenide (SnSe) has a 2D crystal structure. SnSe has a bandgap of 1.1 eV, similar to the conventional semiconductor Si, while PbSe has a narrow bandgap of 0.3 eV and a carrier mobility an order of magnitude higher than that of SnSe. In particular, 3D (Pb1-xSnx)Se has attracted much attention as a topological insulator: substituting Sn for Pb in 3D PbSe reduces the band gap and finally produces a gapless Dirac-like state. Therefore, if the crystal structure dimensionality could be switched by an external stimulus such as temperature, it would lead to a giant functional phase transition - such as a large change in electronic conductivity or a topological state transition - enhanced by the distinct electronic structure changes.

Alloying PbSe and SnSe is a way to engineer this drastic structural transition, and such a (Pb1-xSnx)Se alloy should experience strong frustration around the phase boundary. However, there is no direct phase boundary between the 3D PbSe and the 2D SnSe phases under thermal equilibrium. Through their experiments, Prof. Katase and his research team successfully developed a method for growing nonequilibrium lead-tin-selenide alloy crystals with equal amounts of Pb2+ and Sn2+ ions (i.e., (Pb0.5Sn0.5)Se) that undergo a direct structural phase transition between 2D and 3D forms depending on temperature. At lower temperatures, the 2D crystal structure predominated, whereas at higher temperatures, the 3D structure predominated. The low-temperature 2D crystal was more resistant to electrical current than the high-temperature 3D crystal, and as the alloy was heated, its resistivity took a sharp dive around the temperatures at which the dimensionality phase transition occurred. This strategy of exploiting an artificial phase boundary opens a route to switching structural dimensionality, and with it functional properties, in semiconductors.

In sum, the research team developed a form of the semiconductor alloy (Pb1-xSnx)Se that undergoes temperature-dependent crystal dimensionality phase transitions, and these transitions have major implications for the alloy's electronic properties. When asked about the importance of his team's work, Prof. Katase notes that this form of the (Pb1-xSnx)Se alloy can "serve as a platform for fundamental scientific studies as well as the development of novel function in semiconductor technologies." This specialized alloy may, therefore, lead to exciting new semiconductor technologies with myriad benefits for humanity.

Credit: 
Tokyo Institute of Technology

Crop rotations with beans and peas offer more sustainable and nutritious food production

Growing more legumes, like beans and lentils, is potentially a more sustainable and nutritious approach to European agriculture, shows a new study in Frontiers in Sustainable Food Systems. This study presents some of the first holistic evidence that adding legumes to traditional crop rotations (typically including barley, wheat and rapeseed) offers significant environmental benefits as well as increased nutritional value for humans and livestock.

"This strategy can contribute significantly to the specific European Union Green Deal Farm to Fork objectives to reduce greenhouse gas emissions, chemical pesticide use and synthetic fertilizer use," says first author Marcela Porto Costa, of Bangor University in the UK. "For example, in Scotland, we've shown that the introduction of a legume crop into the typical rotation reduced external nitrogen requirements by almost half whilst maintaining the same output of food measured in terms of potential human nutrition."

A sustainable source of nitrogen

All crops need the critical nutrient nitrogen in order to grow and, for most crops, farmers must provide nitrogen via fertilizers. However, it has become increasingly clear that conventional fertilizers are not sustainable: they require significant energy to produce, they deplete finite resources and they pollute the surrounding environment.

The European Union Green Deal Farm to Fork strategy specifically aims to address this problem, with goals to cut greenhouse gas emissions and chemical pesticide use by 50%, and to reduce synthetic fertilizer use by 20%, by 2030. Legumes are among the only crops capable of getting all of the nitrogen they need simply from the air around them. This is thanks to a symbiotic partnership with bacteria that transform nitrogen in the air into a form that can be used by plants.

Not only do legume crops require no nitrogen fertilizer themselves, they also enrich the soil with nitrogen, reducing the need for nitrogen fertilizers for subsequent non-legume crops. From a nutritional perspective, legumes are also among the most nutrient-rich crops, providing protein, fiber, folate, iron, potassium, magnesium and vitamins.

Calculating nutritional delivery

Costa and her collaborators' new approach is more comprehensive than previous environmental footprint calculations because it compares 10 different crop sequences across 16 different impact categories. Their assessment also spans a timeframe of 3 to 5 years and three different European climates, in Italy, Romania and Scotland.

"Our innovative approach goes beyond simple food footprints by looking at the footprint of delivering a specific quantity of human, or livestock, nutrition from all crops produced within representative crop rotations," says Dr David Styles, who coordinated the study and is based at the University of Limerick in Ireland. "This provides a clearer picture of inter-crop effects and the overall efficiency of different cropping sequences in delivering nutritious food (or livestock feed)."

So far this approach only calculates the potential nutritional delivery. The amount of nutrition ultimately delivered by different rotations will also depend on how foods are processed and sold. Further research is also needed to develop better calculations for livestock feed. The team plans to extend this approach to other types of crop rotations and additional agricultural locations and climates.

"Our results strengthen evidence on the positive role that healthy diet transitions could make to environmental sustainability," says Styles. "Legumes provide a healthier balance of carbohydrates, protein and fiber compared with cereal crops, and could improve the nutritional profile of the food we eat."

"These results also highlight the need for whole-system (multi-crop, farm-to-fork) thinking when designing interventions to drive sustainable food systems so that we can deliver better nutrition whilst reducing environmental impacts," adds Costa.

Credit: 
Frontiers

Doctors still reluctant to prescribe medical cannabis: McMaster

image: Jason Busse, associate director of the Michael G. DeGroote Centre for Medicinal Cannabis Research at McMaster

Image: 
McMaster University

Hamilton, ON (April 13, 2021) - Twenty years after medical cannabis was first introduced, Ontario doctors are still hesitant to prescribe it to patients suffering long-term pain, says a new study carried out at McMaster University.

Physicians surveyed said their main concerns relate to possible ill effects and a lack of understanding of the drug's effectiveness as a painkiller.

Of particular concern among doctors were potentially harmful effects on cognitive development, a possible worsening of existing mental illnesses in patients and the drug's effects in older adults, which may include dizziness or drowsiness.

Meanwhile, the number of Canadians using medical cannabis has soared from just under 24,000 in June 2015 to 377,000 by September 2020.

"This paper is demonstrating that there is a real perceived need by family physicians that more evidence, education and guidance is needed, so they can better help patients who are asking about this treatment," said Jason Busse, associate director of the Michael G. DeGroote Centre for Medicinal Cannabis Research at McMaster.

Six of the 11 physicians surveyed also raised the issue of how legal recreational cannabis affected its medical counterpart, but 10 said therapeutic variants should remain an option. Recreational cannabis, which has a different formulation than medical cannabis, was legalized in Canada in October 2018.

The report says the "increased use of medical cannabis was likely the result of the easing of regulations, greater availability given the growing numbers of producers and cannabis clinics and reduced stigma around the use of cannabis for therapeutic purposes."

However, doctors are still hampered by a lack of proper guidance, while medical cannabis products have not undergone the same rigorous trials as other pharmaceutical drugs on the market, said Busse, associate professor of anesthesia.

In 2019, the Canadian Medical Association said that although cannabis may offer patients relief when conventional therapies fail, a lack of evidence surrounding the risks and benefits of its use makes it difficult for physicians to advise patients properly.

"When you have such widespread recreational and medical cannabis use, there is a real challenge for healthcare providers who are not trained in prescribing it," said Busse.

Researchers conducted telephone interviews with the doctors between January and October 2019 and published their findings in the Canadian Medical Association Journal Open.

Credit: 
McMaster University