
Saliva can be more effective than nasopharyngeal swabs for COVID-19 testing

image: Schematic overview of sample processing and severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) assay workflow, depicting the main steps. Matched nasopharyngeal swab (NPS) and saliva sample pairs collected in health care and community settings were tested and validated as follows. Top panel: NPS or saliva samples were processed with protocol U for nucleic acid extraction using a semi-automated instrument, followed by RT-PCR for the N and ORF1ab gene targets and an internal control (IC) serving as both extraction and RT-PCR control. Middle panel: Saliva samples were processed with the SalivaAll protocol, which included a saliva homogenization step using a bead mill homogenizer before RNA extraction and downstream processing. Bottom panel: Saliva samples were homogenized using a bead mill homogenizer (SalivaAll protocol) before pooling with a five-sample pooling strategy for SARS-CoV-2 testing.

Image: 
Nikhil S. Sahajpal

Philadelphia, June 10, 2021 - The collection of nasopharyngeal swab (NPS) samples for COVID-19 diagnostic testing poses challenges including exposure risk to healthcare workers and supply chain constraints. Saliva samples are easier to collect but can be mixed with mucus or blood, and some studies have found they produce less accurate results. A team of researchers has found that an innovative protocol that processes saliva samples with a bead mill homogenizer before real-time PCR (RT-PCR) testing results in higher sensitivity compared to NPS samples. Their protocol appears in The Journal of Molecular Diagnostics, published by Elsevier.

"Saliva as a sample type for COVID-19 testing was a game changer in our fight against the pandemic. It helped us with increased compliance from the population for testing along with decreased exposure risk to the healthcare workers during the collection process," said lead investigator Ravindra Kolhe, MD, PhD, Department of Pathology, Medical College of Georgia, Augusta University, Augusta, GA, USA.

The study included samples from a hospital and nursing home as well as from a drive-through testing site. In the first phase (protocol U), 240 matched NPS and saliva sample pairs were tested prospectively for SARS-CoV-2 RNA by RT-PCR. In the second phase of the study (SalivaAll), 189 matched pairs, including 85 that had been previously evaluated with protocol U, were processed in an Omni bead mill homogenizer before RT-PCR testing. An additional study processed NPS samples with both protocol U and SalivaAll to determine whether bead homogenization would affect clinical sensitivity in NPS samples. Finally, a five-sample pooling strategy was evaluated. Twenty positive pools containing one positive and four negative samples were processed with the Omni bead homogenizer before pooling for SARS-CoV-2 RT-PCR testing and compared to controls.

In Phase I, 28.3 percent of samples tested positive for SARS-CoV-2 from either NPS, saliva, or both. The detection rate was lower in saliva compared to NPS (50.0 percent vs. 89.7 percent). In Phase II, 50.2 percent of samples tested positive for SARS-CoV-2 from either saliva, NPS, or both. The detection rate was higher in saliva compared to NPS samples (97.8 percent vs. 78.9 percent). Of the 85 saliva samples tested with both protocols, the detection rate was 100 percent for samples tested with SalivaAll and 36.7 percent with protocol U.

Dr. Kolhe observed that the lower sensitivity of saliva in RT-PCR testing could be attributed to the gel-like consistency of saliva samples, which made it difficult to accurately pipet them into extraction plates for nucleic acid extraction. Adding the homogenization step gave the saliva samples a uniform viscosity and consistency, making them easier to pipet for the downstream assay.

Dr. Kolhe and his colleagues also successfully validated saliva samples in the five-sample pooling strategy. The pooled testing results demonstrated a positive agreement of 95 percent, and the negative agreement was found to be 100 percent. Pooled testing will be critical for SARS-CoV-2 mass surveillance as schools reopen, travel and tourism resume, and people return to offices.
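For readers who want to see how the pooled-testing agreement figures are calculated, the sketch below reproduces the arithmetic from the numbers quoted above (20 positive pools, of which 19 were detected). The number of negative pools used here is a placeholder, since it is not given in this article.

```python
# Minimal sketch: positive/negative percent agreement for pooled testing.
# Positive-pool counts follow the figures quoted above (20 pools, 19 detected);
# the number of negative pools (here 20) is a hypothetical value for illustration.

def percent_agreement(detected: int, total: int) -> float:
    """Return percent agreement with the expected result."""
    return 100.0 * detected / total

positive_pools_total = 20      # pools spiked with one positive + four negative samples
positive_pools_detected = 19   # pools reported positive after bead homogenization

negative_pools_total = 20      # hypothetical count, not reported in the text
negative_pools_detected = 20   # all reported negative

ppa = percent_agreement(positive_pools_detected, positive_pools_total)
npa = percent_agreement(negative_pools_detected, negative_pools_total)
print(f"Positive percent agreement: {ppa:.0f}%")  # 95%
print(f"Negative percent agreement: {npa:.0f}%")  # 100%
```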

"Monitoring SARS-CoV-2 will remain a public health need," Dr. Kolhe said. "The use of a non-invasive collection method and easily accessible sample such as saliva will enhance screening and surveillance activities and bypass the need for sterile swabs, expensive transport media, and exposure risk, and even the need for skilled healthcare workers for sample collection."

Credit: 
Elsevier

Ocean microplastics: First global view shows seasonal changes and sources

An estimated 8 million tons of plastic trash enters the ocean each year, and most of it is battered by sun and waves into microplastics--tiny flecks that can ride currents hundreds or thousands of miles from their point of entry.

The debris can harm sea life and marine ecosystems, and it's extremely difficult to track and clean up.

Now, University of Michigan researchers have developed a new way to spot ocean microplastics across the globe and track them over time, providing a day-by-day timeline of where they enter the water, how they move and where they tend to collect.

The approach relies on the Cyclone Global Navigation Satellite System, or CYGNSS, and can give a global view or zoom in on small areas for a high-resolution picture of microplastic releases from a single location.

The technique is a major improvement over current tracking methods, which rely mainly on spotty reports from plankton trawlers that net microplastics along with their catch.

"We're still early in the research process, but I hope this can be part of a fundamental change in how we track and manage microplastic pollution," said Chris Ruf, the Frederick Bartman Collegiate Professor of Climate and Space Science at U-M, principal investigator of CYGNSS and senior author on a newly published paper on the work.

Their initial observations are revealing.

Seasonal changes in the Great Pacific Garbage Patch

The team found that global microplastic concentrations tend to vary by season, peaking in the North Atlantic and Pacific during the Northern Hemisphere's summer months. June and July, for example, are the peak months for the Great Pacific Garbage Patch, a convergence zone in the North Pacific where microplastic collects in massive quantities.

Concentrations in the Southern Hemisphere peak during its summer months of January and February. Concentrations tend to be lower during the winter, likely due to a combination of stronger currents that break up microplastic plumes and increased vertical mixing that drives them further beneath the water's surface, researchers say.

The data also showed several brief spikes in microplastic concentration at the mouth of the Yangtze River--long suspected to be a chief source.

"It's one thing to suspect a source of microplastic pollution, but quite another to see it happening," Ruf said. "The microplastics data that has been available in the past has been so sparse, just brief snapshots that aren't repeatable."

The researchers produced visualizations that show microplastic concentrations around the globe. Often the areas of accumulation are due to prevailing local water currents and convergence zones, with the Pacific patch being the most extreme example.

"What makes the plumes from major river mouths noteworthy is that they are a source into the ocean, as opposed to places where the microplastics tend to accumulate," Ruf said.

Ruf says the information could help organizations that clean up microplastics deploy ships and other resources more efficiently. The researchers are already in talks with a Dutch cleanup organization, The Ocean Cleanup, on working together to validate the team's initial findings. Single-point release data may also be useful to the United Nations agency UNESCO, which has sponsored a task force to find new ways to track the release of microplastics into the world's waters.

Hurricane-tracking satellites set their sights on plastic pollution

Developed by Ruf and U-M undergraduate Madeline Evans, the tracking method uses existing data from CYGNSS, a system of eight microsatellites launched in 2016 to monitor weather near the heart of large storm systems and bolster predictions on their severity. Ruf leads the CYGNSS mission.

The key to the process is ocean surface roughness, which CYGNSS already measures using radar. The measurements have mainly been used to calculate wind speed near the eyes of hurricanes, but Ruf wondered whether they might have other uses as well.

"We'd been taking these radar measurements of surface roughness and using them to measure wind speed, and we knew that the presence of stuff in the water alters its responsiveness to the environment," Ruf said. "So I got the idea of doing the whole thing backward, using changes in responsiveness to predict the presence of stuff in the water."

Using independent wind speed measurements from NOAA, the team looked for places where the ocean seemed less rough than it should be given the wind speed. They then matched those areas up with actual observations from plankton trawlers and ocean current models that predict the migration of microplastic. They found a high correlation between the smoother areas and those with more microplastic.
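In essence, the retrieval compares the roughness CYGNSS actually measures with the roughness expected for the independently reported wind speed, and flags areas that are anomalously smooth. The sketch below illustrates that logic in a deliberately simplified form; the wind-to-roughness relation and the threshold are invented placeholders, not the calibrated CYGNSS relationships used by the U-M team.

```python
import numpy as np

def expected_roughness(wind_speed_ms: np.ndarray) -> np.ndarray:
    """Placeholder empirical model: surface roughness (mean-square slope)
    expected for a clean ocean at a given wind speed. The coefficients are
    illustrative, not the calibrated CYGNSS relationship."""
    return 0.003 + 0.0012 * wind_speed_ms

def microplastic_suspect(measured_mss: np.ndarray,
                         wind_speed_ms: np.ndarray,
                         threshold: float = 0.002) -> np.ndarray:
    """Flag grid cells where the surface is anomalously smooth for the wind,
    the signature attributed to surfactants that accompany microplastics."""
    anomaly = expected_roughness(wind_speed_ms) - measured_mss
    return anomaly > threshold

# Toy example: three grid cells with the same wind but different roughness.
wind = np.array([8.0, 8.0, 8.0])          # m/s, from independent NOAA data
mss = np.array([0.0126, 0.0095, 0.0124])  # measured mean-square slope
print(microplastic_suspect(mss, wind))    # [False  True False]
```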

Ruf's team believes the changes in ocean roughness may not be caused directly by the microplastics, but instead by surfactants--a family of oily or soapy compounds that lower the surface tension on a liquid's surface. Surfactants tend to accompany microplastics in the ocean, both because they're often released along with microplastics and because they travel and collect in similar ways once they're in the water.

Credit: 
University of Michigan

A new approach will help save X-ray studies from flawed results

image: Authors of the article.

Image: 
IKBFU

X-rays are widely used to study the structures of various objects. New X-ray sources, such as free-electron lasers and 4th-generation synchrotrons, are being built around the globe. The best optics for these new sources are usually made of single-crystal materials such as silicon, germanium or diamond. However, the ideal periodicity of crystals leads to unwanted diffraction losses - X-ray glitches. This effect causes dips, down to zero, in the intensity of the radiation transmitted through the optical element. Scientists from the Immanuel Kant Baltic Federal University, together with foreign colleagues, have developed a method that makes it possible not only to predict where glitches will appear but also to eliminate their influence on experiments.

The structure of a substance determines its properties, so the importance of materials science is beyond dispute. The most effective, non-destructive, and most actively developing methods today are X-ray methods, based on the interaction of X-ray radiation with matter. New X-ray sources (generation 4+) produce beams with immensely high brightness and a high degree of spatial coherence. To take full advantage of such beams, new optics capable of fully forming, focusing, and transporting the radiation without significant distortions and losses are required. Not every material is suitable: the features of its atomic structure and the presence of surface or volume inhomogeneities can significantly affect the outcome.

Single-crystal diamond has long been considered an ideal candidate material for manufacturing X-ray optics: it is mechanically and thermally stable, absorbs radiation only weakly, contains few impurities, and has a suitable crystal structure whose microstructure produces almost no parasitic scattering of the beam. The radiation can therefore be used by scientists without losses, and optics based on diamond have increased resolution and sensitivity. However, working with single-crystal diamond brings one issue - the effect of diffraction losses, or X-ray glitches. This is the name given to the "dips" in the intensity of radiation transmitted through the optical element. As the X-ray beam passes through the optical element, the Bragg condition (also known as the Wulff-Bragg condition) is fulfilled for certain wavelengths, and a part of the transmitted radiation is then diffracted in an undesirable direction. This condition can be met rather often, especially for "hard" radiation (with a small wavelength).
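For reference, the (Wulff-)Bragg condition mentioned above takes its standard textbook form, where d is the spacing of the diffracting lattice planes, θ the angle between the beam and those planes, λ the X-ray wavelength and n a positive integer:

```latex
n\lambda = 2d\sin\theta
```

For a fixed crystal orientation, the condition is satisfied only at discrete wavelengths, which is why glitches appear as narrow dips in the transmitted spectrum.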

Glitches are a nuisance in experiments in which the wavelength of the incident radiation is changed during the measurement, since the transmitted intensity can then jump from one wavelength to the next. Even worse, at a constant wavelength one can accidentally land on a glitch and carry out the entire experiment at a reduced or "floating" intensity. This effect therefore has to be taken into account in any such research.

"The glitch effect has been known in spectroscopy for a long time and brings some discomfort to researchers. In some cases, they try to ignore it. Otherwise, it is easier to discard the damaged part of the obtained data. However, glitches can appear in any experiment: if with small changes in the radiation intensity, it is possible to compensate for the negative effects by normalizing the transmitted intensity to the incident one, with a strong drop in the intensity, the signal under study can simply "drown" in the noise", says Nataliya Klimova, Research Associate at the IKBFU International Science and Research Center "Coherent X-ray Optics for Megascience facilities".

Scientists from the IKBFU and their colleagues from the Center for Free-Electron Laser Science (CFEL) and the European Synchrotron Radiation Facility (ESRF) have developed a method for accurately simulating and predicting glitches, as well as for getting rid of them. It can be applied to any single-crystal material. The proposed approach does not require complex calculations and can therefore be carried out right during an experiment. Before the experiment starts, only one measurement of the spectrum of radiation transmitted through the optical element is needed. From the data obtained, the developed program determines the exact orientation of the lens (or other optical element) and then calculates where in the spectrum glitches may appear. Moreover, the proposed algorithm allows specific glitches in the radiation spectrum to be suppressed. The authors confirmed their theoretical calculations experimentally. The developed program is publicly available and applicable to any X-ray source.
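The published algorithm itself is not reproduced in this article, but the underlying geometry can be illustrated with a short calculation: for a given crystal orientation, each reciprocal lattice vector removes intensity from the transmitted beam only at specific photon energies. The sketch below is a simplified, hypothetical version of that calculation (a simple-cubic reciprocal lattice, no structure-factor extinctions, an arbitrary orientation), not the authors' publicly available program.

```python
import numpy as np

HC_KEV_ANGSTROM = 12.398  # h*c in keV*Angstrom

def glitch_energies(orientation: np.ndarray,
                    lattice_a: float = 3.567,   # diamond lattice constant, Angstrom
                    hkl_max: int = 5,
                    e_range=(8.0, 25.0)):
    """Predict photon energies at which a Laue reflection 'steals' intensity
    from the transmitted beam (a glitch) for a crystal in a given orientation.
    Simplified sketch: a simple-cubic reciprocal lattice is assumed, so the
    diamond structure-factor extinctions are ignored."""
    beam = np.array([0.0, 0.0, 1.0])             # beam direction in the lab frame
    energies = []
    for h in range(-hkl_max, hkl_max + 1):
        for k in range(-hkl_max, hkl_max + 1):
            for l in range(-hkl_max, hkl_max + 1):
                if (h, k, l) == (0, 0, 0):
                    continue
                g_crystal = 2 * np.pi / lattice_a * np.array([h, k, l])
                g_lab = orientation @ g_crystal  # rotate into the lab frame
                proj = beam @ g_lab
                if proj >= 0:                    # Laue condition needs k.G < 0
                    continue
                wavelength = -4 * np.pi * proj / (g_lab @ g_lab)  # Angstrom
                energy = HC_KEV_ANGSTROM / wavelength             # keV
                if e_range[0] <= energy <= e_range[1]:
                    energies.append((energy, (h, k, l)))
    return sorted(energies)

# Example: crystal tilted 5 degrees about the lab x axis (arbitrary choice).
t = np.radians(5.0)
rot_x = np.array([[1, 0, 0],
                  [0, np.cos(t), -np.sin(t)],
                  [0, np.sin(t),  np.cos(t)]])
for energy, hkl in glitch_energies(rot_x)[:5]:
    print(f"{energy:6.2f} keV  glitch from reflection {hkl}")
```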

"The results in this article not only continue the study of the previously discovered effect of diffraction losses in single-crystal X-ray optics but offer a reliable way to deal with them under experimental conditions. It will increase the efficiency of refractive single-crystal optics and let tune the work at the beamlines of the 4th generation synchrotron radiation sources", says Anatoly Snigirev, director of the IKBFU International Science and Research Center "Coherent X-ray Optics for Megascience facilities".

Scientists continue to work on this topic and are planning additional applications of the discovered effects at modern X-ray sources. Corresponding articles will be published soon.

"This article is just the beginning. Once again, we made sure that correct data processing is not only necessary but also rewarding. A correct physical model allowed us to fully explain the experimental data and also to come up with excellent applications for, at first glance, negative effects. Thus, shortly, we'll publish even more exciting articles on this topic!" says Oleksandr Yefanov, Seniour Researcher at the German Research Center for Free-Electron Laser Science, DESY, Hamburg.

Credit: 
Immanuel Kant Baltic Federal University

Brain connections mean some people lack visual imagery

New research has revealed that people with the ability to visualise vividly have a stronger connection between their visual network and the regions of the brain linked to decision-making. The study also sheds light on memory and personality differences between those with strong visual imagery and those who cannot hold a picture in their mind's eye.

The research, from the University of Exeter, published in Cerebral Cortex Communications, casts new light on why an estimated one to three per cent of the population lack the ability to visualise. This phenomenon was named "aphantasia" by the University of Exeter's Professor Adam Zeman in 2015. Professor Zeman called those with highly developed visual imagery skills "hyperphantasics".

Funded by the Arts and Humanities Research Council, the study is the first systematic neuropsychological and brain imaging study of people with aphantasia and hyperphantasia. The team conducted fMRI scans on 24 people with aphantasia, 25 with hyperphantasia and a control group of 20 people with mid-range imagery vividness. They combined the imaging data with detailed cognitive and personality tests.

The scans revealed that people with hyperphantasia have a stronger connection between the visual network, which processes what we see and becomes active during visual imagery, and the prefrontal cortices, involved in decision-making and attention. These stronger connections were apparent in scans performed during rest, while participants were relaxing - and possibly mind-wandering.
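Resting-state functional connectivity of the kind described here is commonly estimated as the correlation between the average activity time courses of two brain networks. The sketch below shows that generic calculation on synthetic data; it illustrates the concept, not the Exeter team's analysis pipeline.

```python
import numpy as np

def functional_connectivity(ts_a: np.ndarray, ts_b: np.ndarray) -> float:
    """Pearson correlation between two regional mean time series,
    a common estimate of resting-state functional connectivity."""
    return float(np.corrcoef(ts_a, ts_b)[0, 1])

# Synthetic example: 200 resting-state time points for two networks.
rng = np.random.default_rng(0)
shared = rng.standard_normal(200)                      # common fluctuation
visual_network = shared + 0.5 * rng.standard_normal(200)
prefrontal = 0.8 * shared + 0.6 * rng.standard_normal(200)

print(f"visual-prefrontal connectivity: "
      f"{functional_connectivity(visual_network, prefrontal):.2f}")
```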

Despite equivalent scores on standard memory tests, Professor Zeman and the team found that people with hyperphantasia produce richer descriptions of imagined scenarios than controls, who in turn outperformed aphantasics. This also applied to autobiographical memory, or the ability to remember events that have taken place in the person's life. Aphantasics also had lower ability to recognise faces.

Personality tests revealed that aphantasics tended to be more introverted and hyperphantasics more open.

Professor Zeman said: "Our research indicates for the first time that a weaker connection between the parts of the brain responsible for vision and frontal regions involved in decision-making and attention leads to aphantasia. However, this shouldn't be viewed as a disadvantage - it's a different way of experiencing the world. Many aphantasics are extremely high-achieving, and we're now keen to explore whether the personality and memory differences we observed indicate contrasting ways of processing information, linked to visual imagery ability."

The study is entitled 'Behavioral and Neural Signatures of Visual Imagery Vividness Extremes: Aphantasia vs. Hyperphantasia' and is published in Cerebral Cortex Communications.

Credit: 
University of Exeter

Social media use one of four factors related to higher COVID-19 spread rates early on

video: Researchers from York University and the University of British Columbia have found social media use to be one of the factors related to the spread of COVID-19 within dozens of countries during the early stages of the pandemic.

Image: 
York University

TORONTO, June 9, 2021 - Researchers from York University and the University of British Columbia have found social media use to be one of the factors related to the spread of COVID-19 within dozens of countries during the early stages of the pandemic.

The researchers say this finding resembles other examples of social media misinformation ranging from the initial phase of vaccine rollout to the 2021 Capitol riot in the United States.

Countries with high levels of social media use leading to off-line political action, as surveyed before the pandemic by V-Dem (a database from the University of Gothenburg), showed the strongest trend toward a high R0 - an indicator of how many secondary infections one infected individual is likely to cause - and a faster initial spread of the virus. For example, Canada had a lower level of social media use leading to off-line action than the United States, as well as a lower R0. A set of multiple factors, including social media, could explain the different outcomes between the two countries, although the findings do not imply causation.
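For context on how R0 connects to the "faster initial spread" mentioned above, the simplest SIR epidemic model links the basic reproduction number to the early exponential growth rate r through the textbook relation below (β is the transmission rate, γ the recovery rate). This is purely illustrative and is not necessarily the estimator used in the study.

```latex
R_0 = \frac{\beta}{\gamma}, \qquad r = \beta - \gamma
\quad\Longrightarrow\quad R_0 = 1 + \frac{r}{\gamma}
```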

"What we found was surprising, that the use of social media to organize off-line action tended to be associated with a higher spread rate of COVID-19. This highlights the need to consider the dynamic role that social media plays in epidemics," says Assistant Professor Jude Kong of York University's Faculty of Science, who led the research with University of British Columbia Postdoctoral Fellow Edward Tekwa.

Watch video: https://youtu.be/7ICj1CPX6So

The research team examined national level demographic, disease, economic, habitat, health, social and environmental characteristics that existed before the pandemic across 58 countries, including Ghana, Canada and the United States. They broke those characteristics down into covariates and analyzed which ones had the strongest associations with vulnerability to the virus before government interventions were put in place.

"The world has changed to modify R0. Social media, for example, could help rather than hurt now that we have more reliable information to pass around. But some of the factors identified in our research have not changed and could be informative for the current and future pandemics," says Tekwa.

Kong and Tekwa found that an intermediate proportion of youth (between the ages of 20 and 34), an intermediate Gini inequality index (the amount of income inequality across a population), and a population that primarily lives in cities of more than one million people were the three additional factors with the strongest relationship to the rate of spread.

"We found that with a lower youth population, the spreading was very low, while a country with an intermediate level of youth population had the highest rate of spreading of COVID-19," says Kong of the Department of Mathematics & Statistics. "Interestingly, we found that as the youth population increases, it was associated with a lower number of cases, rather than a higher number."

Pollution, temperature, and humidity did not have a strong relationship with R0. The overall goal was to find baseline epidemiological differences across countries, shape future COVID-19 research, and better understand infectious disease transmission.

What's Next?

"Different countries have different characteristics that predispose them to greater vulnerability," says Kong. "When we are looking to compare COVID-19 progression among countries, we need to take into account those pre-existing country characteristics. The reason being is that if you just do a simple analysis the result will be misleading."

Understanding the initial phase will help account for pre-existing, intrinsic differences, as regions try to identify their own best management strategy going forward. Kong says they are already using this data to inform policymakers in Africa about which communities are most vulnerable.

The paper was published today in the journal PLOS ONE.

Credit: 
York University

Nearly 1 in 5 patients who die from unexplained sudden cardiac death have suspicious gene

As many as 450,000 Americans die every year from a sudden, fatal heart condition, and in slightly more than one in ten cases the cause remains unexplained even after an autopsy. Researchers from the University of Maryland School of Medicine (UMSOM) and their colleagues found that nearly 20 percent of patients with unexplained sudden cardiac death - most of whom were under age 50 - carried rare genetic variants. These variants likely raised their risk of sudden cardiac death. In some cases, their deaths may have been prevented if their doctors had known about their genetic predisposition to heart disease. The study findings were published last week in JAMA Cardiology.

"Genetic screening isn't routinely used in cardiology, and far too many patients still die suddenly from a heart condition without having any previously established risk factors. We need to do more for them," said study corresponding author Aloke Finn, MD, Clinical Associate Professor of Medicine at UMSOM.

To conduct the study, Dr. Finn and his colleagues performed genetic sequencing on 413 patients who had died, at an average age of 41, of sudden unexplained cardiac death. Nearly two-thirds of the group were men, and about half were African American. The study found that 18 percent of the patients who experienced sudden death carried previously undetected genetic variants associated with life-threatening arrhythmia or heart failure conditions. None of those who carried these variants had been previously diagnosed with these abnormalities. Their hearts looked normal on autopsy, without any signs of heart failure or significant blockages in their coronary arteries.

"What we found opens the door and asks some important questions," said Dr. Finn. "Should we be doing routine genetic screening in those who have a family history of unexplained sudden cardiac death?"

Such screening could have the potential to save lives. It may also leave patients and doctors in a quandary over what to do with such information. There are currently no clear guidelines on how to monitor or treat patients with these variants in the absence of clinically detectable disease.

Study faculty co-authors from UMSOM include Kristen Maloney, MS, Instructor of Medicine, Libin Wang, BM PhD, Assistant Professor of Medicine, Susie Hong, MD, Assistant Professor of Medicine, Anuj Gupta, MD, Associate Professor of Medicine, Linda Jeng, MD, PhD, Clinical Associate Professor of Medicine, Braxton Mitchell, PhD, Professor of Medicine, and Charles Hong, MD, PhD, the Melvin Sharoky, MD, Professor of Medicine.

"This is a fascinating study that provides important new insights into devastating deaths due to unexplained cardiac abnormalities," said E. Albert Reece, MD, PhD, MBA, Executive Vice President for Medical Affairs, UM Baltimore, and the John Z. and Akiko K. Bowers Distinguished Professor and Dean, University of Maryland School of Medicine. "It certainly makes the case for more research to address this urgent health need and save lives in the future."

Credit: 
University of Maryland School of Medicine

GEM simplifies the internal structure of protons and their collisions

image: When a proton collides with a proton, the gluon emitted by one of the valence quarks can interact with a virtual quark from the quark-antiquark pair inside the other proton. According to the GEM model, the result of such an interaction will be a fast proton with an intact structure of valence quarks, and other particles created in processes taking place in the interaction region (outlined in white).

Image: 
Source: IFJ PAN / Dual Color

Inside each proton or neutron there are three quarks bound by gluons. Until now, it has often been assumed that two of them form a "stable" pair known as a diquark. It seems, however, that it's the end of the road for the diquarks in physics. This is one of the conclusions of the new model of proton-proton or proton-nucleus collisions, which takes into account the interactions of gluons with the sea of virtual quarks and antiquarks.

In physics, the emergence of a new theoretical model often augurs badly for old concepts. This is also the case with the description of collisions of protons with protons or atomic nuclei, proposed by scientists from the Institute of Nuclear Physics of the Polish Academy of Sciences (IFJ PAN) in Cracow. In the latest model, a significant role is played by interactions of gluons emitted by one proton with the sea of virtual quarks and antiquarks, appearing and disappearing inside another proton or neutron.

Gluons are carriers of the strong force, one of the four fundamental forces of nature. This binds quarks into composite structures, such as protons or neutrons. In many respects, the strong force differs from the others. For example, it does not weaken, but grows with the distance between the particles. Moreover, unlike photons, gluons carry a specific kind of charge (picturesquely known as colour) and can interact with each other.

The majority of nuclear reactions - including the bulk of collisions of protons with protons or atomic nuclei - are processes in which particles only "brush against" each other by exchanging gluons. Collisions of this type are called soft by physicists and cause them quite some trouble, since the theory describing them is incalculable from first principles. Thus, by necessity, all today's models of soft processes are more or less phenomenological.

"In the beginning, we only wanted to see how the existing tool, known as the Dual Parton Model, handles more precise experimental data on proton-proton and proton-carbon nucleus collisions," recalls Prof. Marek Jezabek (IFJ PAN). "It rapidly turned out that it was not coping well. So, we decided, on the basis of the old model which has been under development for over four decades, to try to create something which was on the one hand more precise, and on the other - closer to the nature of the described phenomena."

The Gluon Exchange Model (GEM) built at IFJ PAN is also phenomenological. However, it is not based on analogies to other physical phenomena, but directly on the existence of quarks and gluons and their fundamental properties. Moreover, GEM takes into account the existence in protons and neutrons of not only the triplet of main (valence) quarks, but also the sea of constantly arising and annihilating pairs of virtual quarks and antiquarks. In addition, it takes into account the limitations resulting from the principle of baryon number conservation. In simplified terms, this says that the number of baryons (i.e. protons and neutrons) existing before and after the interaction must remain unchanged. As each quark carries its own baryon number (equal to 1/3), this principle makes it possible to draw more reliable conclusions about what is happening with the quarks and the gluons exchanged between them.
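The baryon-number bookkeeping mentioned above can be written compactly: counting the quarks and antiquarks in any state,

```latex
B = \frac{n_q - n_{\bar{q}}}{3}
```

and strong interactions, including the gluon exchanges considered in GEM, leave B unchanged.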

"GEM has allowed us to explore new scenarios of the course of events involving protons and neutrons," stresses Dr. Andrzej Rybicki (IFJ PAN) and goes into more detail: "Let's imagine, for example, that in the course of a soft proton-proton collision, one of protons emits a gluon, which hits the other proton - not its valence quark but a quark from the virtual sea that exists for a fraction of a moment. When such a gluon is absorbed, the sea quark and antiquark forming a pair cease to be virtual and materialize into other particles in specific final states. Note that in this scenario new particles are formed despite the fact that the valence quarks of one of the protons have remained untouched."

The Cracow gluon model leads to interesting insights, two of which are particularly noteworthy. The first concerns the origin of diffractive protons, observed in proton-proton collisions. These are fast protons that come out of the collision site at small angles. Until now, it was believed that they could not be produced by colour change processes and that some other physical mechanism was responsible for their production. Now it turns out that the presence of diffractive protons can be explained by the interaction of the gluon emitted by one proton with the sea quarks of another proton.

Another observation is no less interesting. Earlier, when describing soft collisions, it was assumed that two of the three valence quarks of a proton or a neutron are bound together so that they form a "molecule" called a diquark. The existence of the diquark was a hypothesis that not all physicists would vouch for indiscriminately, but the concept was widely used - something that is now likely to change. The GEM model was confronted with experimental data describing a situation in which a proton collides with a carbon nucleus and interacts with two or more protons/neutrons along the way. It turned out that in order to be consistent with the measurements, under the new model in at least half the cases the disintegration of the diquark must be assumed.

"Thus, there are many indications that the diquark in a proton or neutron is not a strongly bound object. It may be that the diquark exists only effectively, as a random configuration of two quarks forming a so-called colour antitriplet - and whenever it can, it immediately disintegrates," says Dr. Rybicki.

The Cracow model of gluon exchange explains a wider class of phenomena in a simpler and more coherent way than the existing tools for description of soft collisions. The current results, presented in an article published in Physics Letters B, have interesting implications for matter-antimatter annihilation phenomena, in which an antiproton could annihilate on more than one proton/neutron in the atomic nucleus. Therefore, the authors have already formulated first, preliminary proposals to perform new measurements at CERN with an antiproton beam.

Credit: 
The Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of Sciences

Research shows decline in collisions and convictions connected to increase in ridesharing

image: Christopher Conner, MD, PhD's new research showed a direct connection between an increased use of ridesharing apps and a decrease in motor vehicle collisions and impaired driving convictions.

Image: 
Rogelio Castro/UTHealth

The increased use of ridesharing apps was linked to a decrease in motor vehicle collisions and impaired driving convictions in Houston, according to published research by The University of Texas Health Science Center at Houston (UTHealth).

The findings were published today in JAMA Surgery.

Christopher Conner, MD, PhD, neurosurgery resident in the Vivian L. Smith Department of Neurosurgery at McGovern Medical School at UTHealth and the study's lead author, said the research is timely as more individuals are utilizing ridesharing apps.

"Automobile accidents are the leading cause of death and disability among young people, so anything we can do to reduce those incidents is going to have a massive effect," he said.

For the study, researchers asked rideshare app companies that were in Houston as early as 2014, to supply their utilization rates. Uber responded, submitting data from 2014, when they first started service in Houston, through 2018.

Researchers also collected data from the Red Duke Trauma Institute at Memorial Hermann Hospital-Texas Medical Center and Harris Health Ben Taub Hospital in Houston comparing rates of patients admitted for injuries sustained in a motor vehicle accident from 2007-2013 and 2014-2018. Memorial Hermann-TMC and Ben Taub are the only American College of Surgeons Level 1 trauma centers in Houston. All patients admitted as a result of a motor vehicle accident were included in the data set.

Data was also collected on impaired driving convictions from the Harris County District Attorney's Office from 2007-2019, limited to cases resulting in a conviction or probation.

The study found that rideshare volume had a significant correlation with the incidence of motor vehicle-related trauma, with a reduction in the rate of incidence by one-third for every 1,000 rides. The rate continued to drop as more rides occurred. The age group with the most significant decrease in motor vehicle-related trauma was those under the age of 30, with a reduction rate of almost 39%.

Impaired driving convictions also reduced in the years following the introduction of Uber into Houston. Before 2014, there were an average of 22.5 impaired driving convictions in Houston daily. After 2014, impaired driving convictions decreased to an average of 19 per day.

"I think this was the biggest takeaway from the study. The data shows that ridesharing companies can decrease these incidents because they give young people an alternative to driving drunk," Conner said, adding that he hoped the results will allow people to see that anyone can be affected by a motor vehicle collision, but that they do have another option that has been proven to reduce their risk of injury, death, or impaired driving conviction.

The greatest number of motor vehicle collisions occurred on Friday and Saturday nights between 9 p.m. and 3 a.m. Comparing the data from before and after 2014 revealed an almost 24% decrease in motor vehicle collision traumas and the number of impaired driving convictions during those hours.

Conner is hopeful this study will open the door to further trauma research. "It is an area that has been really understudied," he said.

Credit: 
University of Texas Health Science Center at Houston

Breakthrough study shows defining traits are forged the moment we're born

There are still many unsolved mysteries about the human brain and its development. Now, a novel study published in Frontiers in Psychiatry sheds new light on the neurobiological origins of our individual traits.

Functional connectivity is the coordinated activity - activation or deactivation - through time between separate brain regions, regardless of their physical closeness or the type of neural connections between them. Changes in functional connectivity can be a sign of mental health disorders such as depression, eating disorders, and schizophrenia, and are thought to have developmental origins.

We know that poor mental health is characterized by changes in three functional brain networks. The first is hypoconnectivity within the frontoparietal network (FPN), which is involved in the cognitive control of emotion and attention. The second is hyperconnectivity within the default mode network (DMN), which is involved in social cognition and mind wandering. And finally, there is hypoconnectivity within the homologous-interhemispheric network (HIN), which is implicated in the regulation of emotions.

Researching infant brains

The researchers focused on two questions. The first was to identify and map individual variability in the three defined functional brain networks (FPN, DMN, and HIN) in newborn and one-month-old infants. For this, the researchers used functional near-infrared spectroscopy (fNIRS), which uses a head cap to measure brain activity.

They then looked at how variability in functional connectivity can predict individual differences in infant temperament. Infant temperament refers to an infant's innate personality, which is present from birth. The researchers focused on three important dimensions of infant temperament: regulation or orienting (measured by cuddliness, soothability, and low-intensity pleasure), negative emotionality (fear, sadness, and distress to limitations), and positive emotionality (laughing/smiling, activity level, and vocal reactivity). The researchers asked the parents to fill in a questionnaire about the temperament of their children.

The findings show, for the first time, that functional brain networks that impact our behavior develop within the first month of a person's life. More specifically, the researchers could determine functional connectivity in the three studied cortical brain networks in young infants and found that these networks differed noticeably among each child.

A first-of-its-kind study

This means that the neural connections in our brains that determine human behavioral traits are already present from birth and are unique to each individual. "Our main findings show that soon after birth, greater connectivity between frontal and parietal brain regions is linked to improved behavioral regulation in human infants. To our knowledge, this is the first study to demonstrate that connectivity for this specific brain network develops early in human infancy and plays a role in accounting for individual differences in emerging self-regulation and control skills among infants," says co-author Dr Toby Grossmann, of the University of Virginia and the Max Planck Institute for Human Cognitive and Brain Sciences.

These findings call for further research to develop a deeper understanding of the role of functional brain connectivity in early human cognitive, emotional, and social development, and specifically, research into psychiatric disorders. "There is a whole host of psychiatric disorders that have been associated with differences in functional connectivity in the brain networks examined in young infants in our study. Previous research links more extreme individual differences in these networks - studied here in a group of typically developing infants - to adults suffering from major depression. But it remains an open question whether the demonstrated link between brain and behavior in early infancy is predictive of long-term developmental outcomes including psychiatric diseases. It is important to carry out large-scale longitudinal neurodevelopmental studies to address the question of whether the demonstrated brain-behavior correlation is of psychiatric relevance and clinical significance."

Credit: 
Frontiers

Tree diversity may save the forest: Advocating for biodiversity to mitigate climate change

image: There is much emphasis on the undesirable feedbacks where climate change drives biodiversity loss (magenta arrows feedback). Here, we highlight the contribution of an underutilized positive feedback in which biodiversity-dependent productivity could contribute to climate change mitigation (green arrows feedback). The conservation and restoration of tree diversity could enhance this feedback and promote the desirable pathway whereby forest biodiversity contributes to climate stabilization.

Image: 
Yokohama National University

When it comes to climate change, policymakers may fail to see the trees for the forest. Turns out that the trees may be the answer after all, according to a study published by authors from more than seven countries on June 3rd in Nature Climate Change.

"Climate change and biodiversity loss are two major environmental challenges," said paper author Akira S. Mori, professor at Yokohama National University. "But the vast majority of attention has been paid to one unidirectional relationship -- climate change as a cause and biodiversity loss as a consequence."

Mori and his co-authors argue that climate change and species diversity across ecosystems are interdependent: each can influence the other, rather than being linked by a one-way cause and effect. The problem, Mori said, is that this perspective is largely lacking from both policy efforts and science so far.

"There is now recognition of the need for nature-based solutions, which involve working with nature to address society challenges, including carbon storage by restoring forests," Mori said. "However, natural climate solutions are currently missing biodiversity as part of the equation: it is not yet widely appreciated as a powerful contributor to climate stabilization."

To quantify how biodiversity, or the lack thereof, might influence climate change, the researchers used a multi-faceted modelling approach to assess how mitigation efforts impacted the diversity of woody plant species -- namely, trees and shrubs -- that can enable forests to store carbon. They divided the forested areas of Earth into 115 million grid cells, allowing them to analyze how shifts in species richness on the local level could change primary productivity -- the ability to process carbon dioxide into other, benign and beneficial products, such as energy and oxygen. The researchers considered these changes and impacts against a baseline scenario in which global temperatures continue to rise and another scenario in which climate change is mitigated before reaching temperature increases of two degrees Celsius by the end of the 21st century.

"We found that greenhouse gas mitigation could help maintain tree diversity, and thereby avoid a nine to 39% reduction in terrestrial primary productivity across different biomes, which could otherwise occur over the next 50 years," Mori said, noting that avoiding such a reduction could have significant social and economic benefits for communities.

The researchers scaled up the local estimates to understand how countries with varied biomes might fare in potential scenarios of reduced biodiversity and unabated climate change.

"We found that countries with the highest country-level social cost of carbon -- the marginal damage expected to occur in a particular country as a consequence of additional carbon dioxide emissions produced anywhere in the world -- have the greatest incentive to mitigate climate change to avoid its economic damages and also tend to be the countries where climate change mitigation could greatly help maintain primary productivity by safeguarding tree diversity, regardless of model or scenario," Mori said.

For example, the United States and China, two of the biggest carbon producers, would likely experience the most significant economic damages due to global warming, which, Mori said, incentivize the countries to maintain tree diversity as part of their effort to mitigate emissions.

"Our results emphasize an opportunity for a triple win for climate, biodiversity and society, and highlight that these co-benefits should be the focus of reforestation programs," Mori said.

The researchers are now preparing for two United Nations conferences: COP15, focused on biodiversity, in October, and COP26, focused on climate, in November.

"We are aiming to provide strong implications for international policies since the interdependence of biodiversity and climate change are still not fully recognized by many governments," Mori said.

Credit: 
Yokohama National University

'Significant reduction' in GP trainee burnout following mindfulness programme

Medics training to be GPs reported positive improvement in burnout and resilience after completing a mindfulness course specially designed for doctors

The participants in the study by Warwick Medical School also saw improvements in their wellbeing and stress

By improving the mental wellbeing of trainees the researchers hope to better prepare them for the challenges of general practice and the impact of Covid-19 on the profession

Supports the wider adoption of mindfulness in medical training and the need for larger studies

Medics training to become general practitioners reported a significant positive improvement in their mental wellbeing after participating in a specially-designed mindfulness programme, a study from University of Warwick researchers shows.

The results show that incorporating mindfulness into training for GPs could help them cope better with the pressures of the profession and the challenges of practicing medicine during the pandemic.

The conclusions are drawn from a new study in BMC Medical Education by a team from Warwick Medical School, funded by Health Education England, focusing on a sample of 17 GP trainees working in Coventry and Warwickshire.

Mindfulness is defined as a capacity for enhanced and sustained moment-to-moment awareness of one's own mental and emotional state and being, in the context of one's own immediate environment. For their study, the researchers used the Mindful Practice Curriculum, an intervention designed for doctors: it is structured and addresses issues that are specific to their profession. It has been widely tested in the United States, and the researchers are currently evaluating its effectiveness in the UK.

For this study, 17 GP trainees took part in weekly 1.5-hour group sessions over a six-week period led by a fully trained Mindful Practice tutor. Prior to starting, they completed questionnaires based on validated measures for wellbeing, burnout, stress, mindfulness and resilience. They then completed the same questionnaires after they finished the programme and their scores on both were compared.

Analysing the results, the researchers found significant change for the better in participants' scores for all five categories. There were significant reductions reported amongst the trainees in emotional exhaustion (24.2%) and disengagement (17.7%), both measures of burnout, and in stress (23.3%), as well as improvements in resilience (15.8%) and wellbeing (22%). In addition, 16 trainees (94%) scored above the threshold for emotional exhaustion pre-course, but only 9 (53%) did so afterwards.
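The percentage reductions quoted above are pre-course versus post-course changes in mean questionnaire scores. The sketch below shows that simple calculation; the score values are invented, chosen only to roughly reproduce two of the reported percentages, and are not the trainees' actual data.

```python
# Illustrative only: the pre/post score values below are invented,
# not the trainees' actual questionnaire data.

def percent_change(pre_mean: float, post_mean: float) -> float:
    """Percentage change from the pre-course mean to the post-course mean."""
    return 100.0 * (post_mean - pre_mean) / pre_mean

emotional_exhaustion = {"pre": 2.9, "post": 2.2}   # hypothetical mean scores
stress = {"pre": 20.6, "post": 15.8}

for name, scores in [("emotional exhaustion", emotional_exhaustion),
                     ("perceived stress", stress)]:
    print(f"{name}: {percent_change(scores['pre'], scores['post']):+.1f}%")
```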

Lead author Dr Manuel Villarreal, Honorary Clinical Research Fellow at Warwick Medical School and a practicing GP, said: "As a medic, these are important qualities when engaging in consultations and making decisions. Strengthening all those qualities will help them to be better clinicians, engage better with patients, and it will benefit them at a personal level. The key thing is how you incorporate this type of programme into their training, how we put this in place so that GP trainees acquire these skills.

"It will also allow them to better navigate the challenges of Covid. The pandemic has entailed doing lots of telephone consultations and GP trainees are now having to make different decisions in new scenarios. That comes with additional stress."

Co-author Dr Petra Hanson, PhD student at the University of Warwick and Clinical Research Fellow at University Hospitals Coventry and Warwickshire NHS Trust, said: "This was such a positive and encouraging improvement, even in a small study, that this will hopefully lead to bigger and longer studies. This kind of intervention is feasible and is practical. We have shown that it can work as part of postgraduate training and it should now be tested in other areas."

A previous study by the team showed that GP trainees experienced similar levels of burnout to experienced GPs, but that the majority were willing to use mindfulness as a method to reduce its impact.

Co-author Professor Jeremy Dale, Professor of Primary Care at Warwick Medical School said: "General practitioners at all stages of their career experience considerable stress, often leading to exhaustion and burnout, early retirement and career change. Training to become a GP must not only include focusing on the clinical knowledge and skills needed to care for patients effectively, but also needs to support development of the personal skills needed to cope with being a GP. This is essential to ensuring the sustainability of the profession. As this study shows, mindfulness training offers a readily applicable approach, which it is feasible to deliver as part of GP vocational training. Preventing or relieving emotional exhaustion, stress and burnout is unarguably good for GP trainees. Trainees' wellbeing will almost certainly have an impact on their patients, colleagues and the wider NHS, and so should be a priority in vocational training."

Credit: 
University of Warwick

Most cities in São Paulo state have low potential capacity to adapt to climate change

image: The Urban Adaptation Index (UAI) measures the effectiveness of public policy and legislation or regulation in supporting urban intervention relating to climate change in several areas, such as housing, mobility, agriculture and environment.

Image: 
Marcos Akira Watanabe

 Most cities in São Paulo state (Brazil) have low potential capacity to adapt to climate change in terms of the ability to formulate public policy that facilitates the revamping of their housing and transportation systems, for example, to account for the impact of climate change.

This is the main conclusion of a study conducted by researchers at the University of São Paulo (USP) in partnership with colleagues at the University of Campinas (UNICAMP) and the Federal University of Itajubá (UNIFEI) in Brazil, and the University of Michigan in the United States. 

Researchers linked to a project supported by FAPESP participated in the study. The results are published in the journal Climatic Change.

Key findings were presented on May 19 at the 9th Brazilian-German Dialogue on Science, Research and Innovation, entitled “Cities and climate, the multilevel governance challenge”.

The event, organized by FAPESP in partnership with the German Center for Science and Innovation (DWIH) in São Paulo, was held online on May 17-20.

“We found that most cities in São Paulo state still have a lot of difficulties aligning public policy that can be connected to adaptation to climate change,” said Gabriela Marques Di Giulio, last author of the study and a professor at the University of São Paulo’s School of Public Health (FSP-USP).

To help cities assess their capacity to cope with the impact of climate change by implementing policies combining sustainability and adaptation in the short and long term, the researchers developed an Urban Adaptation Index (UAI) that measures the effectiveness of public policy and legislation or regulation in supporting urban intervention relating to climate change in several areas, such as housing, mobility, agriculture and environment. 

“The indicators for the UAI can be based on public data, such as the census carried out by IBGE [Brazil’s national statistics bureau], so the index is easily accessible and can be dynamically updated to reflect changes occurring in cities,” Di Giulio said. 
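The article does not give the UAI formula, but composite indices of this kind are typically built by scoring each public-data indicator, averaging within each dimension, and then averaging across dimensions. The sketch below is a generic illustration of that construction with invented dimension and indicator names and values; it is not the published index definition.

```python
# Generic composite-index sketch. Dimension and indicator names/values are
# invented for illustration; they are not the published UAI definition.

from statistics import mean

city_indicators = {
    "housing":     {"housing_plan": 1, "risk_area_register": 0},
    "mobility":    {"mobility_plan": 1, "cycling_policy": 1},
    "agriculture": {"urban_agriculture_program": 0},
    "environment": {"climate_law": 0, "green_area_policy": 1},
    "sanitation":  {"drainage_plan": 1},
}

def urban_adaptation_index(indicators: dict) -> float:
    """Average binary indicators within each dimension, then average the
    dimension scores, giving an index between 0 and 1."""
    dimension_scores = [mean(values.values()) for values in indicators.values()]
    return mean(dimension_scores)

print(f"UAI (illustrative): {urban_adaptation_index(city_indicators):.2f}")
```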

The researchers used the index to evaluate São Paulo state’s 645 municipalities. The results pointed to low scores for over half of the municipalities in the five dimensions evaluated.

Cities located in metropolitan areas, where more than 50% of the state’s population live, had the highest scores. “The UAI can help strengthen cities’ capacity to adapt and provoke a more knowledgeable debate about how they should best prepare for climate change,” Di Giulio said. 

The role of cities

Cities should establish strategies to reduce greenhouse gas emissions, achieve climate neutrality, and contribute to global efforts to mitigate the effects of climate change, said Sabine Schlacke, a professor at the University of Münster (WWU) in Germany.

Because they are part of nation-states, cities do not have foreign policy powers and their capacity to act according to the international climate change mitigation agenda is limited, but they can organize cooperation with other cities in networks like C40. “Cities are not addressed directly by the Paris Agreement, for example. Nevertheless, they can be involved in the development of nationally determined contributions, the NDCs that have to be submitted by member states,” Schlacke said.

According to Cathrin Zengerling, a professor at the University of Freiburg, few cities in the world are close to achieving the greenhouse gas emission targets required to keep the global average temperature rise well below 2 °C and limit it to 1.5 °C above the preindustrial level, as established by the Paris Agreement. 

One is São Paulo. “Among the reasons São Paulo has done so well in this regard is that most of its electricity comes from hydropower,” Zengerling said. 

The cities that most emit greenhouse gases are Denver, Chicago and Los Angeles in the US, and Shanghai and Beijing in China, she added.

The event website, with links to full recordings of all four days, is at: fapesp.br/eventos/dwhi9

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo

Researchers study historic Mississippi flow and impacts of river regulation

In "Atchafalaya," John McPhee's essay in the 1989 book The Control of Nature, the author chronicles efforts by the U.S. Army Corps of Engineers to prevent the Atchafalaya River from changing the course of the Mississippi River where they diverge, due to the Atchafalaya's steeper gradient and more direct route to the gulf. McPhee's classic essay proved inspirational to John Shaw, an assistant professor of geosciences who called it "a foundational text."

Indeed, his latest work adds to the story.

In a recent paper published in the American Geophysical Union's journal, Water Resources Research, Shaw and his fellow researchers, Kashauna G. Mason, Hongbo Ma and Gordon W. McClain III, examine the critical period before the decision was made in 1950 to create a river control system at the junction of the two rivers to get a clearer understanding of the rivers' natural state - and how regulation might be fine-tuned moving forward to preserve Louisiana coastlands.

The paper, Influences on Discharge Partitioning on a Large River Delta: Case Study of the Mississippi-Atchafalaya Diversion, 1926-1950, seeks to resolve lingering questions about the rate at which the Atchafalaya River captured water from the Mississippi River and the degree to which it would have changed the course of the river.

"You basically have two conduits to the ocean, and you can think of them as competing for water and sediment. You've got the Mississippi and the Atchafalaya -- anything to widen or grow one branch will pull more water from the other," Shaw said. "More and more water was going down the Atchafalaya, so that was what everybody focused on."

By looking at old Army Corps of Engineers surveys, McClain and Mason were able to painstakingly digitize more than 100,000 data points, bringing historic measurements into the modern field. From this, Ma and Shaw were able to model the hydrodynamic flow through the channel network.

An unexpected finding was that while the Atchafalaya was widening, increasing its flow, the Mississippi was widening, too, just not as fast as the Atchafalaya. "That's really interesting," Shaw explained, "because I think most people assumed the Mississippi was shrinking simply because the Atchafalaya was expanding." In short, the situation may not have been as dire as initially thought.

Ultimately, the team found that erosion of the upper Atchafalaya accounted for about 73 percent of the increased water flow, while dredging of the lower part of the river accounted for the remainder, meaning the increased flow was a product of both natural and man-made forces. While man-made controls on the flow are secondary, they weren't properly understood at the time McPhee wrote "Atchafalaya."

Why is this important to know?

As Shaw explains it, Louisiana is slowly being submerged due to rising sea levels and human impact on the river system. Billions of dollars are being spent to prevent that from happening.

The Atchafalaya-Mississippi Diversion is the linchpin for controlling where water and sediment go, whether down one river or the other, and for determining which marshes will be nourished by sediment. Hundreds of megatons of sediment come down the Mississippi every year, and if more goes down the Atchafalaya, that impacts the Mississippi and the coastline it shapes.

The ultimate goal of the research is to better understand how these rivers are being regulated and what would happen in the absence of regulation. By focusing on the years 1926 to 1950, Shaw and his team are seeking a clearer picture of what the river looked like before regulation began -- and of how it might be fine-tuned moving forward. This research was funded by a Department of Energy grant to understand river channel dynamics along coastlines.

Now that the paper is done, Shaw wants to send a copy to McPhee, a professor emeritus at Princeton. "I just want to let him know this was inspired by him," he said, "maybe not written so well, but it updates the story he broke to the world in the 80s."

Credit: 
University of Arkansas

Projected acidification of the Great Barrier Reef could be offset by ten years

New research has shown that by injecting an alkalinizing agent into the ocean along the length of the Great Barrier Reef, it would be possible, at the present rate of anthropogenic carbon emissions, to offset ten years’ worth of ocean acidification.

The research, by CSIRO Oceans and Atmosphere, Hobart, used a high-resolution model developed for the Great Barrier Reef region to study the impact of artificial ocean alkalinization on the acidity of the waters of the Great Barrier Reef. The study assumes the use of existing shipping infrastructure to inject a source of alkalinity into the ocean, an approach that can also be viewed as accelerating the natural chemical weathering of minerals. Their results are published today in the IOP Publishing journal Environmental Research Letters.

The Great Barrier Reef is a globally significant coral reef system that supports productive and diverse ecosystems. At present, it is facing unprecedented stress from ocean warming, tropical cyclones, sediment and nutrient runoff, marine pests, and ocean acidification. Among these stressors, ocean acidification represents one of the most significant threats to the long-term viability of the reef, since it impacts the ability of the corals to build and repair their hard structures and recover from bleaching events.

In response to the declining health of coral reef ecosystems, a wide range of potential intervention concepts and technologies are currently under consideration, with the goal of minimising environmental pressures and enhancing the resilience of the coral reef ecosystem. These include active and direct environmental engineering approaches, such as artificial ocean alkalinization, a technique to offset or ameliorate the changes associated with ocean acidification and enhance oceanic carbon uptake. Essentially, artificial ocean alkalinization involves adding a source of alkalinity, such as olivine, to seawater, thereby "reversing" the shift in the carbon chemistry equilibrium process that occurs when the ocean takes up anthropogenic carbon. Olivine is an abundant mineral resource, which is already mined near the Great Barrier Reef.
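
As a rough illustration of that chemistry (a simplified sketch, not the study's coupled hydrodynamic-biogeochemical model), the Python snippet below solves a stripped-down seawater carbonate system, considering carbonate alkalinity only, holding dissolved inorganic carbon fixed and ignoring borate and air-sea CO2 exchange, to show that adding alkalinity pushes the pH back up. The equilibrium constants, concentrations and the 50 µmol/kg addition are illustrative values, not figures from the paper.

import math

K1 = 1.4e-6   # first dissociation constant of carbonic acid in seawater (approx.)
K2 = 1.2e-9   # second dissociation constant (approx.)

def carbonate_alkalinity(h, dic):
    """[HCO3-] + 2[CO3--] (mol/kg) for a given [H+] and dissolved inorganic carbon."""
    denom = h * h + K1 * h + K1 * K2
    return dic * (K1 * h + 2.0 * K1 * K2) / denom

def ph_from_alkalinity(alk, dic, h_lo=1e-10, h_hi=1e-6):
    """Bisect on [H+] until the modeled carbonate alkalinity matches `alk`."""
    for _ in range(100):
        h_mid = (h_lo + h_hi) / 2.0
        if carbonate_alkalinity(h_mid, dic) > alk:
            h_lo = h_mid   # water is still too alkaline here, so [H+] must be higher
        else:
            h_hi = h_mid
    return -math.log10((h_lo + h_hi) / 2.0)

DIC = 2000e-6    # mol/kg, illustrative surface-ocean value
ALK = 2250e-6    # mol/kg, illustrative carbonate alkalinity
ADDED = 50e-6    # hypothetical alkalinity boost from the injected agent, mol/kg

print(f"pH before alkalinity addition: {ph_from_alkalinity(ALK, DIC):.2f}")
print(f"pH after alkalinity addition:  {ph_from_alkalinity(ALK + ADDED, DIC):.2f}")

With these illustrative numbers the computed pH rises by a little under 0.1 unit, the direction of change that, scaled and modeled properly, underlies the offset reported in the study.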

The goal of this study was to investigate offsetting the impact of ocean acidification at a scale not previously considered. According to the authors, "The majority of the artificial ocean alkalinization modeling studies to date have focused on the potential for alkalinization as a carbon dioxide removal technique. Few studies have explored the role of alkalinization with a focus on offsetting the changes associated with ocean acidification at a regional scale." The study therefore used a recently developed 4 km-resolution coupled hydrodynamic-biogeochemical model, validated for the Great Barrier Reef region, which allowed the impact of the alkalinity injection to be simulated on individual reefs along the full length of the Great Barrier Reef (~2,000 km) for the first time. The results showed that by releasing the alkalinizing agent from an existing shipping lane, the resulting de-acidification would reach almost the whole of the Great Barrier Reef.

This report describes the novel and timely use of a regional model as a testbed for an ocean acidification mitigation technique. The study found that, assuming the use of existing shipping infrastructure (a bulk carrier releasing 30,000 tons per day) as the alkalinity delivery mechanism, artificial ocean alkalinization would offset or ameliorate the projected acidification by ten years on 250 reefs. In doing so, it would also sequester 35,000 t of carbon in the ocean per year, or 0.0001% of current global CO2 emissions.

Credit: 
IOP Publishing

Super productive 3D bioprinter could help speed up drug development

image: The high-throughput 3D bioprinting setup performing prints on a standard 96-well plate.

Image: 
Biofabrication

A 3D printer that rapidly produces large batches of custom biological tissues could help make drug development faster and less costly. Nanoengineers at the University of California San Diego developed the high-throughput bioprinting technology, which 3D prints with record speed—it can produce a 96-well array of living human tissue samples within 30 minutes. Having the ability to rapidly produce such samples could accelerate high-throughput preclinical drug screening and disease modeling, the researchers said.

The process for a pharmaceutical company to develop a new drug can take up to 15 years and cost up to $2.6 billion. It generally begins with screening tens of thousands of drug candidates in test tubes. Successful candidates then get tested in animals, and any that pass this stage move on to clinical trials. With any luck, one of these candidates will make it to market as an FDA-approved drug.

The high-throughput 3D bioprinting technology developed at UC San Diego could accelerate the first steps of this process. It would enable drug developers to rapidly build up large quantities of human tissues on which they could test and weed out drug candidates much earlier.

“With human tissues, you can get better data—real human data—on how a drug will work,” said Shaochen Chen, a professor of nanoengineering at the UC San Diego Jacobs School of Engineering. “Our technology can create these tissues with high-throughput capability, high reproducibility and high precision. This could really help the pharmaceutical industry quickly identify and focus on the most promising drugs.”

The work was published in the journal Biofabrication.

The researchers note that while their technology might not eliminate animal testing, it could minimize failures encountered during that stage.

“What we are developing here are complex 3D cell culture systems that will more closely mimic actual human tissues, and that can hopefully improve the success rate of drug development,” said Shangting You, a postdoctoral researcher in Chen’s lab and co-first author of the study.

The technology rivals other 3D bioprinting methods not only in terms of resolution—it prints lifelike structures with intricate, microscopic features, such as human liver cancer tissues containing blood vessel networks—but also speed. Printing one of these tissue samples takes about 10 seconds with Chen’s technology; printing the same sample would take hours with traditional methods. Also, it has the added benefit of automatically printing samples directly in industrial well plates. This means that samples no longer have to be manually transferred one at a time from the printing platform to the well plates for screening.

“When you’re scaling this up to a 96-well plate, you’re talking about a world of difference in time savings—at least 96 hours using a traditional method plus sample transfer time, versus around 30 minutes total with our technology,” said Chen.

Reproducibility is another key feature of this work. The tissues that Chen’s technology produces are highly organized structures, so they can be easily replicated for industrial-scale screening. It’s a different approach than growing organoids for drug screening, explained Chen. “With organoids, you’re mixing different types of cells and letting them self-organize to form a 3D structure that is not well controlled and can vary from one experiment to another. Thus, they are not reproducible in structure, properties and function. But with our 3D bioprinting approach, we can specify exactly where to print different cell types, the amounts and the micro-architecture.”

How it works

To print their tissue samples, the researchers first design 3D models of biological structures on a computer. These designs can even come from medical scans, so they can be personalized for a patient’s tissues. The computer then slices the model into 2D snapshots and transfers them to millions of microscopic-sized mirrors. Each mirror is digitally controlled to project patterns of violet light—405 nanometers in wavelength, which is safe for cells—in the form of these snapshots. The light patterns are shined onto a solution containing live cell cultures and light-sensitive polymers that solidify upon exposure to light. The structure is rapidly printed one layer at a time in a continuous fashion, creating a 3D solid polymer scaffold encapsulating live cells that will grow and become biological tissue.
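
As a simplified stand-in for the slicing step (not the team's actual printer software), the short Python example below voxelizes a hypothetical design and cuts it into per-layer binary masks of the kind a digital micromirror device would project; the sphere geometry, grid size and layer count are assumptions made only for the example.

import numpy as np

def voxelize_sphere(grid=128, radius=0.4):
    """Build a hypothetical 3D occupancy grid: a solid sphere inside a unit cube."""
    axis = np.linspace(-0.5, 0.5, grid)
    x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
    return (x**2 + y**2 + z**2) <= radius**2   # boolean array, True = solid material

def slice_into_masks(volume):
    """Split the 3D volume along z into 2D masks, one light pattern per printed layer."""
    return [volume[:, :, k] for k in range(volume.shape[2])]

design = voxelize_sphere()
masks = slice_into_masks(design)
print(f"{len(masks)} layer masks of shape {masks[0].shape}; "
      f"middle layer exposes {int(masks[64].sum())} mirror pixels")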

The digitally controlled micromirror array is key to the printer’s high speed. Because it projects entire 2D patterns onto the substrate as it prints layer by layer, it produces 3D structures much faster than other printing methods, which scan each layer line by line using either a nozzle or a laser.

“An analogy would be comparing the difference between drawing a shape using a pencil versus a stamp,” said Henry Hwang, a nanoengineering Ph.D. student in Chen’s lab who is also co-first author of the study. “With a pencil, you’d have to draw every single line until you complete the shape. But with a stamp, you mark that entire shape all at once. That’s what the digital micromirror device does in our technology. It’s orders of magnitude difference in speed.”

This recent work builds on the 3D bioprinting technology that Chen’s team invented in 2013. It started out as a platform for creating living biological tissues for regenerative medicine. Past projects include 3D printing liver tissues, blood vessel networks, heart tissues and spinal cord implants, to name a few. In recent years, Chen’s lab has expanded the use of their technology to print coral-inspired structures that marine scientists can use for studying algae growth and for aiding coral reef restoration projects.

Now, the researchers have automated the technology in order to do high-throughput tissue printing. Allegro 3D, Inc., a UC San Diego spin-off company co-founded by Chen and a nanoengineering Ph.D. alumnus from his lab, Wei Zhu, has licensed the technology and recently launched a commercial product.

Credit: 
University of California - San Diego