AI-enhanced precision medicine identifies novel autism subtype

Highlights:

Previously autism diagnosed by symptoms only

Subtype characterized by abnormal lipid levels

Autism affects estimated 1 in 54 children in U.S.

CHICAGO --- A novel precision medicine approach enhanced by artificial intelligence (AI) has laid the groundwork for what could be the first biomedical screening and intervention tool for a subtype of autism, reports a new study from Northwestern University, Ben Gurion University, Harvard University and the Massachusetts Institute of Technology.

The approach is believed to be the first of its kind in precision medicine.

"Previously, autism subtypes have been defined based on symptoms only -- autistic disorder, Asperger syndrome, etc. -- and they can be hard to differentiate as it is really a spectrum of symptoms," said study co-first author Dr. Yuan Luo, associate professor of preventive medicine (health and biomedical informatics) at the Northwestern University Feinberg School of Medicine. "The autism subtype characterized by abnormal lipid levels identified in this study is the first multidimensional evidence-based subtype that has distinct molecular features and an underlying cause."

Luo is also chief AI officer at the Northwestern University Clinical and Translational Sciences Institute and the Institute of Augmented Intelligence in Medicine. He also is a member of the McCormick School of Engineering.

The findings were published August 10 in Nature Medicine.

Autism affects an estimated 1 in 54 children in the United States, according to the Centers for Disease Control and Prevention. Boys are four times more likely than girls to be diagnosed. Most children are diagnosed after age 4, although autism can be reliably diagnosed based on symptoms as early as age 2.

The subtype of the disorder studied by Luo and colleagues is known as dyslipidemia-associated autism, which represents 6.55% of all diagnosed autism spectrum disorders in the U.S.

"Our study is the first precision medicine approach to overlay an array of research and health care data -- including genetic mutation data, sexually different gene expression patterns, animal model data, electronic health record data and health insurance claims data -- and then use an AI-enhanced precision medicine approach to attempt to define one of the world's most complex inheritable disorders," said Luo.

The idea is similar to that of today's digital maps. In order to get a true representation of the real world, the team overlaid different layers of information on top of one another.

"This discovery was like finding a needle in a haystack, as there are thousands of variants in hundreds of genes thought to underlie autism, each of which is mutated in less than 1% of families with the disorder. We built a complex map, and then needed to develop a magnifier to zoom in," said Luo.

To build that magnifier, the research team identified clusters of gene exons that function together during brain development. They then applied a state-of-the-art AI graph clustering algorithm to gene expression data. Exons are the parts of genes that contain the information coding for a protein. Proteins do most of the work in our cells and organs, or in this case, the brain.
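The study's actual pipeline is not described in this release. As a rough illustration of the general idea -- grouping exons whose expression rises and falls together across samples -- a minimal sketch, using made-up toy data and a simple thresholded-correlation graph rather than the authors' algorithm, might look like:

```python
import numpy as np

def coexpression_clusters(expr, threshold=0.8):
    """Group exons whose expression profiles are highly correlated.

    expr: (n_exons, n_samples) array of expression levels.
    Returns a list of clusters (sets of exon indices), found as
    connected components of the thresholded correlation graph.
    """
    corr = np.corrcoef(expr)          # pairwise Pearson correlation
    adj = corr >= threshold           # edge between strongly co-expressed exons
    n = expr.shape[0]
    seen, clusters = set(), []
    for start in range(n):
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:                  # depth-first search over the graph
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(j for j in range(n) if adj[node, j] and j not in comp)
        seen |= comp
        clusters.append(comp)
    return clusters

# Toy data: exons 0-1 rise together across four samples; exons 2-3 fall together.
expr = np.array([
    [1.0, 2.0, 3.0, 4.0],
    [1.1, 2.1, 3.2, 4.1],
    [4.0, 3.0, 2.0, 1.0],
    [3.9, 3.1, 1.9, 1.1],
])
print(coexpression_clusters(expr))    # two clusters: {0, 1} and {2, 3}
```

Real methods operate on thousands of exons and use far more sophisticated clustering, but the underlying object is the same: a graph whose edges encode co-expression.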

"The map and magnifier approach showcases a generalizable way of using multiple data modalities for subtyping autism and it holds the potential for many other genetically complex diseases to inform targeted clinical trials," said Luo.

Using the tool, the research team also identified a strong association of parental dyslipidemia with autism spectrum disorder in their children. They further saw altered blood lipid profiles in infants later diagnosed with autism spectrum disorder. These findings have led the team to pursue subsequent studies, including clinical trials that aim to promote early screening and early intervention of autism.

"Today, autism is diagnosed based only on symptoms, and the reality is when a physician identifies it, it's often when early and critical brain developmental windows have passed without appropriate intervention," said Luo. "This discovery could shift that paradigm."

Credit: 
Northwestern University

Rare 'boomerang' earthquake observed along Atlantic Ocean fault line

video: Tracking how the rupture evolved over the fracture zone.

Image: 
Hicks et al

Scientists have tracked a 'boomerang' earthquake in the ocean for the first time, providing clues about how they could cause devastation on land.

Earthquakes occur when rocks suddenly break on a fault - a boundary between two blocks or plates. During large earthquakes, the breaking of rock can spread down the fault line. Now, an international team of researchers have recorded a 'boomerang' earthquake, where the rupture initially spreads away from the initial break but then turns and runs back the other way at higher speed.

The strength and duration of rupture along a fault influences the amount of ground shaking on the surface, which can damage buildings or create tsunamis. Ultimately, knowing the mechanisms of how faults rupture and the physics involved will help researchers make better models and predictions of future earthquakes, and could inform earthquake early-warning systems.

The team, led by scientists from the University of Southampton and Imperial College London, report their results today in Nature Geoscience.

While large (magnitude 7 or higher) earthquakes occur on land and have been measured by nearby networks of monitors (seismometers), these earthquakes often trigger movement along complex networks of faults, like a series of dominoes. This makes it difficult to track the underlying mechanisms of how this 'seismic slip' occurs.

Under the ocean, many types of fault have simple shapes, offering the possibility to get under the bonnet of the 'earthquake engine'. However, they are far from large networks of seismometers on land. The team made use of a new network of underwater seismometers to monitor the Romanche fracture zone, a fault line stretching 900 km under the Atlantic near the equator.

In 2016, they recorded a magnitude 7.1 earthquake along the Romanche fracture zone and tracked the rupture along the fault. This revealed that initially the rupture travelled in one direction before turning around midway through the earthquake and breaking the 'seismic sound barrier', becoming an ultra-fast earthquake.

Only a handful of such earthquakes have been recorded globally. The team believe that the first phase of the rupture was crucial in causing the second, rapidly slipping phase.

First author of the study Dr Stephen Hicks, from the Department of Earth Sciences and Engineering at Imperial, said: "Whilst scientists have found that such a reversing rupture mechanism is possible from theoretical models, our new study provides some of the clearest evidence for this enigmatic mechanism occurring in a real fault.

"Even though the fault structure seems simple, the way the earthquake grew was not, and this was completely opposite to how we expected the earthquake to look before we started to analyse the data."

However, the team say that if similar types of reversing or boomerang earthquakes can occur on land, a seismic rupture turning around mid-way through an earthquake could dramatically affect the amount of ground shaking caused.

Given the lack of observational evidence before now, this mechanism has been unaccounted for in earthquake scenario modelling and assessments of the hazards from such earthquakes. The detailed tracking of the boomerang earthquake could allow researchers to find similar patterns in other earthquakes and to add new scenarios into their modelling and improve earthquake impact forecasts.

The ocean bottom seismometer network used was part of the PI-LAB and EUROLAB projects, a million-dollar experiment funded by the Natural Environment Research Council in the UK, the European Research Council, and the National Science Foundation in the US.

Credit: 
Imperial College London

Quality of care at rural hospitals may not differ as much as reported, study suggests

PROVIDENCE, R.I. [Brown University] -- Critical access hospitals (CAHs) provide care to Americans living in remote rural areas. As important health care access points, these hospitals serve a population that is disproportionately older, impoverished and burdened by chronic disease. In 1997, with small rural hospitals under increasing financial strain and closing in large numbers, the federal CAH designation was established to increase their viability and to ensure that rural communities have adequate access to health care.

Prior research studies comparing the quality of care provided by CAHs and non-CAHs have found that risk-adjusted mortality rates at CAHs were higher, and the hospitals' quality of care, therefore, lower. But a new study led by investigators at the Center for Gerontology and Healthcare Research in Brown's School of Public Health suggests that standard risk-adjustment methodologies have been unfairly penalizing CAHs.

According to the study, for Medicare beneficiaries in rural areas who were hospitalized during the period of 2007 to 2017, CAHs submitted significantly fewer hospital diagnosis codes than did non-CAHs. The primary reason for the relative under-reporting of diagnoses at CAHs has to do with differences in Medicare reimbursements -- while non-CAHs are incentivized by Medicare to complete diagnosis coding, CAHs, which receive cost-based reimbursements, are not.

"When payments for episodes of care are tied to the acuity of patients, health care providers have the incentive to fully report or even overstate acuity," said study senior author Momotazur Rahman, an associate professor of health services, policy and practice at Brown. "Since payments for non-CAHs are dependent on reported acuity while payments for CAHs are not, non-CAH patients will appear comparatively sicker than they actually are."

Because mortality rates are adjusted for severity of illness -- acuity, in Rahman's words -- the result is that CAHs appear to have higher mortality rates for patients with seemingly similar conditions, when in reality their patients may be sicker than the risk adjustment gives them credit for.

The study was published in the Journal of the American Medical Association on Tuesday, Aug. 4.

How did the researchers determine that CAHs tend to underreport diagnoses? In 2010, Medicare increased the allowable number of billing codes for hospitalizations from 10 to 25.

"We observed a large jump in reported acuity among non-CAH patients in 2010," Rahman said, "but we saw a much smaller jump for CAH patients. We found that due to this difference in acuity reporting, when compared to non-CAHs, the risk-adjusted performance of CAHs on short-term mortality measures looks much worse than it actually is."

The CAH program, created to prevent rural hospitals from closing, has repeatedly come under threat. Given that in many parts of the U.S., CAHs serve as sole health care providers, Rahman said that examining differences in quality of care is important for understanding the value of the CAH program and informing decisions about the allocation of funding for rural health care.

The finding that short-term mortality outcomes at rural CAHs may not differ from those of non-CAHs after accounting for different coding practices, he added, is essential knowledge for ensuring timely access to acute care for vulnerable rural communities.

Credit: 
Brown University

Individual differences in the brain

image: Zebrafish react to loud sounds with individual differences. Selection for pronounced behavioral responses shows up, within a few generations, in differences in brain activity as well.

Image: 
MPI of Neurobiology / Kuhl

Personality varies widely. There are bold and reserved individuals, who behave very differently when faced with the same environmental stimulus. What is true for humans also applies to fish: their behavior shows a range of individual differences. By selectively breeding zebrafish, scientists from the Max Planck Institute of Neurobiology were able to show that distinct personality traits rapidly emerge and manifest not only in the behavior, but also through far-reaching changes in the brain.

Young zebrafish are just five millimeters long and almost transparent. Nevertheless, the tiny fish display a spectrum of behavior in response to external stimuli. While some animals flee in panic at a loud sound, other fish remain calm. If the sound is repeated, fish in one group learn to ignore it quickly, while others never really get used to it. Between these two extremes - relaxed or skittish - there is a whole range of behavioral expressions.

Carlos Pantoja and colleagues in Herwig Baier's team were now able to show that selection for a specific behavioral trait can also change the fishes' brain activity surprisingly quickly. The researchers mated animals only within the extremely relaxed and the extremely skittish groups. After just two generations, the brains of the fry selected for skittishness differed significantly from the brains of the calm offspring.

In the transparent fish larvae, the scientists were able to observe which brain regions were activated by the loud sound. The offspring of the two behavioral extremes showed clear differences in neuronal activity in a part of the hypothalamus and in the so-called dorsal raphe nucleus. A noticeable difference between these two brain regions is that the plastic part of the hypothalamus contains neurons that secrete dopamine, while the raphe nucleus mainly produces serotonin. Dopamine and serotonin are two prominent neuromodulators that have also been associated with personality differences and even psychiatric conditions in humans.

"The ratio of cell activity in these two brain regions could regulate the sensitivity of an individual fish's reaction to the sound and how quickly it gets used to it," explains Carlos Pantoja. "However, this is certainly only one component, as there are also differences in a whole range of other brain areas."

Interestingly, the offspring of the two fish groups not only showed the expected differences in their startle response. While in the larval stage, the more relaxed fish fry were also significantly less spontaneously active. As adults, these fish then adapted much more slowly to a new environment than adult jumpy fish. "At first glance, this sounds paradoxical. But it could be that the early tendency to fearful overreactions tends to dampen the later stress response," says Pantoja. Similar long-term effects of early stress processing have been reported in mammals.

In both groups of fish, the dopamine-releasing part of the hypothalamus was activated during the startle reaction. However, while this region was only switched on by the sound in the relaxed fish, it was permanently active in the skittish fish. After a mere two generations of behavioral selection, these animals already seemed to be constantly prepared to escape.

"The pace at which personality traits can be shifted and fixed in evolution is remarkable," reflects Herwig Baier. "The process might be similarly rapid in populations of Homo sapiens." The zebrafish could perhaps reveal some of the involved brain structures and the genetic basis of this plasticity.

Credit: 
Max-Planck-Gesellschaft

Detailed molecular workings of a key system in learning and memory formation

image: Biochemist Margaret Stratton at UMass Amherst reports how her lab used advanced sequencing technology to determine all variants of a single protein/enzyme, CaMKII, in the hippocampus, the brain's memory center. There, CaMKII is required for learning and memory. Mutations contribute to conditions such as autism spectrum disorders and developmental disabilities, among others.

Image: 
UMass Amherst

AMHERST, Mass. - One of the new realities in biomedical research is that it's increasingly difficult to use a general approach to score advances. Now, investigations into disease mechanisms, for example, are often conducted at the molecular level by specialists who dedicate years to interrogating a single protein or signaling pathway.

One such scientist is biochemist Margaret Stratton at the University of Massachusetts Amherst, whose lab reports how they used advanced sequencing technology to clear up uncertainty and determine all variants of a single protein/enzyme known as calcium/calmodulin-dependent protein kinase II (CaMKII) in the hippocampus, the brain's memory center.

It plays a central role in calcium signaling throughout the body, Stratton explains. In the hippocampus, CaMKII is required for learning and memory, and when mutations occur they contribute to conditions such as autism spectrum disorders and developmental disabilities, or problems in other systems relating to cardiac pacing and fertility.

Stratton and first authors Roman Sloutsky and Noelle Dziedzic, with others, report in Science Signaling that they found an unexpected new role for the hub domain, or organizational center of the CaMKII molecular complex. Stratton says, "In addition to this known role, we show that this domain affects how sensitive CaMKII is to calcium; it acts like a tuner for sensitivity. This was a surprise. It opens a whole new area for investigation. We also show evidence for how we think it works at the molecular level."

Kinases are quite prevalent in biology, she adds, with more than 500 kinds in humans, but CaMKII is unique with its hub domain. Their unexpected discovery that "the hub actually plays a role in regulating activity gives us a unique handle on CaMKII to potentially control its activity with high specificity."

In vertebrates and humans, genomes encode four CaMKII variants, and each is associated with many different proteins.

"We collaborated with Luke Chao, a structural biologist at Mass General Hospital, and a postdoc in his lab, Sivakumar Boopathy, to use cutting-edge techniques to structurally characterize the different flavors of CaMKII to understand how they may react differently to calcium." They hoped to identify any that have a modulatory or regulatory role and might serve as a new therapeutic target for controlling it or correcting mutations, she notes.

"All CaMKIIs consist of a catalytic kinase domain, a regulatory segment, a variable linker and a hub domain," Stratton explains. When called upon, this molecule adds phosphates where they are needed for cell function. "When calcium levels rise, CaMKII turns on. When they drop, CaMKII activity does too. Our goal was to unravel the differences to better understand how CaMKII does its job in memory formation."

In the CaMKII structure, the hub domain's job is to gather the other domains around it. A kidney bean-shaped kinase domain is attached to the hub by a spaghetti-like linker. When subunits are assembled into a working complex it looks like a flower, where the kinase domains are petals around the central hub domain, she points out.

In their sequencing experiments, Stratton explains, "We found something quite surprising. We discovered that there are more than 70 different CaMKII variants present in the hippocampus. That's an extraordinary number."

Chao's group used cryo-electron microscopy to make images of purified CaMKII, allowing the researchers to see that CaMKII's "action" domain adopts different conformations relative to the hub, Stratton says, "In the 70 or so different variants, the petals are likely in a different orientation around the hub. It still looks like a flower, but all the petals are not exactly the same. This orientation we think is dependent on the hub identity, which is dictated by the sequence of the gene."

Credit: 
University of Massachusetts Amherst

Retesting for COVID-19: UPMC shares its experience

image: Graham Snyder, M.D. from University of Pittsburgh

Image: 
UPMC

PITTSBURGH, Aug. 10, 2020 - In the first large, multicenter analysis of its kind, the 40-hospital UPMC health system today reported its findings on clinician-directed retesting of patients for presence of SARS-CoV-2, the virus that causes COVID-19, in the journal Infection Control & Hospital Epidemiology.

While retesting was uncommon, the UPMC analysis found that patients positive for COVID-19 stayed positive for an average of three weeks and repeating tests in patients who were initially negative very rarely led to a positive result.

"In the U.S., COVID-19 testing capacity is limited -- not everyone who wants a test can get one -- so we have to be judicious in how we use it," said co-author Graham Snyder, M.D., M.S., medical director of infection prevention and hospital epidemiology at UPMC. "Often, testing decisions are left to individual clinicians, which leads to questions about when and whom to retest for COVID-19, how often false positives or negatives might occur, and the duration of positivity. So, it is important that we understand the value of retesting and what information it can, and cannot, provide."

UPMC uses a nucleic acid polymerase chain reaction (PCR) test for SARS-CoV-2 and specimen collection is done with a nasopharyngeal swab by trained clinicians. The health system developed its COVID-19 test in early March 2020 in anticipation of the tremendous need for diagnostic capabilities.

Snyder and his colleagues worked with the Wolff Center at UPMC -- the health system's quality care and improvement center -- to review the results of more than 30,000 COVID-19 tests performed on adult patients who received care through one of UPMC's 40 academic, community and specialty hospitals, or 700 doctors' offices and outpatient sites in Pennsylvania, New York and Maryland. The tests were performed between March 3 and May 3, 2020. Of those tests, 485 were repeated at least once.

Among 74 patients who initially tested positive and were retested, about half were still positive and half were negative. The median time between an initial positive and a repeat positive was 18 days, whereas the median time from initial positive to a negative test was 23 days, suggesting that PCR tests may remain positive until some point in between, around 21 days. The most common reason for repeat testing on someone who initially tested positive was to determine if infection prevention protocols needed to be continued when the patient was discharged.

Among the 418 patients who initially tested negative and were retested, 96.4% were still negative on retesting. Pre-operative asymptomatic screening was the most common reason negative patients were retested, followed by clinical suspicion that the first test was a false negative. For the 15 patients who went from negative to positive, the median time between tests was eight days.

The researchers noted that the data was not collected as part of a formal study and testing was done at each clinician's discretion, so they were unable to calculate a true false negative rate.

"Although our analysis cannot provide definitive clinical guidance regarding retesting for COVID-19, it does point to several interesting areas for further research," said lead author Amy Kennedy, M.D., M.S., a clinical research fellow in Pitt's Department of Medicine at the time the analysis was performed. "These include identifying predictors of initial false negatives and providing a better estimate for how long someone who tests positive could transmit the virus to others."

Credit: 
University of Pittsburgh

New USask-led research reveals previously hidden features of plant genomes

image: P2IRC researcher Andrew Sharpe with the PromethION high throughput DNA and RNA sequencing device at GIFS.

Image: 
David Stobbe

SASKATOON - An international team led by the Plant Phenotyping and Imaging Research Centre (P2IRC) at the University of Saskatchewan (USask) and researchers at Agriculture and Agri-Food Canada (AAFC) has decoded the full genome for the black mustard plant--research that will advance breeding of oilseed mustard crops and provide a foundation for improved breeding of wheat, canola and lentils.

The team, co-led by P2IRC researchers Andrew Sharpe and Isobel Parkin, used a new genome sequencing technology (Nanopore) that results in very long "reads" of DNA and RNA sequences, providing information for crop breeding that was previously not available. The results are published today in Nature Plants.

"This work provides a new model for building other genome assemblies for crops such as wheat, canola and lentils. Essentially, it's a recipe for generating a genome sequence that works for any crop," said Sharpe, director of P2IRC.

"We now know that we can get the same quality of genomic data and level of information about genetic variation for these important national and international crops. This means we can make breeding more efficient because we can more easily select genes for specific desired traits."

Sharpe said his team is already using this software platform in the Omics and Precision Agriculture Lab (OPAL) at the USask Global Institute for Food Security (GIFS) to sequence larger and more complex crop genomes.

Black mustard (Brassica nigra), commonly used in seed form as a cooking spice, is grown on the Indian sub-continent and is closely related to mustard and canola crops grown in Canada. The research provides a clearer, "higher resolution" view of the plant's genes and gives researchers and breeders a more defined view of which genes are responsible for which traits.

The resulting gene assembly for black mustard also helps explain how the black mustard genome differs from those of its close crop relatives--such as cabbage, turnip and canola.

The team also uncovered the first direct evidence of functional centromeres, structures on chromosomes essential for plant fertility, and detected other previously hard to identify regions of the genome. This knowledge provides a foundation for improving crop production.

Parkin, a USask adjunct professor and P2IRC member, said the use of long-read sequence data has enabled unprecedented access to previously hidden features of plant genomes.

"This provides not only insights into how crops evolve but enables the identification of novel structural variation--now known to play an important role in the control of many key agronomic traits," said Parkin, also the lead research scientist with AAFC Saskatoon Research Centre.

They also found in the sequence multiple copies of certain genes that express specific traits. This could mean that certain traits, such as fungal resistance, could be expressed more strongly through several genes.

Other USask members of the team include GIFS researcher Zahra-Katy Navabi and bioinformatics specialist Chu Shin Koh. Other team members include Sampath Perumal, a post-doctoral fellow with Parkin, as well as others from the University of Ottawa, Thompson River University, the National Research Council, and researchers from the United Kingdom and China.

"The genome assembly for black mustard that we have developed is a great example of how new Nanopore sequencing technology quickly reveals important genome biology," Sharpe said, noting that this advanced sequencing technology and capability is available to public and private plant breeding organizations through the OPAL at GIFS.

Credit: 
University of Saskatchewan

New study confirms the power of Deinosuchus and its 'teeth the size of bananas'

image: Deinosuchus schwimmeri (MMNS VP-256) skull. A, left lateral view. B, right lateral view. C, anterodorsal view demonstrating the unique orbital morphology and midline furrow of the skull table. Scale bar equals 5 cm.

Image: 
Adam Cossette

A new study, revisiting fossil specimens from the enormous crocodylian, Deinosuchus, has confirmed that the beast had teeth “the size of bananas”, capable of taking down even the very largest of dinosaurs.

And, it wasn’t alone!

The research, published in the Journal of Vertebrate Paleontology, also reveals several kinds of “terror crocodile”. Two species, named Deinosuchus hatcheri and Deinosuchus riograndensis, lived in the west of North America, ranging from Montana to northern Mexico. Another, Deinosuchus schwimmeri, lived along the Atlantic coastal plain from New Jersey to Mississippi. At the time, North America was cut in half by a shallow sea extending from the Arctic Ocean south to the present-day Gulf of Mexico.

Ranging up to 33 feet in length, Deinosuchus has been known to be one of the largest, if not the largest, crocodylian genera ever in existence. It was the largest predator in its ecosystem, outweighing even the largest predatory dinosaurs living alongside it between 75 and 82 million years ago.

From previous studies of cranial remains and bite marks on dinosaur fossil bones, paleontologists have long speculated that the massive beasts preyed on dinosaurs.

Now this new study, led by Dr Adam Cossette sheds new light on the monstrous creature and has further confirmed that Deinosuchus most certainly had the head size and crushing jaw strength to do just that.

“Deinosuchus was a giant that must have terrorized dinosaurs that came to the water’s edge to drink,” says Dr Cossette, from the New York Institute of Technology College of Osteopathic Medicine at Arkansas State University. “Until now, the complete animal was unknown. These new specimens we’ve examined reveal a bizarre, monstrous predator with teeth the size of bananas.”

Deinosuchus seems to have been an opportunistic predator, and given that it was so enormous, almost everything in its habitat was on the menu.

There are multiple examples of bite marks made by D. riograndensis and a species newly described in this study, D. schwimmeri, on turtle shells and dinosaur bones.

In spite of the genus’s name, which means “terror crocodile,” they were actually more closely related to alligators. Based on its enormous skull, it looked like neither an alligator nor a crocodile. Its snout was long and broad, but inflated at the front around the nose in a way not seen in any other crocodylian, living or extinct. The reason for its enlarged nose is unknown.

“It was a strange animal,” says co-author Professor Christopher Brochu a palaeontologist, from the University of Iowa. “It shows that crocodylians are not ‘living fossils’ that haven’t changed since the age of dinosaurs. They’ve evolved just as dynamically as any other group.”

Deinosuchus disappeared before the main mass extinction at the end of the age of dinosaurs (Mesozoic). The reason for its extinction remains unknown. From here, the authors call for more studies to further understand Deinosuchus.

“Two large holes are present at the tip of the snout in front of the nose,” Dr Cossette says.

“These holes are unique to Deinosuchus, and we do not know what they were for. Further research down the line will hopefully help us unpick this mystery, and we can learn more about this incredible creature.”

Credit: 
Taylor & Francis Group

Explosive nuclear astrophysics

image: Photograph of GRETINA in ATLAS at Argonne.

Image: 
Argonne National Laboratory

Analysis of meteorite content has been crucial in advancing our knowledge of the origin and evolution of our solar system. Some meteorites also contain grains of stardust. These grains predate the formation of our solar system and are now providing important insights into how the elements in the universe formed.

Working in collaboration with an international team, nuclear physicists at the U.S. Department of Energy’s (DOE’s) Argonne National Laboratory have made a key discovery related to the analysis of “presolar grains” found in some meteorites. This discovery has shed light on the nature of stellar explosions and the origin of chemical elements. It has also provided a new method for astronomical research.

“Tiny presolar grains, about one micron in size, are the residue from stellar explosions in the distant past, long before our solar system existed,” said Dariusz Seweryniak, experimental nuclear physicist in Argonne’s Physics division. The stellar debris from the explosions eventually became wedged into meteorites that crashed into the Earth.

The major stellar explosions are of two types. One called a “nova” involves a binary star system, where a main star is orbiting a white dwarf star, an extremely dense star that can be the size of Earth but have the mass of our sun. Matter from the main star is continually being pulled away by the white dwarf because of its intense gravitational field. This deposited material initiates a thermonuclear explosion every 1,000 to 100,000 years, and the white dwarf ejects the equivalent of the mass of more than thirty Earths into interstellar space. In a “supernova,” a single collapsing star explodes and ejects most of its mass.

Novae and supernovae are the most frequent and violent stellar eruptions in our galaxy, and for that reason they have been the subject of intense astronomical investigation for decades. Much has been learned from them, for example about the origin of the heavier elements.

“A new way of studying these phenomena is analyzing the chemical and isotopic composition of the presolar grains in meteorites,” explained Seweryniak. “Of particular importance to our research is a specific nuclear reaction that occurs in nova and supernova — proton capture on an isotope of chlorine — which we can only indirectly study in the lab.”

In conducting their research, the team pioneered a new approach for astrophysics research. It entails use of the Gamma-Ray Energy Tracking In-beam Array (GRETINA) coupled to the Fragment Mass Analyzer at the Argonne Tandem Linac Accelerator System (ATLAS), a DOE Office of Science User Facility for nuclear physics. GRETINA is a state-of-the-art detection system able to trace the path of gamma rays emitted from nuclear reactions. It is one of only two such systems in the world.

Using GRETINA, the team completed the first detailed gamma-ray spectroscopy study of an astronomically important nucleus of an isotope, argon-34. From the data, they calculated the nuclear reaction rate involving proton capture on a chlorine isotope (chlorine-33).

“In turn, we were able to calculate the ratios of various sulfur isotopes produced in stellar explosions, which will allow astrophysicists to determine whether a particular presolar grain is of nova or supernova origin,” said Seweryniak. The team also applied their acquired data to gain deeper understanding of the synthesis of elements in stellar explosions.
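The classification Seweryniak describes can be sketched in code. This is a hypothetical illustration only: the isotope-ratio ranges below are placeholder values, not numbers from the study, and the function name is invented for the example.

```python
# Hypothetical sketch: classify a presolar grain as nova- or supernova-origin
# by comparing its measured sulfur isotope ratio against model-predicted ranges.
# The ranges below are illustrative placeholders, not values from the study.

def classify_grain(ratio_33s_32s, nova_range=(1.5e-2, 6.0e-2),
                   supernova_range=(1.0e-4, 8.0e-3)):
    """Return the likely origin of a grain given its 33S/32S ratio."""
    in_nova = nova_range[0] <= ratio_33s_32s <= nova_range[1]
    in_sn = supernova_range[0] <= ratio_33s_32s <= supernova_range[1]
    if in_nova and not in_sn:
        return "nova"
    if in_sn and not in_nova:
        return "supernova"
    return "ambiguous"  # ratio consistent with both (or neither) scenario

print(classify_grain(3.0e-2))  # falls only inside the nova range
```

In practice the predicted ranges would come from the nuclear reaction rates measured with GRETINA, fed into models of nova and supernova nucleosynthesis.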

The team is planning to continue their research with GRETINA as part of a worldwide effort to reach a comprehensive understanding of nucleosynthesis of the elements in stellar explosions.

Credit: 
DOE/Argonne National Laboratory

Evolutionary assimilation of foreign DNA in a new host

image: Schematic of the experimental workflow. Native E. coli glycolytic isomerases pgi and tpiA were replaced with the coding sequence of foreign orthologues and subjected to laboratory evolution for improved exponential phase growth rate. Ma, million years ago.

Image: 
Palsson Lab

All life is subject to evolution in the form of mutations that change the DNA sequence of an organism's offspring, after which natural selection allows the 'fittest' mutants to survive and pass on their genes to future generations. These mutations can generate new abilities in a species, but another common driving force for evolution is horizontal gene transfer (HGT) - the acquisition of DNA from a creature other than a parent, and even of a different species. For example, a significant amount of the human genome is actually viral DNA. Genetic engineering techniques now allow humans to intentionally induce HGT in various species to create 'designer organisms' capable of things like renewable chemical production, but it's often difficult to get foreign DNA working in a new host.

Bioengineers at the University of California San Diego used genetic engineering and laboratory evolution to test the functionality of DNA placed into a new species and study how it can mutate to become functional if given sufficient evolutionary time. They published their results on August 10 in Nature Ecology and Evolution.

Using the model bacterium Escherichia coli as a host, bioengineers in Professor Bernhard Palsson's Systems Biology Research Group used CRISPR to generate gene-swapped strains with donor DNA from species across the tree of life -- from close bacterial relatives, to a microbe that lives in boiling hot springs, to humans. They replaced the genes pgi and tpiA, which encode two enzymes involved in sugar metabolism; removing either cripples E. coli, causing it to grow about five times slower. They then used an 'evolution machine,' a set of robotic systems, to study how the engineered bacteria adapted to replacement of such important genes with foreign versions. The automated systems enabled a large-scale study, generating hundreds of mutant strains evolved for more than 50,000 cumulative generations, something that would take decades rather than months if performed manually.

Moreover, culture growth rates could be tracked in real time as the populations evolved, allowing mutant strains to be isolated immediately after they took over the population from the ancestral strain. This high temporal resolution regularly yielded strains that differed across their entire genome by only single mutations of interest, revealing not only the order in which mutations were acquired but also providing an easy way to test their effects without laborious additional rounds of genetic engineering of the ancestral strains.
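The real-time growth tracking described above rests on a standard calculation: during exponential growth, the logarithm of culture density rises linearly with time, so the growth rate is the slope of that line. A minimal sketch (not the lab's actual pipeline; the synthetic OD600 readings are invented for illustration):

```python
import math

def growth_rate(times_h, ods):
    """Least-squares slope of ln(OD) vs. time: the exponential growth rate (1/h)."""
    logs = [math.log(od) for od in ods]
    n = len(times_h)
    t_mean = sum(times_h) / n
    y_mean = sum(logs) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in zip(times_h, logs))
    den = sum((t - t_mean) ** 2 for t in times_h)
    return num / den

# A culture doubling every hour has growth rate ln(2) ~ 0.693 per hour.
ods = [0.05 * 2 ** t for t in range(6)]  # synthetic OD600 readings at t = 0..5 h
print(round(growth_rate(list(range(6)), ods), 3))  # prints 0.693
```

A sudden, sustained jump in this slope is the signal that a faster-growing mutant has taken over the population, which is when a strain would be pulled for sequencing.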

Although E. coli initially couldn't use most of the foreign genes it was given, it quickly and frequently found an evolutionary way around this, often recovering from its crippled state in a matter of days to grow just as fast as before it was engineered. Notably, the foreign genes were not codon optimized before insertion into E. coli. Codon optimization is regularly performed during synthetic HGT and relies on the redundancy of the genetic code: DNA encodes a protein's string of amino acids via three-letter codons, and most amino acids are specified by more than one codon (e.g., lysine is coded for by AAA or AAG). Different species show different genome-wide trends in codon usage, and codon optimization rewrites an inserted gene to match the preferences of its new host. Here, however, this was unnecessary to enable functionality, even for human DNA, which has been evolutionarily diverging from E. coli for billions of years.
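The codon redundancy underlying optimization can be shown in a few lines. This is a minimal illustration, assuming a toy codon table covering only the codons used here (the sequences are invented, not from the study):

```python
# Two DNA sequences that differ only in synonymous codons encode the same
# peptide. The table covers just the codons used below, not the full code.
CODON_TABLE = {
    "AAA": "K", "AAG": "K",   # lysine: two synonymous codons
    "GGT": "G", "GGC": "G",   # glycine
    "TTT": "F", "TTC": "F",   # phenylalanine
}

def translate(dna):
    """Translate a DNA coding sequence codon-by-codon into amino acids."""
    return "".join(CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna), 3))

native = "AAAGGTTTT"      # codon usage typical of the donor species (hypothetical)
optimized = "AAGGGCTTC"   # same peptide, codons re-chosen for a new host
print(translate(native), translate(optimized))  # both print KGF
```

The UC San Diego result is that skipping this re-coding step still allowed the foreign genes to become functional after evolution, because the decisive mutations acted on expression level rather than codon choice.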

For every strain that successfully evolved use of the foreign DNA, the critical factor was one or more mutations increasing the gene's expression level. Most of these mutations did not even occur within the foreign gene but rather in the regions of E. coli's DNA controlling its regulation, with their nature depending sensitively on the gene's specific DNA sequence and its location on the chromosome. Some of these mutations occurred with striking regularity, including one observed independently more than 20 times, demonstrating that evolutionary outcomes can be (probabilistically) predicted down to the single DNA base pair.

Of the few mutations occurring within the foreign DNA, most were at the beginning of the gene and 'silent' in nature, changing the codon but not the resulting amino acid. Such mutations are often assumed to have negligible impact on cell fitness (at least when confined to a single codon rather than spread across the entire gene, as in codon optimization), but the researchers found them to have significant impact. Thermodynamic modeling revealed that these mutations prevent the gene's mRNA transcript from folding into knotted structures that limit the amount of protein ribosomes can produce from it. Finally, the hundreds of evolved strains contained more than 90 distinct mutations in the RNA polymerase complex that produces mRNA transcripts from the DNA sequence. Such mutations are common in laboratory evolution experiments, but the large dataset revealed that they cluster into distinct regions depending on how the strain carrying them evolved. This points to evolutionarily conserved regulatory strategies for rapidly adapting to metabolic perturbations such as the ones induced in this study.

"This result shows the importance of systems biology," said UC San Diego bioengineering professor Bernhard Palsson, the principal investigator of the study. "Namely, biological function, in this case, is not so much about the parts of the cell, but how the parts come together to function as a system."

Overall, this study establishes the influence of various DNA and protein features on cross-species genetic interchangeability and evolutionary outcomes, with implications for both natural HGT and strain design via genetic engineering.

Credit: 
University of California - San Diego

COVID-19 does not directly damage taste bud cells

image: Fungiform taste papillae, small structures or "bumps" found on the upper surface of the front two thirds of the tongue.

Image: 
UGA

A new study from the Regenerative Bioscience Center at the University of Georgia is the first to suggest that COVID-19 does not directly damage taste bud cells.

Contrary to previous studies that have shown damage may be caused directly by the virus particle, the researchers, led by Hongxiang Liu, associate professor of animal and dairy science in UGA's College of Agricultural and Environmental Sciences, found that taste loss is likely caused indirectly by events induced during COVID-19 inflammation.

An increasing number of COVID-19 patients have reported losses of smell and/or taste, prompting the CDC to add it to the growing list of symptoms for COVID-19. Recent research shows 20%-25% of patients now report a loss of taste.

"More alarming is the rate of patients reporting loss of taste at a later date, sometime after exposure to the virus," said Liu. "This is something we need to keep a careful eye on."

Published in ACS Pharmacology & Translational Science, the study further indicates that taste bud cells are not vulnerable to SARS-CoV-2 infection, because most of them do not express ACE2, a gateway that the virus uses to enter the body.

"This study isn't the first to study ACE2 expression in the oral cavity," said Liu. "But it is the first to show, specifically in relation to coronavirus and taste bud cell survival, that there are likely other cell death mechanisms at play."

Liu and her colleagues wanted to find out whether ACE2 was expressed specifically in taste bud cells, as well as when this receptor first emerges on oral tissue cells during fetal development, by studying mice as a model organism.

Although the mouse version of ACE2 isn't susceptible to SARS-CoV-2, studying where it's expressed in mice could still help clarify what happens when people become infected and lose their sense of taste, given that mice and humans share similar gene expression patterns.

"Mice have a different cellular copy of ACE2, making them impervious to SARS-CoV-2 infection," said Liu. "A logical first step was to genetically engineer a model to examine the ACE2 receptor expression in wild type mice, to provide insights into what happens in people."

By analyzing data from oral cells of adult mice, the researchers found that ACE2 was enriched in cells that give the tongue its rough surface, but couldn't be found in most taste bud cells. That means the virus probably does not affect taste loss through direct infection of these cells.

"It's clear from the data, that future designs of therapeutics directed at ACE2 receptors would likely not be as effective in treating taste loss of patients suffering from COVID-19," said Liu.

According to the team, more researchers have jumped into studying the coronavirus, and more data has been published on smell loss than on taste loss.

"Anosmia coronavirus research is being published at a faster pace," said Liu. "This is the only COVID-19 research that we know of, that involves the mechanisms of taste loss. Taste loss in the tongue is more complex and harder to validate, because of the complexity of cells, tissue structures, and the limited expression level of the ACE2 receptor."

Credit: 
University of Georgia

Biology blurs line between sexes, behaviors

Biological sex is typically understood in binary terms: male and female. However, there are many examples of animals that are able to modify sex-typical biological and behavioral features and even change sex. A new study, which appears in the journal Current Biology, identifies a genetic switch in brain cells that can toggle between sex-specific states when necessary, findings that question the idea of sex as a fixed property.

The research, led by Douglas Portman, Ph.D., an associate professor in the University of Rochester Department of Biomedical Genetics and the Del Monte Institute for Neuroscience, was conducted in C. elegans, a microscopic roundworm that has been used in labs for decades to understand the nervous system. Many of the discoveries made using C. elegans apply throughout the animal kingdom, and this research has led to a broader understanding of human biology. C. elegans is the only animal whose nervous system has been completely mapped, providing a wiring diagram - or connectome - that is helping researchers understand how brain circuits integrate information, make decisions, and control behavior.

There are two sexes of C. elegans, males and hermaphrodites. Though the hermaphrodites are able to self-fertilize, they are also mating partners for males, and are considered to be modified females. A single gene, TRA-1, determines the sex of these roundworms. If a developing worm has two X chromosomes, this gene is activated and the worm will develop into a female. If there is only one X chromosome, TRA-1 is inactivated, causing the worm to become a male.

The new study shows that the TRA-1 gene doesn't go completely silent in males, as had been previously thought. Instead, it can go into action when circumstances compel males to act more like females. Typically, C. elegans males prefer searching for mates over eating, in part because they can't smell food as well as females do. But if a male goes too long without eating, it dials up its ability to detect food and acts more like a female. The new research shows that TRA-1 is necessary for this switch; without it, hungry males can't enhance their sense of smell and stay locked in the default, food-insensitive mate-searching mode. TRA-1 does the same job in juvenile males, activating efficient food detection in males that are too young to search for mates.
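The switch logic described above can be summarized schematically. This is a logical sketch of the behavioral states reported in the study, not a biological model; the function and its arguments are invented for illustration:

```python
# Schematic of the feeding-behavior switch: hermaphrodites default to the
# food-sensitive state, while males default to mate searching. Hunger flips
# a male into the female-typical state only if TRA-1 is there to mediate it.

def food_sensitive(sex, hungry, tra1_functional=True):
    """Return True if the worm is in the food-sensitive (female-typical) state."""
    if sex == "hermaphrodite":
        return True
    # Male: hunger enhances food detection, but only via TRA-1.
    return hungry and tra1_functional

assert food_sensitive("hermaphrodite", hungry=False) is True
assert food_sensitive("male", hungry=False) is False        # mate-searching mode
assert food_sensitive("male", hungry=True) is True          # switch engaged
assert food_sensitive("male", hungry=True, tra1_functional=False) is False
```

The last assertion captures the study's key finding: without TRA-1, hungry males stay locked in the food-insensitive default.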

"These findings indicate that, at the molecular level, sex isn't binary or static, but rather dynamic and flexible," said Portman. "The new results suggest that aspects of the male nervous system might transiently take on a female 'state,' allowing male behavior to be flexible according to internal and external conditions."

A separate study, appearing in Current Biology from a team of collaborating researchers at Columbia University, further describes the complex molecular mechanism by which TRA-1 is controlled by sex chromosomes and other cues.

Credit: 
University of Rochester Medical Center

How to get more cancer-fighting nanoparticles to where they are needed

image: Researchers in Professor Warren Chan's lab. Ben Ouyang (second from top left) and team, under the supervision of Chan (top left), discovered the dose threshold that improves drug delivery to tumours.

Image: 
Ben Ouyang

University of Toronto Engineering researchers have discovered a dose threshold that greatly increases the delivery of cancer-fighting drugs into a tumour.

Determining this threshold provides a potentially universal method for gauging nanoparticle dosage and could help advance a new generation of cancer therapy, imaging and diagnostics.

"It's a very simple solution, adjusting the dosage, but the results are very powerful," says MD/PhD candidate Ben Ouyang, who led the research under the supervision of Professor Warren Chan.

Their findings were published today in Nature Materials, providing solutions to a drug-delivery problem raised by Chan and fellow researchers four years ago in Nature Reviews Materials.

Nanotechnology carriers are used to deliver drugs to cancer sites, which in turn can help a patient's response to treatment and reduce adverse side effects, such as hair loss and vomiting. However, in practice, few injected particles reach the tumour site.

In the Nature Reviews Materials paper, the team surveyed literature from the past decade and found that a median of only 0.7 percent of injected chemotherapeutic nanoparticles make it into a targeted tumour.

"The promise of emerging therapeutics is dependent upon our ability to deliver them to the target site," explains Chan. "We have discovered a new principle of enhancing the delivery process. This could be important for nanotechnology, genome editors, immunotherapy, and other technologies."

Chan's team saw the liver, which filters the blood, as the biggest barrier to nanoparticle drug delivery. They hypothesized that the liver would have an uptake rate threshold -- in other words, once the organ becomes saturated with nanoparticles, it wouldn't be able to keep up with higher doses. Their solution was to manipulate the dose to overwhelm the organ's filtering Kupffer cells, which line the liver channels.

The researchers discovered that injecting a baseline dose of 1 trillion nanoparticles into mice, in vivo, was enough to overwhelm the Kupffer cells so that they couldn't take up particles quickly enough to keep pace with higher doses. The result is a 12 percent delivery efficiency to the tumour.
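The saturation idea can be illustrated with a simple toy model. This is an assumption-laden sketch, not the paper's model: it treats liver uptake as a Michaelis-Menten-style process with a fixed capacity (the 1e12-particle figure is borrowed from the mouse threshold above purely as a plausible scale):

```python
# Toy saturation model: the liver's Kupffer cells have a finite uptake
# capacity, so the FRACTION of an injected dose they capture falls as the
# dose grows past the threshold, freeing more particles for the tumour.

def liver_uptake_fraction(dose, capacity=1e12):
    """Fraction of the injected dose captured by a saturable liver."""
    absorbed = capacity * dose / (capacity + dose)  # saturating uptake
    return absorbed / dose

for dose in (1e10, 1e12, 1e14):
    print(f"dose {dose:.0e}: liver captures {liver_uptake_fraction(dose):.0%}")
```

Below the threshold the liver captures nearly everything; at the threshold it captures half; well above it, almost all particles slip past, which is the qualitative behavior the dose-threshold strategy exploits.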

"There's still lots of work to do to increase the 12 percent but it's a big step from 0.7," says Ouyang. The researchers also extensively tested whether overwhelming Kupffer cells led to any risk of toxicity in the liver, heart or blood.

"We tested gold, silica, and liposomes," says Ouyang. "In all of our studies, no matter how high we pushed the dosage, we never saw any signs of toxicity."

The team used this threshold principle to improve the effectiveness of a clinically used and chemotherapy-loaded nanoparticle called Caelyx. Their strategy shrank tumours 60 percent more when compared to Caelyx on its own at a set dose of the chemotherapy drug, doxorubicin.

Because the researchers' solution is a simple one, they hope to see the threshold having positive implications in even current nanoparticle-dosing conventions for human clinical trials. They calculate that the human threshold would be about 1.5 quadrillion nanoparticles.

"There's a simplicity to this method, and it reveals that we don't have to redesign the nanoparticles to improve delivery," says Chan. "This could overcome a major delivery problem."

Credit: 
University of Toronto Faculty of Applied Science & Engineering

Dietary control of the healing of injury-induced inflammation

Injuries induce the initiation of inflammation to control the damage. However, the resolution of the injury-induced inflammation leading to healing is not well characterized. This new article by researchers at the Inflammation Research Foundation suggests that the resolution process is under significant dietary control and thus can be optimized by using a highly defined systems-based nutritional approach.

In particular, a successful resolution of injury-induced inflammation requires the continuous balance of hormonal and genetic factors. The essential hormones involved in the process are eicosanoids derived from omega-6 fatty acids that need to be balanced by resolvins derived from omega-3 fatty acids. Likewise, the gene transcription factor NF-κB that controls inflammation must be offset by the activation of AMPK, which is the genetic master switch of metabolism and repair of damaged tissue.

"This balancing act of initiation of inflammation and its ultimate resolution that leads to healing is a systems-based approach," states Dr. Barry Sears, the President of the Inflammation Research Foundation. "The activation of resolution requires a sequential orchestration of reducing, resolving, and repairing of the injury-induced inflammation. Each step of the process can be either enhanced or inhibited by the diet."

The article outlines an appropriate calorie-restricted anti-inflammatory diet that is needed to reduce inflammation, the levels of omega-3 fatty acids required to resolve inflammation, and the levels of dietary polyphenols required to activate AMPK to repair the tissue damage caused by the inflammation. Furthermore, the appropriate blood markers to indicate success in optimizing each distinct phase of resolution are discussed in the article.

Since injuries occur at random, the mechanisms and dietary constraints required for their successful resolution and healing must be continually optimized. If not, unresolved inflammation may become permanent in the form of either fibrosis or the development of senescent cells, leading to earlier onset of chronic disease and acceleration of the aging process.

Credit: 
Bentham Science Publishers

Adaptive mutations repeat themselves in tiny crustaceans of Lake Baikal

image: The tree of parallel adaptive mutations

Image: 
Valentina Burskaia/Skoltech

A group of scientists from Skoltech and the Institute for Information Transmission Problems of RAS (IITP RAS) showed, using Lake Baikal amphipods as an example, that parallel evolution driven by adaptations can be detected at the whole-genome level. The research was published in the Genome Biology and Evolution journal.

Similar adaptations are sometimes known to result from exactly the same mutations that occurred independently. The phenomenon is commonly termed "parallel evolution" to describe evolution that keeps repeating itself. It is usually hard to prove that such "parallel" mutations did not occur by pure accident but actually help organisms to adapt to their environment. Thus far, adaptive parallel mutations have been found in some individual genes or small groups of interrelated genes only.

A team of Skoltech and IITP RAS researchers led by Georgii Bazykin, an evolutionary biologist and a professor at Skoltech, undertook extensive bioinformatics analysis of protein-coding sequences of 46 amphipod species from Lake Baikal. The scientists were eager to see whether closely related amphipods from a Baikalian species flock displayed an elevated rate of adaptive parallel evolution.

The research suggests that adaptive parallel mutations are more common than random parallel mutations in protein-coding sequences of Lake Baikal amphipods and actually affect several thousand genes. Drawing on the basic laws of molecular evolution, the scientists showed that the mutations they discovered were indeed caused by the species' need to adapt to the environment. However, the exact adaptations behind parallel evolution still remain a mystery.

"Lake Baikal is home to hundreds of species of endemic amphipods that evolved from several species in their distant ancestry and embrace a variety of ecological niches from predators to planktonic forms and parasites. Parallels were found even between forms with totally different lifestyles," says Valentina Burskaia, the first author of the study and a Skoltech PhD student.

Credit: 
Skolkovo Institute of Science and Technology (Skoltech)