
3D-printed, lifelike heart models could help train tomorrow's surgeons (video)

image: Researchers have developed a way to 3D print a full-size model of a patient's own heart.

Image: 
American Chemical Society

Full-size, realistic models of human organs could help surgeons train and practice before they cut into a patient. However, it has been challenging to make inexpensive models with the size, complexity and material properties of human organs. Now, researchers reporting in ACS Biomaterials Science & Engineering have developed a way to 3D print a full-size model of a patient's own heart. Watch a video of how they made the 3D organ here.

For complex heart surgeries, having a chance to plan and practice on a realistic model could help surgeons anticipate problems, leading to more successful outcomes. Current 3D printing techniques have been used to make full-size organ models, but the materials generally don't replicate the feel or mechanical properties of natural tissue. And soft, tissue-like materials, such as silicone rubbers, often collapse when 3D printed in air, making it difficult to reproduce large, complex structures. Eman Mirdamadi, Adam Feinberg and colleagues recently developed a technique, called freeform reversible embedding of suspended hydrogels (FRESH), which involves 3D printing soft biomaterials within a gelatin bath to support delicate structures that would otherwise collapse in air. However, the technique was previously limited to small objects, so the researchers wanted to adapt it to full-size organs.

The team's first step was to show that alginate, an inexpensive material made from seaweed, has material and mechanical properties similar to those of cardiac tissue. Next, the researchers placed sutures in a piece of alginate, which held even when stretched -- suggesting that surgeons could practice stitching up a heart model made from the material. In preparation for making the heart model, the team modified their FRESH 3D printer to make larger objects. They used this device and magnetic resonance imaging (MRI) scans from a patient to model and print a full-size adult human heart, as well as a section of coronary artery that they could fill with simulated blood. The heart model was structurally accurate, reproducible and could be handled outside of the gelatin bath. The method could also be applied to printing other realistic organ models, such as the kidneys or liver, the researchers say.

The authors acknowledge funding from the Office of Naval Research, the U.S. Food & Drug Administration and the National Institutes of Health.

The abstract that accompanies this paper can be viewed here.

The American Chemical Society (ACS) is a nonprofit organization chartered by the U.S. Congress. ACS' mission is to advance the broader chemistry enterprise and its practitioners for the benefit of Earth and its people. The Society is a global leader in providing access to chemistry-related information and research through its multiple research solutions, peer-reviewed journals, scientific conferences, eBooks and weekly news periodical Chemical & Engineering News. ACS journals are among the most cited, most trusted and most read within the scientific literature; however, ACS itself does not conduct chemical research. As a specialist in scientific information solutions (including SciFinder® and STN®), its CAS division powers global research, discovery and innovation. ACS' main offices are in Washington, D.C., and Columbus, Ohio.
 

Credit: 
American Chemical Society

Single-cell technique could provide 'egg health' indicators

Using the power of single-cell analysis, researchers at the Babraham Institute have assessed the effects of age on egg cells (oocytes) in mice, particularly looking to identify genomic and epigenetic factors that relate to reduced developmental competence. The knowledge uncovered by this research provides new insights into the mechanisms underlying egg quality and is relevant to the development of techniques to assess the quality of human egg cells, an area of growing importance as the use of fertility treatments increases. The research is published today in the journal Aging Cell.

The most recent figures from the UK's Office for National Statistics report that, for the 10th consecutive year, the average age of mothers in England and Wales has increased, reaching 30.6 years. Trends indicate that women are choosing to delay having children, with the number of children born to women aged 40 or above steadily increasing since 1978.

Societal factors aside, advancing maternal age causes a gradual reduction in fertility. "Why egg cells lose their developmental competence is something we don't fully understand, but it's likely to be due to a combination of factors," says Dr Gavin Kelsey, Head of the Epigenetics research programme at the Babraham Institute, who led this work.

The research used a cutting-edge single-cell technique developed at the Institute to obtain parallel read-outs of gene expression and DNA methylation (the addition of epigenetic marks that modify DNA without altering its sequence) from the same egg. The approach allowed a genome-wide analysis of each egg, and in addition to comparing eggs from younger and older mice it also allowed the researchers to explore variation between eggs from similarly aged mice.

Using this technique, the researchers were able to identify the characteristics of eggs with reduced developmental competence and distinguish eggs from older females that retained a young-like profile. In particular, eggs from older females had less active gene expression and showed greater egg-to-egg variability in expression levels. Epigenetic marks generally correlated between eggs taken from younger and older mice, providing reassurance that age does not affect key sites of DNA methylation in the genome. The researchers did find some genes showing coupled changes between gene activity and epigenetic marks, suggesting that epigenetics could be used as a readout for the gene activity quality of the egg.
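
As a rough illustration of how "egg-to-egg variability" in expression can be quantified, the sketch below compares per-gene variability between two groups of single-cell expression profiles. It uses simulated data and invented group labels; it is a minimal sketch of the idea, not the Babraham team's actual analysis pipeline.

    # Minimal sketch (simulated data, not the study's pipeline): compare
    # per-gene expression variability between eggs from young and aged mice.
    import numpy as np

    rng = np.random.default_rng(1)
    n_genes = 500
    young = rng.normal(loc=5.0, scale=0.5, size=(20, n_genes))  # 20 eggs x genes
    aged = rng.normal(loc=4.5, scale=0.9, size=(20, n_genes))   # lower, noisier expression

    # Egg-to-egg variability per gene, summarized as a coefficient of variation.
    cv_young = young.std(axis=0) / young.mean(axis=0)
    cv_aged = aged.std(axis=0) / aged.mean(axis=0)

    print(f"median CV, young eggs: {np.median(cv_young):.2f}")
    print(f"median CV, aged eggs:  {np.median(cv_aged):.2f}")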

"As far as we know, this is the first genome-wide assessment of DNA methylation at single-nucleotide resolution in eggs from aged female mice. Our development of methods that capture epigenetic information in single cells has enabled us to examine both the quality of epigenetic marks across the whole genome in single eggs and how individual eggs differ as a function of maternal age." explains Juan Castillo-Fernandez, a postdoctoral researcher in the Kelsey lab and joint first author on the paper. "We are particularly interested in epigenetic changes as they can be inherited into the embryo and predispose to later-onset problems in otherwise healthy-looking offspring."

"Our combined method for parallel read-outs from one cell gives a powerful tool, not only for opening up areas of biology to discover new knowledge but also for the application of both knowledge and techniques in the clinic. As demonstrated by this research, single-cell techniques and epigenetic analysis could be used to indicate the quality of an egg in terms of forming a healthy embryo after fertilisation." concludes Dr Kelsey.

Credit: 
Babraham Institute

16-year-old cosmic mystery solved, revealing stellar missing link

video: The Blue Ring Nebula consists of two expanding cones of debris. The base of one cone is moving toward Earth. Both bases are outlined in magenta, revealing shockwaves created as the debris races through space. Blue represents material behind the shockwave and is visible only where the cones overlap.

Image: 
NASA/JPL-Caltech/R. Hurt

Maunakea, Hawaii - In 2004, scientists with NASA's space-based Galaxy Evolution Explorer (GALEX) spotted an object unlike any they'd seen before in our Milky Way galaxy: a large, faint blob of gas with a star at its center. Though the blob doesn't actually emit light visible to the human eye, GALEX captured it in ultraviolet (UV) light, so it appeared blue in the images; subsequent observations also revealed a thick ring structure within it. The team nicknamed it the Blue Ring Nebula. Over the next 16 years, they studied it with multiple Earth- and space-based telescopes, including W. M. Keck Observatory on Maunakea in Hawaii, but the more they learned, the more mysterious it seemed.

A new study published online on Nov. 18 in the journal Nature may have cracked the case. By applying cutting-edge theoretical models to the slew of data that has been collected on this object, the authors posit the nebula - a cloud of gas in space - is likely composed of debris from two stars that collided and merged into a single star.

While merged star systems are thought to be fairly common, they are nearly impossible to study immediately after they form because they're obscured by debris kicked up by the collision. Once the debris has cleared - at least hundreds of thousands of years later - they're challenging to identify because they resemble non-merged stars. The Blue Ring Nebula appears to be the missing link: astronomers are seeing the star system only a few thousand years after the merger, when evidence of the union is still plentiful. It appears to be the first known example of a merged star system at this stage.

Operated between 2003 and 2013 and managed by NASA's Jet Propulsion Laboratory in Southern California, GALEX was designed to help study the history of star formation by observing young star populations in UV light. Most objects seen by GALEX radiated both near-UV (represented as yellow in GALEX images) and far-UV (represented as blue), but the Blue Ring Nebula stood out because it emitted only far-UV light.

The object's size was similar to that of a supernova remnant, which forms when a massive star runs out of fuel and explodes, or a planetary nebula, the puffed-up remains of a star the size of our Sun. But the Blue Ring Nebula had a living star at its center. Furthermore, supernova remnants and planetary nebulas radiate in multiple light wavelengths outside the UV range, whereas the Blue Ring Nebula did not.

PHANTOM PLANET

In 2006, the GALEX team looked at the nebula with the 5.1-meter Hale telescope at the Palomar Observatory in San Diego County, California, and then with the even more powerful 10-meter Keck Observatory telescopes. They found evidence of a shockwave in the nebula using Keck Observatory's Low Resolution Imaging Spectrometer (LRIS), suggesting the gas composing the Blue Ring Nebula had indeed been expelled by some kind of violent event around the central star.

"Keck's LRIS spectra of the shock front was invaluable for nailing down how the Blue Ring Nebula came to be," said Keri Hoadley, an astrophysicist at Caltech and lead author of the study. "Its velocity was moving too fast for a typical planetary nebula yet too slow to be a supernova. This unusual, in-between speed gave us a strong clue that something else must have happened to create the nebula."

Data from Keck Observatory's High-Resolution Echelle Spectrometer (HIRES) also suggested the star was pulling a large amount of material onto its surface. But where was the material coming from?

"The HIRES observations at Keck gave us the first evidence that the system was accreting material," said co-author Mark Seibert, an astrophysicist with the Carnegie Institution for Science and a member of the GALEX team at Caltech, which manages JPL. "For quite a long time we thought that maybe there was a planet several times the mass of Jupiter being torn apart by the star, and that was throwing all that gas out of the system. Though the HIRES data appeared to support this theory, it also told us to be wary of that interpretation, suggesting the accretion may have something to do with motions in the atmosphere of the central star."

To gather more data, in 2012, the GALEX team used NASA's Wide-field Infrared Survey Explorer (WISE), a space telescope that studied the sky in infrared light, and identified a disk of dust orbiting closely around the star. Archival data from three other infrared observatories also spotted the disk. The finding didn't rule out the possibility that a planet was also orbiting the star, but eventually the team would show that the disk and the material expelled into space came from something larger than even a giant planet. Then in 2017, the Hobby-Eberly Telescope in Texas confirmed there was no compact object orbiting the star.

More than a decade after discovering the Blue Ring Nebula, the team had gathered data on the system from four space telescopes, four ground-based telescopes, historical observations of the star going back to 1895 (in order to look for changes in its brightness over time), and the help of citizen scientists through the American Association of Variable Star Observers (AAVSO). But an explanation for what had created the nebula still eluded them.

STELLAR SLEUTHING

When Hoadley began working with the GALEX science team in 2017, "the group had kind of hit a wall" with the Blue Ring Nebula, she said. But Hoadley was fascinated by the thus-far unexplainable object and its bizarre features, so she accepted the challenge of trying to solve the mystery. It seemed likely that the solution would not come from more observations of the system, but from cutting-edge theories that could make sense of the existing data. So Chris Martin, principal investigator for GALEX at Caltech, reached out to Brian Metzger of Columbia University for help.

As a theoretical astrophysicist, Metzger makes mathematical and computational models of cosmic phenomena, which can be used to predict how those phenomena will look and behave. He specializes in cosmic mergers - collisions between a variety of objects, whether they be planets and stars or two black holes.

"It wasn't just that Brian could explain the data we were seeing; he was essentially predicting what we had observed before he saw it," said Hoadley. "He'd say, 'If this is a stellar merger, then you should see X,' and it was like, 'Yes! We see that!'"

The team concluded the nebula was the product of a relatively fresh stellar merger that likely occurred between a star similar to our Sun and another only about one tenth that size (or about 100 times the mass of Jupiter). Nearing the end of its life, the Sun-like star began to swell, creeping closer to its companion. Eventually, the smaller star fell into a downward spiral toward its larger companion. Along the way, the larger star tore the smaller star apart, wrapping itself in a ring of debris before swallowing the smaller star entirely.

This was the violent event that led to the formation of the Blue Ring Nebula. The merger launched a cloud of hot debris into space that was sliced in two by the gas disk. This created two cone-shaped debris clouds, their bases moving away from the star in opposite directions and getting wider as they travel outward. The base of one cone is coming almost directly toward Earth and the other almost directly away. They are too faint to see alone, but the area where the cones overlap (as seen from Earth) forms the central blue ring GALEX observed.

Millennia passed, and the expanding debris cloud cooled and formed molecules and dust, including hydrogen molecules that collided with the interstellar medium, the sparse collection of atoms and energetic particles that fill the space between stars. The collisions excited the hydrogen molecules, causing them to radiate in a specific wavelength of far-UV light. Over time, the glow became just bright enough for GALEX to see.

Stellar mergers may occur as often as once every 10 years in our Milky Way galaxy, meaning it's possible that a sizeable population of the stars we see in the sky were once two.

"We see plenty of two-star systems that might merge someday, and we think we've identified stars that merged maybe millions of years ago. But we have almost no data on what happens in between," said Metzger. "We think there are probably plenty of young remnants of stellar mergers in our galaxy, and the Blue Ring Nebula might show us what they look like so we can identify more of them."

Though this is likely the conclusion of a 16-year-old mystery, it may also be the beginning of a new chapter in the study of stellar mergers.

"It's amazing that GALEX was able to find this really faint object that we weren't looking for but that turns out to be something really interesting to astronomers," said Seibert. "It just reiterates that when you look at the universe in a new wavelength or in a new way, you find things you never imagined you would."

Credit: 
W. M. Keck Observatory

Computer vision predicts congenital adrenal hyperplasia

Researchers at the VISTA Center (Vision, Image, Speech and Text Analytics) at the USC Viterbi Information Sciences Institute (ISI), along with scholars at the Keck School of Medicine of USC and Children's Hospital Los Angeles (CHLA), have discovered strong correlations between facial morphology and congenital adrenal hyperplasia (CAH), a life-threatening genetic condition of the adrenal glands and one of the most common forms of adrenal insufficiency in children. The findings, which could have implications for phenotyping and treating patients with CAH, appeared today in JAMA Network Open.

Dr. Mimi Kim, associate professor of clinical pediatrics at the Keck School of Medicine of USC, suspected that facial features were affected by this condition. Based on her work, computer scientists at USC used artificial intelligence (AI) to generate facial models from iPad photos taken by doctors in clinic and then applied AI to analyze these images to distinguish differences between the facial structure of youth affected with CAH versus others without CAH.

The paper's lead author, Wael Abd-Almageed, who is an associate research professor in the USC Viterbi Department of Electrical and Computer Engineering and a Research Team Leader at USC ISI, says this breakthrough "can open up the door to better clinical outcomes and improving quality of life for patients." He says one can imagine that doctors in the future can use this tool to assess disease progression.

Until now, the link between CAH and facial morphology had gone unexplored, since the effects of CAH on facial structure are relatively subtle compared with those of other genetic conditions such as Down syndrome. Because of the subtlety of the differences in facial features and the difficulty of taking precise facial measurements by hand, the team used artificial intelligence to assist in the study.

According to Dr. Mimi Kim, a lead on the project and co-director of the CAH Comprehensive Care Clinic at CHLA, this discovery unlocks new areas of research that can help us better understand the disease. Kim began to notice common variations in facial features among patients with CAH at the center and, to confirm her suspicions, partnered with the USC ISI team to investigate. The discovery will not be used to identify or diagnose severe forms of CAH, which is screened for nationwide in all newborns. Rather, it opens the door for new clinical applications.

"Our first goal of this project was to find out whether differences in facial morphology can be identified in patients with CAH compared to unaffected individuals," said Hengameh Mirzaalian, a machine learning and computer vision scientist at ISI, and member of the CAH research team. The team used machine learning to train a computer to recognize individuals with CAH by analyzing an image of their face. This was accomplished by first showing the computer labeled images of faces of individuals both with and without CAH.

"Ultimately, understanding the distinct facial features that accompany CAH is an important step towards learning more about other issues associated with the condition, such as hormonal imbalances that begin in early pregnancy, and improving treatment," said Mirzaalian.

Credit: 
University of Southern California

Pandemic has surprising impacts on public transit demand

COLUMBUS, Ohio - The COVID-19 pandemic had surprising effects on demand for public transit in American cities, new research suggests.

While demand for public transit dropped about 73% across the country after the pandemic hit, the reduction didn't impact all cities equally, according to the study, which analyzed activity data from a widely used public transit navigation app.

Large, coastal cities - like Seattle, San Francisco and Washington, D.C. - saw demand fall further than cities in the Midwest and South. The reason had to do with the nature of jobs in different cities and who was actually using public transportation before the pandemic, said Luyu Liu, lead author of the study and a doctoral student in geography at The Ohio State University.

"Many of the people who used public transit in large, coastal cities could work remotely from home after the pandemic," Liu said.

"But in cities in the Midwest and the deep South, most public transit users have jobs where they still had to come in to work during the pandemic and didn't have any other choice."

Study co-author Harvey Miller, professor of geography at Ohio State, said that what we have called "essential workers" during the pandemic are the core users of public transit in these cities, which are often labeled as non-transit-dependent.

"These are the health care workers, people working service jobs, working in grocery stores, people who clean and maintain buildings," said Miller, who is also director of Ohio State's Center for Urban and Regional Analysis.

"It is a dramatic social equity story about who has to move during the pandemic."

The study was published today (Nov. 18, 2020) in the journal PLOS ONE.

Because of the difficulty of obtaining public transit ridership data on a national scale, the researchers took a different approach. They collected data on activity by users of the popular Transit mobile phone app, which provides real-time public transit data and trip planning.

The researchers used data on 113 county-level transit systems in 63 metro areas and 28 states across the United States. They examined data on app use from Feb. 15, right before widespread lockdowns were imposed because of the pandemic, up to May 17.

Overall, demand dropped about 73% after the pandemic started. But several factors were linked to which cities saw more or less decline in transit use.
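
A back-of-the-envelope sketch of the kind of calculation behind that figure: the decline is measured relative to a pre-pandemic baseline of app activity. The numbers below are placeholders chosen to reproduce the reported 73%, not data from the study.

    # Illustrative only: percent decline in transit demand from app-activity
    # counts relative to a pre-pandemic baseline (placeholder numbers).
    baseline_daily_sessions = 100_000   # average daily app sessions before Feb. 15
    pandemic_daily_sessions = 27_000    # average daily app sessions after lockdowns

    decline = 1 - pandemic_daily_sessions / baseline_daily_sessions
    print(f"demand decline: {decline:.0%}")   # prints "demand decline: 73%"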

The biggest factor was race: the larger a city's African American population, the smaller the decline in demand for public transit.

A large proportion of Black transit users were women. According to a report from the Transit app, more than 70% of African-American riders during the early pandemic were women.

Occupation also played a large role. Demand dropped more in cities with a higher percentage of people with non-physical occupations.

"People who can work at home avoided public transit," Liu said. "But people who cannot work at home and rely on public transit continue to use it."

Many of the people with physical jobs who continued to use public transit were Hispanic, the study showed. That is consistent with statistics that show the Hispanic population had the lowest percentage (22%) of management, professional and related occupations compared with white (41%), African American (31%), and Asian people (54%) in 2018.

Communities with larger populations of people over the age of 45 continued to have higher demand for public transit.

Finally, cities that showed higher levels of Google searches for the word "coronavirus" early in the pandemic showed greater drops in transit use, suggesting more people in those cities were worried about COVID-19.

The study showed that the pandemic changed daily use levels of public transit.

With many of the management employees who work traditional 9 to 5 jobs no longer going to their offices, the "rush hours" weren't so crowded.

"In some cities, there wasn't even a morning or afternoon peak anymore - and weekdays and weekends started to resemble each other more in terms of demand," Miller said.

"Many of these essential workers don't have traditional 9 to 5 schedules. Their work needs to get done at all hours, seven days a week."

The reliance of low-income essential workers on public transit is likely even stronger than this research suggests, according to the researchers. Because the data in this study came from use of the Transit app, it doesn't capture transit users who can't afford a smartphone or who don't use the app.

The researchers said the study revealed how important public transportation is in our cities, even those that aren't thought of as reliant on buses and subways.

"The people who are using public transit are those who need to come to work even when everything else is locked down. They have no choice. We need to build our public transit systems to serve these people," Miller said.

Added Liu: "Public transit shouldn't be treated as a business. It is part of our social welfare system that we need to support our essential workers."

Credit: 
Ohio State University

UTHSC researchers identify three drugs as possible therapeutics for COVID-19

image: Based on virtual and in vitro antiviral screening that began in the earlier months of the COVID-19 pandemic, the researchers led at UTHSC by Colleen Jonsson, PhD, identified zuclopenthixol, nebivolol, and amodiaquine as promising therapeutics for the virus in its early stages.

Image: 
UTHSC

Memphis, Tenn. (November 18, 2020)--Researchers at the University of Tennessee Health Science Center working with colleagues at the University of New Mexico have identified three drugs, already approved for other uses in humans, as possible therapeutics for COVID-19, the illness caused by the SARS-CoV-2 virus.

Based on virtual and in vitro antiviral screening that began in the earlier months of the COVID-19 pandemic, the researchers led at UTHSC by Colleen Jonsson, PhD, identified zuclopenthixol, nebivolol, and amodiaquine as promising therapeutics for the virus in its early stages.

Dr. Jonsson is a professor and the Endowed Van Vleet Chair of Excellence in Virology in the College of Medicine at UTHSC. She also directs the UTHSC Regional Biocontainment Laboratory (RBL), where this research was conducted. The university's RBL is one of roughly a dozen federally funded labs authorized to safely study contagious pathogens.

In a paper published in ACS Pharmacology & Translational Science, the researchers propose the drugs as possible candidates for testing in future clinical trials to improve immune response to the virus. Amodiaquine is an older antimalarial, zuclopenthixol is an antipsychotic, and nebivolol is a blood pressure medication.

"Particularly in the context of this pandemic, there is a stringent need for high-quality studies that can provide critical knowledge concerning the COVID-19 disease and reliable treatment proposals," the paper states. "With these caveats in mind, we conceived a computational workflow that included independent in vitro validation, followed by assessing emerging candidates in the context of available clinical pharmacology data with the aim of proposing suitable candidates for clinical studies for early stage (incubation and symptomatic phases) patients infected by SARS-CoV-2."

"Given the need for improved efficacy and safety, we propose zuclopenthixol, nebivolol, and amodiaquine as potential candidates for clinical trials against the early phase of the SARS-CoV-2 infection," the researchers wrote.

Comparing the drugs to hydroxychloroquine, the anti-malarial drug most frequently studied in clinical trials for use as a COVID-19 therapeutic, the researchers examined 4,000 approved drugs and found these three to act similarly to hydroxychloroquine, and in some cases more safely. The research indicates they may also improve efficacy when combined in lower doses with remdesivir, an antiviral given an emergency use authorization by the United States Food and Drug Administration as a therapeutic for COVID-19.

"Think of it as a whack-a-mole game," said Tudor Oprea, MD, PhD, professor of Medicine and Pharmaceutical Sciences, chief of the UNM Division of Translational Informatics, and corresponding author on the paper. "Instead of having one hammer, you have two hammers, which is more effective. We're trying to give the scientific community two hammers, instead of one."

Dr. Jonsson added, "This is a very exciting discovery and we are following up on the potential use of zuclopenthixol, nebivolol, and amodiaquine in additional research studies."

Credit: 
University of Tennessee Health Science Center

Controversy continues over '13 Reasons Why' and adolescent suicide

PHILADELPHIA - After its release in 2017, the Netflix series "13 Reasons Why" spurred controversy over concerns that its portrayal of a teenage girl's suicide could increase suicide contagion among adolescents.

Though a much-publicized 2019 study found a contagion effect among boys, a subsequent reanalysis of that data by the Annenberg Public Policy Center (APPC) of the University of Pennsylvania concluded that, to the contrary, the series had no clear effect on teen suicide.

Now, in a pair of commentaries published in PLOS ONE, the original authors challenged the APPC reanalysis and APPC research director Daniel Romer defended his critique.

"We stand by our reanalysis," Romer said. "There is no reason or evidence to suggest that the show had an effect before it was even released. And as the authors of the study acknowledged, one would expect the show to have a strong effect for female adolescents, which was not found."

The debate over '13 Reasons Why'

In their 2019 paper, Bridge et al. claimed to find an increase in suicide among 10- to 17-year-old boys over a period as long as 10 months, starting the month before Netflix released the series. But an APPC reanalysis of that data, published early in 2020, failed to detect any reliable increase in suicide among girls and found an increase for boys only in the month before and the month after the release in April 2017. Another study using the same methodology also found an increase for males in March and April and no significant effect on females in April, consistent with APPC's findings.

In their new PLOS commentary, Bridge et al. responded that Netflix "was actively broadcasting advertisements and series' trailers" in March 2017 "that targeted youth and encouraged them to watch this dramatization of an adolescent girl's suicide." But Romer finds "considerable evidence" that the show "did not create concerns about contagion until April," citing other independent analyses that focused on April as the point at which Google searches and crisis-line discussions began to rise. The study of crisis lines, for instance, "found no change in trend the month before the release and a sharp decrease" shortly after the release of the series.

"Thus," Romer argued, "there was no evidence that the series produced anywhere near the attention that would have been required to produce contagion in the month prior to its release and, if anything, the series coincided with a decline in crisis conversations that followed its release."

Romer said that if one were to predict a contagion effect from the series, it would be for young females. In his reanalysis, Romer found a modest but statistically unreliable increase in suicide in April among girls that was unique to that month.

"Unfortunately, looking at aggregate monthly suicide rates is not a very sensitive method for detecting either the harmful or helpful effects of media depictions of suicide," Romer said. In a separate study, Romer and colleagues found that viewing the second season of "13 Reasons Why" may have had beneficial effects on some young viewers and harmful effects on other viewers. These opposing effects make it difficult to determine whether the potentially harmful effect for some female adolescents was counterbalanced by beneficial effects for others, he said.

In his original reanalysis, Romer said the Bridge study failed to account for ongoing trends in adolescent suicide, in particular a strong rise in 2017. In their new commentary, Bridge et al. defended the use of their analytic model, but Romer responded that their model "seriously underestimated the upward trend in overall suicide ..."

Understanding media effects

Romer said it is important to gain a better understanding of how shows like "13 Reasons Why" affect vulnerable audiences so that television producers can develop entertaining and helpful programming without creating adverse effects on viewers.

In his current commentary, Romer concluded, "In view of our still limited knowledge about how these events affect vulnerable audiences, we should resist drawing bold conclusions about effects that defied predictions about both the gender of the victims and the time when the effect should appear."

Credit: 
Annenberg Public Policy Center of the University of Pennsylvania

Air pollution costs Utahns billions annually and shortens life expectancy by two years

Air pollution has been a problem in Utah since before the territory was officially recognized as a state. The mountain valleys of this high elevation region are particularly vulnerable to the buildup of air pollution from vehicles, household heating and power production. Together with high per-capita energy use, this has resulted in periods of poor air quality. However, with so many types of pollution and regional conditions, determining the overall effects of air pollution on Utah's health and economy has been a major challenge. A new study from 23 Utah-based researchers, including five from the University of Utah, sought to do just that.

The study estimated that air pollution shortens the life of the average Utahn by around 2 years. And pollution costs Utah's economy around $1.9 billion annually. But many state-level actions, such as increasing vehicle and building efficiency, could reduce air pollution by double-digit percentages while benefitting the economy, the researchers found.

The team used an approach called expert assessment, which combines all available research and experience from published and unpublished scientific studies. Combining expertise from public health, atmospheric science and economics, the researchers assessed what types of disease and economic harm could stem from Utah's air pollution. The study was published in the peer-reviewed journal Atmosphere in a special issue on air quality in Utah.

They estimated that air pollution in Utah causes between 2,500 and 8,000 premature deaths each year, decreasing the median life expectancy of Utahns by 1.1 to 3.6 years. This loss of life expectancy is distributed across most of the population, they found, rather than only affecting "sensitive groups." For example, 75% of Utahns may lose 1 year of life or more because of air pollution and 23% may lose 5 years or more.

This substantial health burden is caused by many illnesses and conditions that most people might not associate with air pollution. For example, exposure to particulates and other pollutants increases occurrence of heart and lung diseases, including congestive heart failure, heart attack, pneumonia, COPD and asthma. These conditions account for 62% of the pollution impact on health, according to this study. The remaining 38% of health effects are associated with stroke, cancer, reproductive harm to mothers and children, mental illness, behavioral dysfunction, immune disease, autism and other conditions--all exacerbated by exposure to dirty air.

On the economic side, the researchers estimated that the direct and indirect costs of air pollution total around $1.9 billion (in the range of $0.75 billion to $3.3 billion) annually for Utahns. This economic damage results from direct effects such as healthcare expenses, damage to crops and lost earning potential, in addition to indirect costs such as loss of tourism, decreased growth and regulatory burdens.

"It was a real eye-opener to see quantitative estimates of how serious the health and economic costs of air pollution are for the people of Utah," said Isabella Errigo, lead author and a graduate student at Brigham Young University. "The consequences of dirty air can seem very abstract until you read the medical research connecting the quality of our environment to our personal health."

Even though the estimates of cost in this study are on the low end of national estimates, which range up to $9 billion a year for Utah, they are still much higher than figures commonly discussed in the legislature. For example, approximately $10 million was appropriated to clean Utah's air this year, representing only 0.1% to 0.5% of the costs of air pollution.
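
The "0.1% to 0.5%" comparison follows directly from the figures quoted above; a quick worked check, using only the article's rounded numbers:

    # Rough check of the appropriation-vs-cost comparison (article's rounded figures).
    appropriation = 10e6                 # ~$10 million appropriated this year for clean air
    cost_low, cost_high = 1.9e9, 9e9     # study's central estimate vs. high-end estimate for Utah

    print(f"{appropriation / cost_high:.1%}")  # ~0.1%
    print(f"{appropriation / cost_low:.1%}")   # ~0.5%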

"Utahns understand that air pollution imposes large hidden costs on our communities which is why it's consistently ranked as a top concern," said Logan Mitchell, a research assistant professor at the U and a co-author of the study. "Thankfully, innovation has made clean energy technologies cost competitive on the market, without even considering those hidden costs. The coming energy transition will mean being good stewards of the environment will also protect our economy."

The mismatch between the size of the problem and the proposed solutions emphasizes one of the central findings from the study: cleaning the air could have immense health and economic benefits for Utah. The authors combined their estimates of cost with the air pollution goals from the recent Utah Roadmap to Clean Air, produced by the U's Kem C. Gardner Policy Institute. If Utah achieves the roadmap's pollution reduction targets, they estimate, Utah could save $500 million per year by 2030 and $1.1 billion per year by 2050.

"The payoff of reducing pollution would be huge in economic terms and the benefits would be incalculable in terms of human life and health," said senior author Ben Abbott, an assistant professor at BYU. "It's a question of choice. Are we going to settle for incremental progress in air quality or take advantage of this immense opportunity to improve the health of our communities and remove this enormous drag on our economy?"

"When I read these results, my thoughts immediately turn to my friends and family who live in Utah," said co-author Rebecca Frei, a graduate student at the University of Alberta. "My grandmother goes walking and my niece plays on the playground every day. Changing some simple things about how we operate means added years of life. To me, that's a no-brainer. This isn't about pushing an agenda, this is about assessing the evidence and acting out of love for our families and community."

The researchers ranked more than 30 recommendations for how best to reduce air pollution in Utah. At the top of the list: increase the efficiency of vehicles and buildings, invest in awareness, remove subsidies for nonrenewable energy, require payment for pollution and expand alternative transportation. They estimated that each of these interventions could result in double-digit decreases in air pollution. The researchers suggested that changes at the state and community levels would be the most effective and tractable.

The researchers cautioned that no single change would achieve the desired improvement in air quality alone. "We need long-term implementation of proven pollution control measures," Errigo said. "It's going to take commitment from multiple groups at city to state levels to clean up our air and prepare for future growth."

The findings of this study are directly in line with the recommendations of the Utah Roadmap to Clean Air and add quantitative estimates of the health and economic costs. The researchers hope that these estimates provide additional context for state legislators and concerned citizens who want to enact positive change.

"In our efforts to clear the air there are no perfect answers, but there are practical solutions," said Thom Carter, Executive Director of the Utah Clean Air Partnership (UCAIR) and co-author on the study. "When looking at how poor air quality impacts our region, it is important to know that we are making progress and that each person, family, organization, and community can find ways to reduce emissions and improve our quality of life."

Credit: 
University of Utah

In the lab, St. Jude scientists identify possible COVID-19 treatment

image: (L-R) Bhesh Sharma, Ph.D., Thirumala-Devi Kanneganti, Ph.D., Rajendra Karki Ph.D., of the Kanneganti Lab at St. Jude Children's Research Hospital

Image: 
St. Jude Children's Research Hospital

The COVID-19 pandemic continues to cause significant illness and death while treatment options remain limited. St. Jude Children's Research Hospital scientists have discovered a potential strategy to prevent life-threatening inflammation, lung damage and organ failure in patients with COVID-19. The research appeared online in the journal Cell.

The scientists identified the drugs after discovering that the hyperinflammatory immune response associated with COVID-19 leads to tissue damage and multi-organ failure in mice by triggering inflammatory cell death pathways. The researchers detailed how the inflammatory cell death signaling pathway worked, which led to potential therapies to disrupt the process.

"Understanding the pathways and mechanism driving this inflammation is critical to develop effective treatment strategies," said corresponding author Thirumala-Devi Kanneganti, Ph.D., vice chair of the St. Jude Department of Immunology. "This research provides that understanding. We also identified the specific cytokines that activate inflammatory cell death pathways and have considerable potential for treatment of COVID-19 and other highly fatal diseases, including sepsis."

COVID-19, cytokines, and inflammatory cell death

COVID-19 is caused by the SARS-CoV-2 virus. The infection has killed more than 1.2 million people in less than one year and sickened millions more.

The infection is marked by increased blood levels of multiple cytokines. These small proteins are secreted primarily by immune cells to ensure a rapid response to restrict the virus. Some cytokines also trigger inflammation.

The phrase cytokine storm has been used to describe the dramatically elevated cytokine levels in the blood and other immune changes that have also been observed in COVID-19, sepsis and inflammatory disorders such as hemophagocytic lymphohistiocytosis (HLH). But the specific pathways that initiate the cytokine storm and the subsequent inflammation, lung damage and organ failure in COVID-19 and the other disorders were unclear. A comprehensive cellular and molecular definition of cytokine storm was also lacking.

Kanneganti's team focused on a select set of the most elevated cytokines in COVID-19 patients. The scientists showed that no single cytokine induced cell death in innate immune cells.

The St. Jude investigators then tried 28 cytokine combinations and found just one duo that, working together, induced a form of inflammatory cell death previously described by Kanneganti as PANoptosis. The cytokines are tumor necrosis factor (TNF)-alpha and interferon (IFN)-gamma. PANoptosis is a unique type of cell death that features coordination of three different cell death pathways--pyroptosis, apoptosis and necroptosis. PANoptosis fuels inflammation through cell death, resulting in the release of more cytokines and inflammatory molecules.

The investigators showed that blocking individual cell death pathways was ineffective in stopping cell death caused by TNF-alpha and IFN-gamma. A closer look at proteins that make up the pathways identified several, including caspase-8 and STAT1, that were essential for PANoptosis in response to these cytokines. Deleting those proteins blocked PANoptosis in innate immune cells called macrophages.

Potential for repurposing TNF-alpha and IFN-gamma blockers to treat COVID-19

Because TNF-alpha and IFN-gamma are produced during COVID-19 and cause inflammatory cell death, the investigators questioned whether these cytokines were responsible for the clinical manifestations and deadly effects of the disease. They found that the TNF-alpha and IFN-gamma combination triggered tissue damage and inflammation that mirror the symptoms of COVID-19 along with rapid death.

Neutralizing antibodies against TNF-alpha and IFN-gamma are currently used to treat inflammatory diseases in the clinic. The investigators found that treatment with these antibodies protected mice from death associated with SARS-CoV-2 infection, sepsis, HLH and cytokine shock.

"The findings link inflammatory cell death induced by TNF-alpha and IFN-gamma to COVID-19," Kanneganti said. "The results also suggest that therapies that target this cytokine combination are candidates for rapid clinical trials for treatment of not only COVID-19, but several other often fatal disorders associated with cytokine storm."

Added co-first author Rajendra Karki, Ph.D., a scientist in the Kanneganti laboratory: "We were excited to connect these dots to understand how TNF-alpha and IFN-gamma trigger PANoptosis." Co-first author Bhesh Raj Sharma, Ph.D., a scientist in the Kanneganti laboratory, added: "Indeed, understanding how PANoptosis contributes to disease and mortality is critical for identifying therapies."

Redefining cytokine storm

Based on this fundamental research, Kanneganti and her colleagues have proposed a definition of cytokine storm that puts the cytokine-mediated inflammatory cell death via PANoptosis at the center of the process. The researchers noted that PANoptosis results in the release of more cytokines and inflammatory molecules, which intensifies systemic inflammation.

"We have solved a major piece of the cytokine storm mystery by characterizing critical factors responsible for initiating this process, and thereby identifying a unique combination therapy using existing drugs that can be applied in the clinic to save lives," Kanneganti said.

Credit: 
St. Jude Children's Research Hospital

Missing the radiological forest for the trees

There's a classic video demonstrating how our brains process information and allocate attention in which people bounce and pass basketballs and the viewer is asked to count the passes.

If you haven't seen it, go watch it here and then come back. Go ahead. I'll wait.

The experiment highlights a phenomenon called inattentional blindness. We can't pay attention to everything at once, so our brains have to filter information. In the situation in the video, the stakes were low. But what if inattentional blindness causes a radiologist, for example, to miss something obvious and serious?

A study from University of Utah researchers Lauren Williams, Trafton Drew and colleagues finds that even experienced radiologists, when looking for one abnormality, can completely miss another. The results, published in Psychonomic Bulletin & Review, show that inattentional blindness can befall even experts.

"Inattentional blindness reveals the limits of human cognition," says Williams, a recent U graduate and now a postdoctoral scholar at the University of California, San Diego, "and this research demonstrates that even highly trained experts are bound by the same machinery as everyone else."

"If even these experts miss these seemingly obvious findings," adds Drew, associate professor of psychology, "it suggests that this is something really critical we need to understand about how all of us perceive the world."

Missing the gorilla

By some estimates medical errors, including missed radiological abnormalities, are the third leading cause of death in the United States. "We've known for a long time that many errors in radiology are retrospectively visible," Drew says. "This means if something goes wrong with a patient, you can often go back to the imaging for that patient and see that there were visible signs--say, a lung nodule--on something like a chest CT."

So, in 2013, Drew and colleagues conducted an experiment to understand how trained experts could miss those clear signs. In that study, the authors presented radiologists with chest computed tomography (CT) scans and asked them to look for lung cancer nodules. But the authors had also placed an image of a gorilla into the scan--something that obviously doesn't belong in a lung. Drew and his colleagues found that 83% of radiologists did not notice the gorilla.

But that's a gorilla. How would the results be different, they wondered, if instead of a gorilla it was an abnormality that could plausibly come up on a CT scan?

So Williams, Drew and their colleagues from UCLA and Macquarie University set up another experiment. They asked 50 radiologists to evaluate seven chest CT scans for lung cancer, but this time the final scan included two clear abnormalities: a significant breast mass and a lymphadenopathy (an abnormal lymph node). Two-thirds of the radiologists did not notice the potentially cancerous mass. A third did not notice the lymphadenopathy.

"Like anyone that experiences inattentional blindness, I think many radiologists were simply surprised to learn they had missed something," says Williams, who administered the experiment. "Our intuition tells us that if something is fully visible, we'll detect it, but we've all experienced the feeling of missing important information that is retrospectively obvious when our attention is focused elsewhere."

Experience wasn't a factor in whether or not the radiologists noticed the abnormalities, the researchers found, suggesting that years of experience doesn't outweigh universal cognitive truths, and that missing the abnormalities isn't a reflection on the competence or skill of the radiologist.

"It suggests that understanding the situation that led to the missed abnormality may be far more important than focusing on the experience of the individuals that missed it," Williams says.

Seeing the gorilla

In a subsequent experiment, however, instead of asking the radiologists to look for lung cancer nodules, the researchers asked radiologists to look at the same scan and report on a broader range of abnormalities. This time, only 3% missed the mass and 10% missed the lymphadenopathy.

"There a huge amount of information in the ever-growing amount of data we gather on each patient, and what we actually notice depends very strongly on what you are looking for," Drew says. "Cataloging how often radiologists miss something in plain sight misses a really important piece of the puzzle: What were they looking for when they missed the thing in plain sight?"

"Our research demonstrates that focusing narrowly on one task may cause radiologists to miss unexpected abnormalities, even if those abnormalities are critical for patient outcomes," Williams adds. "However, focused attention is probably beneficial when the abnormalities match the radiologist's expectations." Any changes to clinical process would need to find the balance between the two, she says. Some possibilities might be a general assessment of a scan before looking for specific abnormalities, or using checklists to scan for commonly missed findings.

Would the use of artificial intelligence, which doesn't have the same cognitive limitations that humans do, resolve the problem of inattentional blindness? Not necessarily, Drew says. AI is only as good as its training and programming. Algorithms are good at finding narrowly defined abnormalities, he says, but not as good at detecting all possible findings on a scan.

"Radiologists might benefit from being thoughtful about what they are looking for rather assuming that if they see it they will perceive it," Drew says. "AI has in some ways, the same limitation: it's only going to be good at detecting what it has been taught to detect."

Williams says that advancements in radiological technology have produced increasingly clear medical imaging. "However, if radiologists frequently miss a large, clearly visible abnormality when their attention is focused on another task, it suggests that having a clear image is not enough."

Drew says the study can help us understand how we often find only what we're looking for.

"Everyone, even experts, can miss things that seem really obvious if we are not looking for them," he says. "If you've searched through your whole apartment for your phone, you might assume you would have noticed your keys during that search. Our research suggests a reason why you will probably have to search again specifically for the keys."

Credit: 
University of Utah

Diabetes, hypertension may increase risk of COVID-19 brain complications

image: Chest X-ray of a SARS-CoV-2-positive patient who presented with confusion and left-sided weakness; the image shows pneumonia in the lower lungs.

Image: 
Radiological Society of North America

OAK BROOK, Ill. - Some patients with COVID-19 are at higher risk of neurological complications like bleeding in the brain and stroke, according to a study being presented at the annual meeting of the Radiological Society of North America (RSNA). The researchers said these potentially life-threatening findings were more common in patients with hypertension and diabetes.

The virus that causes COVID-19 first attacks cells in the respiratory system, often leading to an inflammation of the lungs that puts people at risk of contracting pneumonia. But the virus' impact has also been felt in other systems of the body.

"COVID-19's effects extend far beyond the chest," said study lead author Colbey W. Freeman, M.D., chief resident in the Department of Radiology at Penn Medicine in Philadelphia. "While complications in the brain are rare, they are an increasingly reported and potentially devastating consequence of COVID-19 infection."

To learn more about the phenomenon, Dr. Freeman and colleagues in the Perelman School of Medicine at the University of Pennsylvania looked at COVID-19 patients who underwent head CT and/or MRI in their health system from January to April 2020. Of the 1,357 patients with COVID-19 admitted to the system in those four months, 81 had a brain scan performed. The most common reasons for the brain scans were altered mental state and focal neurologic deficits such as speech and vision problems.

Out of 81 patients with brain scans, 18, or just over one in five, had findings that were considered emergency or critical, including strokes, brain bleeds and blocked blood vessels. At least half the patients had pre-existing histories of high blood pressure and/or type 2 diabetes. Three patients with emergent/critical findings died while admitted.

"COVID-19 is associated with neurologic manifestations, and hypertension and type 2 diabetes mellitus are common in individuals who develop these manifestations," Dr. Freeman said. "These populations may be at higher risk for neurologic complications and should be monitored closely."

Two-thirds of the patients with critical results in the study were African American, suggesting that these patients also may require closer monitoring.

The exact mechanisms for COVID-19's harmful neurological effects are not known and may involve multiple factors, although a popular theory holds that inflammation associated with the infection is the primary culprit. In the study, blood markers of inflammation were high in people with critical results.

"When your body is in an inflammatory state, it produces all these molecules called cytokines to help recruit the immune system to perform its function," Dr. Freeman said. "Unfortunately, if cytokines are overproduced, the immune response actually starts doing damage."

The study is ongoing, Dr. Freeman said, and the researchers will continue to publish findings as more data comes in. They are also investigating the incidence of neurologic complications in COVID-19 patients on extracorporeal membrane oxygenation (ECMO), a pump system to circulate and replenish oxygen in the blood. Several patients in the study needed ECMO during their time at the hospital.

"In addition, we have plans to initiate a larger prospective study evaluating delayed, long-term, and chronic neurologic manifestations that may not be known in this early period in the pandemic," Dr. Freeman said.

Credit: 
Radiological Society of North America

Denmark trial measures effectiveness of adding a mask recommendation to other public health measures


Denmark trial measures effectiveness of adding a mask recommendation to other public health measures for preventing SARS-CoV-2 infection

HD video soundbites of the authors and Annals editors discussing the findings are available to download at http://www.dssimon.com/MM/ACP-danmask.
Abstract: https://www.acpjournals.org/doi/10.7326/M20-6817
Editorial: https://www.acpjournals.org/doi/10.7326/M20-7448
Editorial: https://www.acpjournals.org/doi/10.7326/M20-7499
URLs are live when the embargo lifts

A randomized trial of more than 6,000 participants in Denmark adds new evidence to what is known about whether masks protect the wearer from SARS-CoV-2 infection in a setting where public health measures, including social distancing, are in effect but others are not wearing masks. The DANMASK-19 trial randomized participants to follow those public health measures with or without an additional recommendation to wear a surgical mask when outside the home. Mask use outside of hospitals was uncommon in Denmark at the time. After 1 month of follow-up, 1.8% of participants in the mask group and 2.1% in the control group developed infection. While the evidence excludes a large personal protective effect of mask wearing, it weakly supports lesser degrees of protection, and cannot definitively exclude no effect. The findings are published in Annals of Internal Medicine.
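As a rough illustration of why a 1.8% versus 2.1% difference "weakly supports lesser degrees of protection" while not definitively excluding no effect, the short sketch below computes the risk difference and an approximate 95% confidence interval in Python. The per-arm sample sizes are not given in this summary, so the assumed split of roughly 3,000 participants per group is purely hypothetical and used only for illustration.

# Back-of-the-envelope check of the reported infection rates (1.8% mask vs. 2.1% control).
# The per-arm sample sizes are NOT stated in this summary; ~3,000 per arm is an assumed,
# hypothetical split of the 6,024 participants, used only to illustrate how wide the
# confidence interval around a small risk difference can be.
from math import sqrt

p_mask, p_control = 0.018, 0.021   # reported infection proportions
n_mask = n_control = 3000          # assumed (hypothetical) group sizes

risk_difference = p_mask - p_control
standard_error = sqrt(p_mask * (1 - p_mask) / n_mask + p_control * (1 - p_control) / n_control)
ci_low = risk_difference - 1.96 * standard_error
ci_high = risk_difference + 1.96 * standard_error

print(f"Risk difference: {risk_difference:+.2%}")
print(f"Approximate 95% CI: ({ci_low:+.2%}, {ci_high:+.2%})")
# Under these assumptions the interval spans roughly -1.0 to +0.4 percentage points,
# i.e. it is compatible both with a modest protective effect and with no effect at all.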

Researchers from Copenhagen University Hospital recruited 6,024 adults who spent at least 3 hours per day outside their homes, whose occupations did not require masks, and who had no previously known diagnosis of SARS-CoV-2 infection. Participants were randomized to the mask group or the control group, and those in the mask group were given a supply of surgical masks. All participants completed weekly surveys and antibody tests, underwent PCR testing if COVID-19 symptoms developed, and were tested again at 1 month. At the conclusion of the trial, infection rates were similar between the two groups.

Of note, Danish authorities did not recommend masks during the study period and their use in the community was uncommon. Public transportation and shops remained open and recommended public health measures included quarantine of persons with SARS-CoV-2 infection, social distancing, limiting the number of people seen, frequent hand hygiene and cleaning, and limited visitors to hospitals and nursing homes.

According to the study authors, their findings offer evidence about the degree of protection mask wearers can anticipate in a setting where others are not wearing masks and where other public health measures, including social distancing, are in effect. The findings, however, should not be used to conclude that a recommendation for everyone to wear masks in the community would not be effective in reducing SARS-CoV-2 infections, because the trial did not test the role of masks in source control (transmission from an infected person to others) of SARS-CoV-2 infection.

The editors of Annals of Internal Medicine chose to publish the DANMASK-19 trial because it is a well-designed study that provides an important piece of evidence toward understanding the puzzle of how to control the COVID-19 pandemic. They also note that the U.S. Centers for Disease Control and Prevention recently updated its guidance to acknowledge that masks, when worn by all, may reduce transmission by both source control and personal protection. They say that the DANMASK-19 trial does not conflict with these guidelines, but shows that any contribution to risk reduction through personal protection is likely to be less than through source control.

Credit: 
American College of Physicians

NO DRINKING! NO FIGHTING! The laws of early Edo Japan to keep the peace

image: A letter from the lord of the Hosokawa clan to the four vassals in charge stating the rules to be followed.

Image: 
Professor Tsuguharu Inaba

An early Edo period document stipulating the Hosokawa clan code of conduct for vassals dispatched on a national project to rebuild Sunpu Castle has been discovered by Kumamoto University researchers. The thirteen articles, issued by Tadaoki Hosokawa, then head of the Hosokawa clan in the Kokura domain, delegate full authority to the vassals to lead the construction work and to prevent conflicts with other clans. It is the second original code of conduct document related to the Sunpu Castle reconstruction effort to be discovered.

During the Edo period (1603-1867), the Japanese central government mobilized feudal lords from all over the country to build and repair important castles and carry out extensive infrastructure development. It is commonly believed that these national projects prevented clans from accumulating wealth by forcing them to send out materials and men, thereby establishing a system of control over their territories. Sunpu Castle, located in the center of Japan in Shizuoka Prefecture, was closely associated with the first shogun of the Edo Shogunate (Ieyasu Tokugawa) and was an important base for the shogunate. A major expansion of the castle was set back by a fire in December 1607, but the castle was quickly rebuilt by the following year. A number of daimyo clans from all over Japan were mobilized for this series of restoration projects.

The document found by researchers was issued by the head of the Hosokawa clan, Tadaoki Hosokawa, on January 8th, 1608, and outlined the Hosokawa clan code of conduct for vassals during reconstruction and their journey from Kokura (now northern Kyushu) to the Sunpu Castle reconstruction site (southwest of Tokyo). Consistent throughout this code is a strict prohibition of any action that might lead to quarrels with vassals or workers of other clans. Articles 9 and 10 delegated full authority of the renovation site to the four persons named and the superintendent in charge of the Hosokawa clan.

Article 1 instructs the entire Hosokawa workforce to follow the instructions of the superintendent, Masazumi Honda, an aide to the Shogun, in all matters of discipline. Article 2 stipulates that fighting within the clan must be strictly avoided; those engaged in fighting, as well as those who supported them, were to be punished (usually by death).

Articles 3 to 5 are provisions aimed at preventing fights with other clans. Going to watch another clan's fight was a punishable offense (Article 3). If a servant fled to another clan's house, he was not to be forcibly retrieved; conversely, those who had escaped from other clans were to be returned after the completion of the project (Article 4). Lodging fees from Kokura to Sunpu were to be paid in accordance with the "Gohatto" (laws and regulations) (Article 5).

The second half of the code provides a glimpse into the life of the foot soldiers (ashigaru) mobilized for the project. Alcohol (sake) was strictly limited: they could bring their own food (bento), but were not to drink more than three small flat sake cups (sakazuki) of alcohol (Article 6). When going to town, they were to declare the nature of their errand to the magistrate and obtain a permit (Article 7). Meetings with people from other clans or the shogunate were strictly forbidden (Article 8). Hot baths in another clan's facilities were not allowed (Article 11). Sumo wrestling and spectating were strictly forbidden during the period of the project, and violators would be punished (Article 12). On the round trip between Kokura and Sunpu, workers were to travel in groups as indicated on an attached sheet (Article 13). The purpose of this historical document was to maintain peace at the project site, and it vividly conveys aspects of samurai society during its transition from a time of war to one of peace and prosperity.

When asked about the academic significance of this document, Professor Tsuguharu Inaba said, "This discovery provides us with a great deal of information about the politics concerning feudal lord mobilization by the shogunate to build castles." Professor Inaba discovered the document and was part of the Eiseibunko Research Center team at Kumamoto University that deciphered it.

Until now, only two documents related to the reconstruction of Sunpu Castle were known: an original code of conduct written by Mori Terumoto, feudal lord of the Choshu clan, and another, a copy of the Choshu clan document, from Maeda Toshinaga, feudal lord of the Kaga clan. The discovery of this document, and the fact that the three documents are similar, suggests that each clan was likely presented with a general code of conduct framework by the shogunate. Each clan then established the rules in the name of its feudal lord and required its vassals to enforce them.

The Hosokawa and Mori clans had been enemies in a major civil war (culminating in the Battle of Sekigahara) only seven years earlier, and if something had sparked an old grudge, a major conflict could have erupted. The shogunate dared to mobilize these adversarial clans for the same national project to discipline them and make their joint efforts more visible. This document reveals an attempt to thoroughly prevent conflicts between the clans and suggests that the shogunate was trying to eliminate the seeds of civil war by reconciling relations between clans. In other words, the government strategically implemented a national project to establish peace within Japan.

This document was made public for the first time on the 4th of November 2020 at the Kumamoto University Library Online Exhibition of Rare and Valuable Materials.

Credit: 
Kumamoto University

Lovestruck by oxytocin! Novel roles of the hormone in controlling male sexual function

image: Oxytocin directly activates the spinal ejaculation generator (SEG)/gastrin-releasing peptide (GRP) neurons via oxytocin receptors (OXTRs) and influences male sexual function in the rat lumbar spinal cord. Release of oxytocin in the lumbar cord is not limited to conventional synapses and acts by diffusion--a localized volume transmission--to reach OXTRs on SEG/GRP neurons and facilitate male sexual activity.

Image: 
2020 Okayama University

Hormones are key players in the endocrine system and have a major influence on our emotional and sexual wellbeing. The hormone oxytocin is involved in a wide range of emotions and behaviors, from social bonding to maternal activities such as nursing and lactation. But the best-known role of oxytocin, the one that earned it its popular moniker of 'love hormone,' is its role in romantic and sexual emotions.

The functional mechanism of oxytocin in male sexual function and behavior is not clearly understood, but there is some evidence supporting the role of oxytocin-specific nerve cells or neurons in the brain that project to the lower spinal cord and control penile erection and ejaculation in male rats. Now, in a brand new study published in Current Biology, a group of researchers led by Professor Hirotaka Sakamoto from Okayama University, Japan, has explored this potential role of oxytocin and the underlying mechanisms in modulating male sexual function using rats as a model system.

Oxytocin is transferred from the brain to various parts of the body by the blood, and from neuron to neuron through structures called "synapses." However, the precise mechanisms by which sparsely dispersed oxytocin fibers activate the widely distributed oxytocin receptors--the structures responsible for responding to oxytocin in the central nervous system--remain unclear.

The researchers from Japan investigated a novel non-synaptic mode of oxytocin transport across the central nervous system. When asked to explain this process, Prof Sakamoto refers to an interesting analogy: "Overall, the endocrine system, which acts on widespread distant organs via the circulation, resembles a 'broadcasting satellite' communication, whereas synaptic transmission resembles the hard-wired 'ethernet.' Accordingly, the localized volume transmission of peptides resembles 'Wi-Fi' communication, since it is a hybrid of both endocrine (satellite) and synaptic (ethernet) systems, and may be the predominant mechanism of oxytocinergic modulation of socio-sexual behavior and cognition throughout the central nervous system."

Spinal regions such as the spinal ejaculation generator (SEG) are already known to control sexual functions in male rodents. To assess the role of oxytocin in copulatory and ejaculatory responses, the team injected oxytocin into the spine of male rats. The gastrin-releasing peptide or GRP neurons are an important component of the SEG, as they control the lower lumbar region connected to muscles at the base of the penis, thereby controlling erection and ejaculation.

Oxytocin increased both sexual activity and neuronal activity in the injected animals. More specifically, oxytocin was found to directly activate SEG/GRP neurons via oxytocin receptors, which detect oxytocin, thereby influencing male sexual function in the rat lumbar spinal cord. Applying an oxytocin receptor antagonist, which reduces the activity of oxytocin receptors, delayed and reduced the number of sexual and ejaculatory responses in the majority of the animals, confirming the importance of oxytocin.

The question of how oxytocin is transported remained, however. Electron microscopy images acquired from slices of the lumbar region ruled out the presence of synaptic vesicles or connections. Upon stimulating exocytosis ex vivo, the researchers were able to observe oxytocin transport mediated by passive diffusion through extracellular spaces at non-synaptic sites.

Highlighting the importance of the study, Prof Sakamoto remarks, "Now that we have uncovered a novel neural mechanism--the 'localized volume transmission' of oxytocin from axons--involved in controlling male sexual function in the spinal cord, we can hope that this may lead to the development of treatments for male sexual dysfunction."

This study thus presents a completely unprecedented role for oxytocin in male sexual function, in addition to its long-standing "female-centric" role. Learning more about this "love hormone" may indeed help us foster healthier and long-lasting loving relationships!

Credit: 
Okayama University

Diabetes increases neuritic damage around amyloid plaques in Alzheimer's disease

New research from the University of Eastern Finland explores the role of diabetes in the cellular and molecular changes underlying Alzheimer's disease (AD). In an AD mouse model, diabetes induced through a diet rich in fats and sugars reduced the accumulation of microglial cells around amyloid plaques and increased the formation of neuritic plaques with prominent tau pathology. Besides the mouse model, a similar observation was also made in hydrocephalus patients with type 2 diabetes, who had fewer microglia around amyloid plaques than patients without diabetes. The findings provide valuable new insight into the cellular mechanisms by which type 2 diabetes contributes to the risk and development of AD.

Alzheimer's disease is the most common form of dementia, with no cure to date. AD is characterised by the accumulation of beta-amyloid peptides and phosphorylated tau proteins in the brain, leading to the activation of the immune cells of the brain: microglia and astrocytes. AD also causes damage to axons and dendrites and, ultimately, leads to neuronal cell death. Recent genetic studies suggest that microglia play a key role in the development of AD. In addition to genetics, environmental and lifestyle factors, and diseases associated with them, such as type 2 diabetes, affect the risk of AD. Type 2 diabetes has long been known to increase the risk of AD and to influence the disease course, but the underlying cellular and molecular events are still elusive.

In the new study, transgenic AD model mice were put on a six-month regimen resembling the typical Western diet, i.e. one that is rich in fats and sugars, and this led to the development of diabetes in the mice. In behavioural analysis, diabetic mice showed impaired learning and memory compared to mice on a standard diet. Bulk RNA expression analysis of brain samples from the mice suggested a weakened response of microglial cells to amyloid-β, as well as attenuation of the Trem2 and PI3K-Akt signalling pathways. Immunohistochemical analyses of entorhinal and hippocampal brain sections supported these findings, as the diabetic mice had fewer microglia and more dystrophic neurites around amyloid plaques than mice on the standard diet.

"This study sheds new light on the cellular level how diabetes contributes to the development of AD, and specifically highlights the importance of brain immune cells in the disease process. Our findings suggest that diabetes can weaken the ability of microglia to react to harmful amyloid-β. It seems that diabetes can lead to the formation of neuritic plaques, which are characteristic pathological changes in the AD brain," Senior Researcher Teemu Natunen from the Institute of Biomedicine at the University of Eastern Finland says.

The Western diet was not associated with the overall accumulation of amyloid-β in the brains of the AD mice.

"A diet that is rich in fat and sugar, i.e. the typical Western diet, is known to increase the risk of type 2 diabetes, and this way, it possibly also contributes to the development of AD," Professor Heikki Tanila from the A. I. Virtanen Institute for Molecular Sciences at the University of Eastern Finland says.

The researchers further analysed cortical biopsies of idiopathic normal pressure hydrocephalus patients, collected and studied by Professor Ville Leinonen's research group at Kuopio University Hospital. Human cortical samples showed changes that were similar to those observed in mice: in normal pressure hydrocephalus patients with type 2 diabetes, the number of microglia around amyloid plaques was lower than in non-diabetic patients.

"The set of data from patients with normal pressure hydrocephalus constituted an important part of our study, because it allowed us to show that also humans with type 2 diabetes have an impaired microglia response. This type of collaboration between research groups at the University of Eastern Finland and Kuopio University Hospital, which makes it possible for us to verify findings from basic research in patient samples, is crucial for the high-level research carried out in the Neuroscience Research Community at UEF," Professor Mikko Hiltunen from the Institute of Biomedicine at the University of Eastern Finland says.

Credit: 
University of Eastern Finland