
More berries, apples and tea may have protective benefits against Alzheimer's

BOSTON (May 5, 2020)--Older adults who consumed small amounts of flavonoid-rich foods, such as berries, apples and tea, were two to four times more likely to develop Alzheimer's disease and related dementias over 20 years compared with people whose intake was higher, according to a new study led by scientists at the Jean Mayer USDA Human Nutrition Research Center on Aging (USDA HNRCA) at Tufts University.

The epidemiological study of 2,800 people aged 50 and older examined the long-term relationship between eating foods containing flavonoids and risk of Alzheimer's disease (AD) and Alzheimer's disease and related dementias (ADRD). While many studies have looked at associations between nutrition and dementias over short periods of time, the study published today in the American Journal of Clinical Nutrition looked at exposure over 20 years.

Flavonoids are natural substances found in plants, including fruits and vegetables such as pears, apples, berries, onions, and plant-based beverages like tea and wine. Flavonoids are associated with various health benefits, including reduced inflammation. Dark chocolate is another source of flavonoids.

The research team determined that low intake of three flavonoid types was linked to higher risk of dementia when compared to the highest intake. Specifically:

Low intake of flavonols (apples, pears and tea) was associated with twice the risk of developing ADRD.

Low intake of anthocyanins (blueberries, strawberries, and red wine) was associated with a four-fold risk of developing ADRD.

Low intake of flavonoid polymers (apples, pears, and tea) was associated with twice the risk of developing ADRD.

The results were similar for AD.

"Our study gives us a picture of how diet over time might be related to a person's cognitive decline, as we were able to look at flavonoid intake over many years prior to participants' dementia diagnoses," said Paul Jacques, senior author and nutritional epidemiologist at the USDA HNRCA. "With no effective drugs currently available for the treatment of Alzheimer's disease, preventing disease through a healthy diet is an important consideration."

The researchers analyzed six types of flavonoids and compared long-term intake levels with the number of AD and ADRD diagnoses later in life. They found that low intake (15th percentile or lower) of three flavonoid types was linked to higher risk of dementia when compared to the highest intake (greater than 60th percentile). Examples of the levels studied included:

Low intake (15th percentile or lower) was equal to no berries (anthocyanins) per month, roughly one-and-a-half apples per month (flavonols), and no tea (flavonoid polymers).

High intake (greater than 60th percentile) was equal to roughly 7.5 cups of blueberries or strawberries (anthocyanins) per month, 8 apples and pears per month (flavonols), and 19 cups of tea per month (flavonoid polymers).

"Tea, specifically green tea, and berries are good sources of flavonoids," said first author Esra Shishtar, who at the time of the study was a doctoral student at the Gerald J. and Dorothy R. Friedman School of Nutrition Science and Policy at Tufts University in the Nutritional Epidemiology Program at the USDA HNRCA. "When we look at the study results, we see that the people who may benefit the most from consuming more flavonoids are people at the lowest levels of intake, and it doesn't take much to improve levels. A cup of tea a day or some berries two or three times a week would be adequate," she said.

Jacques also said 50, the approximate age at which data was first analyzed for participants, is not too late to make positive dietary changes. "The risk of dementia really starts to increase over age 70, and the take home message is, when you are approaching 50 or just beyond, you should start thinking about a healthier diet if you haven't already," he said.

Methodology

To measure long-term flavonoid intake, the research team used dietary questionnaires, filled out at medical exams approximately every four years by participants in the Framingham Heart Study, a largely Caucasian group of people who have been studied over several generations for risk factors of heart disease.

To increase the likelihood that dietary information was accurate, the researchers excluded questionnaires from the years leading up to the dementia diagnosis, based on the assumption that, as cognitive status declined, dietary behavior may have changed, and food questionnaires were more likely to be inaccurate.

The participants were from the Offspring Cohort (children of the original participants), and the data came from exams 5 through 9. At the start of the study, the participants were free of AD and ADRD, with a valid food frequency questionnaire at baseline. Flavonoid intakes were updated at each exam to represent cumulative average intake across the five exam cycles.

Researchers categorized flavonoids into six types and created four intake levels based on percentiles: less than or equal to the 15th percentile, 15th-30th percentile, 30th-60th percentile, and greater than 60th percentile. They then compared flavonoid intake types and levels with new diagnoses of AD and ADRD.
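
As a rough illustration of that binning step (a hedged sketch, not the study's code; the column name and simulated intake values below are hypothetical), cumulative intake can be cut at the same percentile boundaries:

```python
# Hedged sketch of the percentile binning described above; the intake values
# and column name are simulated placeholders, not study data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"flavonol_mg_day": rng.gamma(shape=2.0, scale=6.0, size=2800)})

# Cut cumulative intake at the <=15th, 15th-30th, 30th-60th and >60th percentile boundaries.
edges = df["flavonol_mg_day"].quantile([0.0, 0.15, 0.30, 0.60, 1.0]).to_numpy()
df["intake_level"] = pd.cut(
    df["flavonol_mg_day"],
    bins=edges,
    labels=["<=15th", "15th-30th", "30th-60th", ">60th"],
    include_lowest=True,
)
print(df["intake_level"].value_counts(sort=False))
```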

There are some limitations to the study, including the use of self-reported food data from food frequency questionnaires, which are subject to errors in recall, and the fact that the findings may not generalize beyond middle-aged or older adults of European descent. Factors such as education level, smoking status, physical activity, body mass index and overall quality of the participants' diets may have influenced the results, although the researchers accounted for those factors in the statistical analysis. Because of its observational design, the study cannot establish a causal relationship between flavonoid intake and the development of AD and ADRD.

Credit: 
Tufts University, Health Sciences Campus

Firms perceived to fake social responsibility become targets for hackers, study shows

image: Corey Angst, professor of IT, analytics and operations at Notre Dame's Mendoza College of Business.

Image: 
University of Notre Dame

Data breaches have become daily occurrences. Research firm Cybersecurity Ventures reveals that in 2018 hackers stole half a billion personal records -- a 126 percent jump from 2017 -- and more than 3.8 million records are stolen in breaches every day; recent targets include the World Health Organization.

What corporate leaders may not realize is that strides they are making toward social responsibility may be placing a proverbial target on their backs -- if their efforts appear to be disingenuous, according to new research from the University of Notre Dame.

A firm's social performance, as measured by its engagement in socially responsible or irresponsible activities, affects its likelihood of being subject to computer attacks that result in data breaches, according to "Too Good to Be True: Firm Social Performance and the Risk of Data Breach," forthcoming in Information Systems Research from Corey Angst, professor of IT, analytics and operations at Notre Dame's Mendoza College of Business.

There is evidence that not all hackers are motivated by money and that at least some target what they dislike. Recent hacks against the WHO, due to its actions or alleged inactions related to the coronavirus pandemic, are a case in point, according to Angst.

"Recent hacking activity, including 25,000 email addresses and passwords allegedly from the National Institutes of Health, WHO, Gates Foundation and others being posted online, is supported by our findings," Angst said. "What is most surprising is that firms that are 'bad actors' regarding corporate social responsibility are generally no more likely to be breached than firms that are good. In fact, the opposite is true."

The study shows firms that are notably poor at corporate social responsibility, or CSR, are no more likely than other firms to experience a data breach, while a strong record of CSR in areas peripheral to core firm activities, including philanthropy and recycling programs, results in an elevated likelihood of breach.

"Delving into this latter finding, our results suggest firms that simultaneously have peripheral CSR strengths alongside major concerns in other areas are at increased risk of breach," Angst said. "This reality for firms with seemingly disingenuous CSR records suggests that 'greenwashing' efforts to mask poor social performance make firms attractive targets for security exploitation. Some perpetrators can 'sniff out' firms' attempts to give the appearance of social responsibility, and, consequently, these firms are more often victimized by malicious data breaches."

The team conducted its research by compiling a unique dataset that combined publicly available information on data breaches at 189 firms from 2005 to 2010 with external assessments of their CSR and other firm-specific factors.
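
The paper's exact statistical model is not described in this release; as a purely hypothetical sketch of how breach incidence might be related to external CSR assessments (all variable names and data below are invented), a logistic regression over the firm dataset could look like this:

```python
# Generic, hypothetical sketch (not the paper's model): logistic regression of
# breach incidence on CSR strengths/concerns plus a firm-level control.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
firms = pd.DataFrame({
    "peripheral_csr_strengths": rng.poisson(2, size=189),
    "csr_concerns": rng.poisson(1, size=189),
    "log_firm_size": rng.normal(8.0, 1.0, size=189),
    "breached": rng.integers(0, 2, size=189),   # 1 = reported breach, 0 = none
})

X = sm.add_constant(firms[["peripheral_csr_strengths", "csr_concerns", "log_firm_size"]])
result = sm.Logit(firms["breached"], X).fit(disp=0)
print(np.exp(result.params))  # odds ratios; values above 1 would indicate higher breach odds
```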

"Corporate leaders need to understand that hackers are seeing through weak attempts at CSR," Angst advised. "They are taking matters into their own hands and acting as corporate disciplinarians by breaching the technology infrastructure of firms that they deem to be promoting themselves as good corporate citizens when in fact there are blemishes under the surface. When firms portray themselves as 'holier-than-thou,' any small misstep could trigger an attack."

Credit: 
University of Notre Dame

Worms freeload on bacterial defence systems

Scientists have untangled a sensory circuit in worms that allows them to choose whether to spend energy on self-defence or rely on the help of nearby bacteria, a new study in eLife reveals.

The paper describes a novel sensory circuit that, if also conserved in humans, could be used to switch on defence mechanisms and improve health and longevity.

Bacteria, fungi, plants and animals all excrete hydrogen peroxide as a weapon. In defence, cells use enzymes called catalases to break down hydrogen peroxide into water and oxygen. But it is not known whether this mechanism is coordinated across different cells.

"We speculated that coordinating these hydrogen peroxide cell defences based on environmental cues would be beneficial because it would save the energetic cost of protection," explains lead author Jodie Schiffer, a graduate student at Northeastern University, Boston, US. "We used the worm Caenorhabditis elegans to study whether the brain plays a role in this coordination by collecting and integrating information from the environment."

Schiffer and her team found 10 different classes of sensory neurons in the worms that could positively or negatively control peroxide resistance. Among them was a pair of neurons that sense taste and temperature and caused the largest increase in peroxide resistance, which the team decided to study further.

To determine how the neurons transmit messages that tell the worm to change its peroxide defence mechanisms, the team set out to identify the hormones involved. They found that worms lacking a hormone called DAF-7 had double the peroxide resistance. Through a process of elimination, they established that the neurons release DAF-7, which in turn signals through a well-known communication pathway, via cells called interneurons, to coordinate with defence systems in the intestine. Together, these control the worm's peroxide resistance.

As worms can be exposed to peroxides through food, and those with faulty DAF-7 hormones have feeding defects, the team next explored whether feeding directly affects peroxide defenses. They placed worms that had never been exposed to peroxides on plates of Escherichia coli (E. coli) bacteria - their preferred snack - and then measured peroxide resistance. They found that worms grown on plates with the most E. coli were most resistant to peroxides. By contrast, worms grown without E. coli for only two days had a six-fold drop in peroxide resistance. Worms with a mutation that slows down their eating also had lower peroxide resistance. Taken together, these results suggest that the presence of E. coli was important for peroxide resistance.

To test this, they looked at whether the bacteria can protect worms from the lethal effects of peroxides. They exposed worms to high amounts of hydrogen peroxide that would normally kill them. In the presence of a mutant E. coli that cannot produce the hydrogen-peroxide-degrading catalase enzyme, the worms were killed, whereas in the presence of wild-type E. coli, they were protected.

"We have identified a sensory circuit in the worm's brain that helps them decide when it is appropriate to use their own defences and when it is best to freeload on the protection given by others in the environment," concludes senior author Javier Apfeld, Assistant Professor at Northeastern University. "Because sensory perception and catalases also determine health and longevity in other animals, it is possible that sensory modulation could be a promising approach for switching on defence systems that could improve health and increase lifespan."

Credit: 
eLife

Supercapacitor promises storage, high power and fast charging

image: An illustration, provided by the lab of Penn State researcher Huanyu "Larry" Cheng, of the manganese oxide @ cobalt manganese oxide capacitor. The bottom purple layer is N-doped graphene with upper purple layer manganese oxide @ cobalt manganese oxide separated by a filter paper separator. An induced electric field allows charging and discharging (blue lightning) of the capacitor, creating electrons (fish bones) and OH ions (fish). Shocking Tom (cat) represents shockingly fast electron and ion transport.

Image: 
Xiaonan Hu, Penn State

A new supercapacitor based on manganese oxide could combine the storage capacity of batteries with the high power and fast charging of other supercapacitors, according to researchers at Penn State and two universities in China.

"Manganese oxide is definitely a promising material," said Huanyu "Larry" Cheng, assistant professor of engineering science and mechanics and faculty member in the Materials Research Institute, Penn State. "By combining with cobalt manganese oxide, it forms a heterostructure in which we are able to tune the interfacial properties."

The group started with simulations to see how manganese oxide's properties change when coupled with other materials. When they coupled it to a semiconductor, they found it made a conductive interface with a low resistance to electron and ion transport. This is important because otherwise the material would be slow to charge.

"Exploring manganese oxide with cobalt manganese oxide as a positive electrode and a form of graphene oxide as a negative electrode yields an asymmetric supercapacitor with high energy density, remarkable power density and excellent cycling stability," according to Cheng Zhang, who was a visiting scholar in Cheng's group and is the lead author on a paper published recently in Electrochimica Acta.

The group has compared their supercapacitor to others and theirs has much higher energy density and power. They believe that by scaling up the lateral dimensions and thickness, their material has the potential to be used in electric vehicles. So far, they have not tried to scale it up. Instead, their next step will be to tune the interface where the semiconducting and conducting layers meet for even better performance. They want to add the supercapacitor to already developed flexible, wearable electronics and sensors as an energy supply for those devices or directly as self-powered sensors.

Credit: 
Penn State

Four years of calculations lead to new insights into muon anomaly

image: A typical diagrammatic representation of the hadronic light-by-light scattering contribution with Argonne's Mira supercomputer in the background.

Image: 
Photo courtesy of Luchang Jin, University of Connecticut

UPTON, NY and LEMONT, IL—Two decades ago, an experiment at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory pinpointed a mysterious mismatch between established particle physics theory and actual lab measurements. When researchers gauged the behavior of a subatomic particle called the muon, the results did not agree with theoretical calculations, posing a potential challenge to the Standard Model—our current understanding of how the universe works.

Ever since then, scientists around the world have been trying to verify this discrepancy and determine its significance. The answer could either uphold the Standard Model, which defines all of the known subatomic particles and how they interact, or introduce the possibility of an entirely undiscovered physics. A multi-institutional research team (including Brookhaven, Columbia University, RIKEN, and the universities of Connecticut, Nagoya and Regensburg) has used Argonne National Laboratory's Mira supercomputer to help narrow down the possible explanations for the discrepancy, delivering a newly precise theoretical calculation that refines one piece of this very complex puzzle. The work, funded in part by the DOE's Office of Science through its Office of High Energy Physics and Advanced Scientific Computing Research programs, has been published in the journal Physical Review Letters.

A muon is a heavier version of the electron and has the same electric charge. The measurement in question is of the muon’s magnetic moment, which defines how the particle wobbles when it interacts with an external magnetic field. The earlier Brookhaven experiment, known as Muon g-2, examined muons as they interacted with an electromagnet storage ring 50 feet in diameter. The experimental results diverged from the value predicted by theory by an extremely small amount measured in parts per million, but in the realm of the Standard Model, such a difference is big enough to be notable.
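
For reference, the quantity being compared is the muon's anomalous magnetic moment, conventionally defined from its g-factor; the "parts per million" figure refers to the relative size of the experiment-theory difference:

```latex
a_\mu = \frac{g_\mu - 2}{2}, \qquad
\frac{a_\mu^{\mathrm{exp}} - a_\mu^{\mathrm{SM}}}{a_\mu^{\mathrm{SM}}} \sim \mathcal{O}(\mathrm{ppm})
```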

“If you account for uncertainties in both the calculations and the measurements, we can’t tell if this is a real discrepancy or just a statistical fluctuation,” said Thomas Blum, a physicist at the University of Connecticut who co-authored the paper. “So both experimentalists and theorists are trying to improve the sharpness of their results.”

As Taku Izubuchi, a physicist at Brookhaven Lab who is a co-author on the paper, noted, “Physicists have been trying to understand the anomalous magnetic moment of the muon by comparing precise theoretical calculations and accurate experiments since the 1940s. This sequence of work has led to many discoveries in particle physics and continues to expand the limits of our knowledge and capabilities in both theory and experiment.”

If the discrepancy between experimental results and theoretical predictions is indeed real, that would mean some other factor—perhaps some yet-to-be discovered particle—is causing the muon to behave differently than expected, and the Standard Model would need to be revised.

The team's work centered on a notoriously difficult aspect of the anomaly involving the strong force, one of the four basic forces in nature that govern how particles interact, along with the weak, electromagnetic, and gravitational forces. The biggest uncertainties in the muon calculations come from particles that interact through the strong force, known as hadronic contributions. These hadronic contributions are defined by a theory called quantum chromodynamics (QCD).

"For a long time, many people thought this contribution, because it was so challenging, would explain the discrepancy. But we found previous estimates were not far off."

— Thomas Blum, University of Connecticut physicist

The researchers used a method called lattice QCD to analyze a type of hadronic contribution, light-by-light scattering. “To do the calculation, we simulate the quantum field in a small cubic box that contains the light-by-light scattering process we are interested in,” said Luchang Jin, a physicist at the University of Connecticut and paper co-author. “We can easily end up with millions of points in time and space in the simulation.”
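
To get a feel for the scale Jin describes (the lattice dimensions below are hypothetical, chosen only for illustration, not the ones used in the calculation), a four-dimensional space-time grid grows very quickly:

```python
# Hypothetical lattice dimensions, purely to illustrate the scale described above.
spatial_points_per_side = 64   # points along each of the three spatial directions
time_slices = 128              # points along the time direction

lattice_sites = spatial_points_per_side ** 3 * time_slices
print(f"{lattice_sites:,} space-time points")   # 33,554,432 points
```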

That’s where Mira came in. The team used the supercomputer, housed at the Argonne Leadership Computing Facility (ALCF), to solve the complex mathematical equations of QCD, which encode all possible strong interactions with the muon. The ALCF, a DOE Office of Science User Facility, recently retired Mira to make room for the more powerful Aurora supercomputer, an exascale system scheduled to arrive in 2021.

“Mira was ideally suited for this work,” said James Osborn, a computational scientist with the ALCF and Argonne’s Computational Science division. “With nearly 50,000 nodes connected by a very fast network, our massively parallel system enabled the team to run large simulations very efficiently.”

After four years of running calculations on Mira, the researchers produced the first-ever result for the hadronic light-by-light scattering contribution to the muon anomalous magnetic moment, controlling for all errors.

“For a long time, many people thought this contribution, because it was so challenging, would explain the discrepancy,” Blum said. “But we found previous estimates were not far off, and that the real value cannot explain the discrepancy.”

Meanwhile, a new version of the Muon g-2 experiment is underway at Fermi National Accelerator Laboratory, aiming to reduce uncertainty on the experimental side by a factor of four. Those results will add more insight to the theoretical work being done now.

“As far as we know, the discrepancy still stands,” Blum said. “We are waiting to see whether the results together point to new physics, or whether the current Standard Model is still the best theory we have to explain nature.”

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by  UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

One of ten national laboratories overseen and primarily funded by the Office of Science of the U.S. Department of Energy (DOE), Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies and national security. Brookhaven Lab also builds and operates major scientific facilities available to university, industry and government researchers. Brookhaven is operated and managed for DOE's Office of Science by Brookhaven Science Associates, a limited-liability company founded by the Research Foundation for the State University of New York on behalf of Stony Brook University, the largest academic user of Laboratory facilities, and Battelle, a nonprofit applied science and technology organization.

The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.


Journal

Physical Review Letters

DOI

10.1103/PhysRevLett.124.132002

Credit: 
DOE/Brookhaven National Laboratory

Cognition and gait speed often decline together, study shows

image: Mitzi Gonzales, Ph.D., of UT Health San Antonio is lead author of a study of cognition and gait speed among 370 participants in the San Antonio Longitudinal Study of Aging (SALSA). The study found that in a majority of participants, measures of cognition and gait speed paralleled each other over a 9.5-year study follow-up period.

Image: 
UT Health San Antonio

Do thinking and walking go hand in hand in determining the health course of senior adults? A study published by UT Health San Antonio researchers found that, indeed, the two functions often parallel each other in determining a person's health trajectory.

The researchers analyzed data from 370 participants in the San Antonio Longitudinal Study of Aging (SALSA) and found that they grouped into three distinct trajectories. These classifications were based on the participants' changes on a cognitive measure and a gait speed task over an average of 9½ years:

Stable cognition and gait class (65.4% of the participants).

Cognitive and physical vulnerability class (22.2%).

Physical vulnerability class (12.4%).

"In our community-based sample of Mexican American and European American older adults aged 65 to 74 years old at baseline, the majority of individuals began the study with higher scores in both domains, cognition and gait speed. During follow-up, this group demonstrated resilience to age-related declines and continued to be functionally independent," said study senior author Helen Hazuda, Ph.D., professor in UT Health San Antonio's Long School of Medicine and the principal investigator of SALSA.

"In contrast, one-fifth of individuals began the study with lower scores in cognition and gait speed. They experienced deterioration in each domain during the follow-up period," Dr. Hazuda said.

The third group of individuals, termed the physical vulnerability class, demonstrated stable cognition throughout the study, but their gait speed slowed over time.

2 effects, 1 root?

Cognition was assessed using English or Spanish versions of the Folstein Mini-Mental State Examination, a 30-item tool that assesses orientation to time and place, attention, recall, language and other aspects. Gait speed was measured with a timed 10-foot walk.
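
As a small aside on the walking measure (a sketch of the arithmetic, not the study's scoring code), gait speed from a timed 10-foot walk is simply the course length divided by the measured time, converted to metres per second:

```python
# Converts a timed 10-foot walk into gait speed in metres per second.
FEET_TO_METRES = 0.3048

def gait_speed(time_seconds: float, course_feet: float = 10.0) -> float:
    return course_feet * FEET_TO_METRES / time_seconds

print(round(gait_speed(3.0), 2))  # a 3.0-second walk over 10 ft is about 1.02 m/s
```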

"For most of the population we studied, changes in cognition and gait speed were parallel, which suggests shared mechanisms," said Mitzi M. Gonzales, Ph.D., lead author of the study and a neuropsychologist with the Glenn Biggs Institute for Alzheimer's and Neurodegenerative Diseases, which is part of UT Health San Antonio.

Cognition and gait speed may be altered by blood vessel disease, brain tissue insults, hormone regulation, and abnormal deposits of amyloid beta and tau proteins in the brain, Dr. Gonzales said. Amyloid beta and tau deposits are well-known indicators of Alzheimer's disease but may impact gait, too.

"Abnormal protein deposition promotes neurodegeneration and synaptic loss, which may induce dysfunction in brain regions governing cognition and gait," said study coauthor Sudha Seshadri, M.D., professor of neurology in the Long School of Medicine and director of the Biggs Institute. "Another possibility is damage to white matter in regions integral to both cognition and gait coordination."

Groundbreaking San Antonio research

SALSA investigators led by Dr. Hazuda launched the study in 1992 and completed the baseline examination in 1996. Follow-up examinations were conducted at 18-month intervals between 2000 and 2005.

Among the 370 participants in this new analysis, 182 were Mexican American and 188 were European American. The Mexican American participants were almost four times more likely than European Americans to be in the cognitive and physical vulnerability class, even after statistical adjustment for educational attainment, income and chronic medical conditions, Dr. Gonzales said.

Prevalence of a key risk factor in this group, diabetes, was significantly higher in Mexican Americans (23%) than in European Americans (7%). Diabetes was associated with a 4½ times higher likelihood of being part of the cognitive and physical vulnerability class.

Poor start, poor course

Individuals who entered the study with poorer cognition and slower gait speed went on to decline in both domains at an accelerated pace through the years of follow-up, Dr. Hazuda said.

"In this at-risk group, we observed steeper rates of decline over and above the low starting point," Dr. Hazuda said. "This suggests that preventive efforts should ideally target young and middle-aged adults in which there is still time to intervene to alter the trajectories."

Overall, individuals in the cognitive and physical vulnerability class and the physical vulnerability class had a five- to sevenfold increased risk of mortality in comparison to the stable cognition and gait class.

Credit: 
University of Texas Health Science Center at San Antonio

Despite millennial stereotypes, burnout just as bad for Gen X doctors in training

CHICAGO --- Despite the seemingly pervasive opinion that millennial physicians are more prone to burnout and a lack of empathy compared to older generations, a new study by researchers at Northwestern Medicine and Cleveland Clinic found that no such generational gap exists.

According to the study of 588 millennial and Generation X residents and fellows, millennial physicians in training did not show increased vulnerability to burnout or different empathy skills compared to a demographic-matched sample of Generation-X physicians.

It is the first study to evaluate the impact of generation affiliation (millennial vs. Generation X) on physician qualities, specifically empathy and burnout - a state of emotional, physical and mental exhaustion caused by excessive and prolonged stress. Both empathy and burnout have been demonstrated to impact the quality of patient care.

The study was published today, May 5, in the journal Academic Psychiatry.

This study's findings cannot be extrapolated beyond the context of physicians in training. However, the statistical approach used in this study, controlling for other factors like level of experience in the field, could be used in studies of other professional fields to provide further clarity on the broader impact of generation affiliation on professional qualities.

"As millennial physicians are increasingly entering the workforce, people seem to be wondering what millennial doctors will be like, and I've heard older physicians opine that physician burnout is a bigger problem now due to generation vulnerability," said lead author Dr. Brandon Hamm, instructor of psychiatry and behavioral sciences at Northwestern University Feinberg School of Medicine. Hamm conducted the research while he was at Cleveland Clinic.

"Our study provides a little more transparency that it's medical-system exposure - not generational traits - that is more likely to contribute to the burnout seen in today's doctors," Hamm said.

Consistent with other studies, this paper found empathy decreases over the course of physician training, Hamm said. Additionally, Hispanic/Latino physicians in training demonstrated higher empathy scores and lower depersonalization burnout experience than Caucasian physicians in training. Depersonalization includes psychological withdrawal from relationships and the development of a negative, cynical or callous attitude.

What can be done to slow physician burnout?

"The first year of residency can be really rigorous and have a negative psychological impact on physicians in training, which can lead to dysfunctional coping strategies like substance abuse," Hamm said. "We need to be researching interventions that not only slow this empathy decline but bolster physicians' communities so they feel supported and less isolated."

Credit: 
Northwestern University

Grandfamilies: New study uncovers common themes and challenges in kinship care

image: The Theory of Compounding Complexity explains that grandfamilies face a combination of three types of complexity: relationship complexity, situational complexity, and emotional complexity, which all exist amidst conflict and change.

Image: 
George Mason University

Today, more than 2.5 million U.S. grandparents are raising their grandchildren due to the opioid crisis and other social issues. These grandfamilies--where grandparents are raising grandchildren in the absence of the biological parents--are becoming increasingly common. While kinship care is often considered a better alternative to children being placed in non-relative foster care, grandfamilies often experience unique challenges with significant economic and social impacts.

More recently, with the COVID-19 pandemic, media outlets have written about the high COVID-19 risk these grandparents face, their dilemma between staying healthy and caring for their grandchildren, and their worry about what may happen to the children should they fall sick. Some resources are available for grandfamilies, but the exact impacts of the CARES Act on grandfamilies are still being clarified.

Dr. Catherine Tompkins at George Mason University's College of Health and Human Services is an expert in gerontology and grandparent kinship care. She led a recent study on the challenges grandfamilies face, interviewing 15 low-income grandfamilies and developing the Theory of Compounding Complexity. The study was published online in The Gerontologist in February.

"In Fairfax County alone, there are more than 5,000 grandfamilies, and we are just beginning to understand some of their complexities and challenges. Compounding Complexity explains what these families experience and can assist social workers and other practitioners in working with them," explains Tompkins.

The Theory of Compounding Complexity explains that grandfamilies face a combination of three types of complexity: relationship complexity, situational complexity, and emotional complexity. An example of relationship complexity is the change from a past family relationship to the current one, in which a grandparent now makes day-to-day parenting decisions. An example of situational complexity is raising children at a time when grandparents expected to be retired, sometimes being forced to return to the workforce. Examples of emotional complexity include grandparents preparing their grandchildren for a visit with a biological parent who then does not show up, or a grandparent being forced to choose between an existing relationship and their grandchildren:

"I was engaged to someone for eight years. My two grandsons are difficult and have special needs. My fiancé made me choose between him and my grandsons, so I chose my grandsons. He left and my heart was broken."

These three categories of complexity all occur amidst conflict and change. Conflict often arises from power dynamics between custodial grandparents and biological parents. Change also has a significant impact on these families and contributes to the complexity of their experiences.

In their interviews, Tompkins and colleagues uncovered complex challenges in these families. One case demonstrates all aspects of the Theory of Compounding Complexity:

"A grandfather, caring for his grandson, was holding onto emotions of guilt, regret and remorse for not being able to prevent the illegal behavior of his daughter and feelings of resentment and disapproval toward his daughter for making bad choices and not putting her child first. The daughter expected her father to lessen the complexity of the situation by taking care of her child but still allowing her to parent from jail and pick up where she left off when she returned from prison (conflict). The grandson was fearful of living within the neighborhood that his mother lived in once she got out of prison (change). Thus, each family member had emotional responses to the situation and to the relationships involved which compounded the complexity of the situation."

"Compounding Complexity offers a new lens that service providers can use when they interact with grandfamilies," explains Tompkins. "While this is most informative about our specific group of participants, it lays the groundwork for additional study on grandfamilies."

This study was funded by the John A. Hartford Foundation and supported by Fairfax County Department of Family Services.

Tompkins is currently working with the Fairfax County Kinship Family Institute to run an online support group for local kinship families in the Fairfax, Virginia area during the COVID-19 pandemic. She suggests taking into account the unique needs of grandfamilies when we consider ways to help our community members and make additional resources available during the pandemic.

For future research, Tompkins and colleagues recommend studies that include the perspectives of the biological parents and externally validated studies using the framework of Compounding Complexity to inform best practices for working with these families.

Credit: 
George Mason University

Germline genomic profiles of children, young adults with solid tumors to inform management and treatment

image: A new Cleveland Clinic study led by Charis Eng, MD, PhD, demonstrates the importance of genetics evaluation and genetic testing for children, adolescents and young adults with solid tumor cancers.

Image: 
Cleveland Clinic

CLEVELAND - A new Cleveland Clinic study demonstrates the importance of genetics evaluation and genetic testing for children, adolescents and young adults with solid tumor cancers. The study was published today in Nature Communications.

Solid tumors account for half of cancer cases in children, adolescent and young adult (C-AYA) patients. The majority of these cases are assumed to result from germline variants (heritable changes affecting all cells in the body) rather than somatic alterations. However, little is known regarding the spectrum, frequency and implications of these germline variants.

In this study, led by Charis Eng, M.D., Ph.D., Cleveland Clinic's Genomic Medicine Institute, the researchers conducted the largest-to-date evaluation of germline mutations in C-AYA patients with solid tumors utilizing a combined dataset from Cleveland Clinic and St. Jude Children's Research Hospital. Of the 1,507 patients analyzed, 12% carried germline pathogenic and/or likely pathogenic variants in known cancer-predisposing (KCPG) genes while an additional 61% had germline pathogenic variants in non-KCPG genes.

"Our findings emphasize the necessity for all C-AYA patients with solid tumors to be sent for genetics evaluation and gene testing," said Dr. Eng. "Adult guidelines, particularly family history, are typically used to recognize C-AYA patients with possible heritable cancer, but studies have found a family history of cancer in only about 40% of patients with pathogenic and/or likely pathogenic variants."

The researchers also conducted a drug-target network analysis to determine if the pathogenic and/or likely pathogenic germline variants detected in the dataset were located within genes that could potentially be targeted by drug therapies. Their analysis found that 511 (34%) patients had at least one pathogenic and/or likely pathogenic variant on a gene that is potentially druggable. Notably, they discovered that approximately one-third of these patients had variants that can be targeted by existing FDA-approved drugs.

"Currently, the majority of available targeted therapies are geared to adult patients, leaving few safe and effective treatment options for C-AYA patients," noted Dr. Eng. "However, we found that a significant number of the germline altered genes in C-AYA solid tumor cancers are targetable by FDA-approved drugs, which presents an opportunity to harness drug repurposing to identify therapeutic options for C-AYA patients."

Dr. Eng is the inaugural chair of the Genomic Medicine Institute and director of the Center for Personalized Genetic Healthcare. She holds the Sondra J. and Stephen R. Hardis Endowed Chair in Cancer Genomic Medicine.

This work was supported by a VeloSano Pilot Award, belonging to a grant program that provides seed funding for cancer research activities being performed anywhere at the Cleveland Clinic.

Credit: 
Cleveland Clinic

Epidemiologists develop new tool for measuring the pace of aging across the life course

May 5, 2020 -- A study just released by Columbia University Mailman School of Public Health reports a blood-DNA-methylation measure that is sensitive to variation in the pace of biological aging among individuals born the same year. The tool, DunedinPoAm, offers a unique measurement for intervention trials and natural experiment studies investigating how the rate of aging may be changed by behavioral or drug therapy, or by changes to the environment. The study findings are published online in the journal eLife.

"The goal of our study was to distill a measurement of the rate of biological aging based on 12-years of follow-up on 18 different clinical tests into a blood test that can be administered at a single time point." said lead author Daniel Belsky, PhD, assistant professor of epidemiology at Columbia Mailman School and a researcher at the Columbia Aging Center.

Midlife adults measured to be aging faster according to the new measurement showed faster declines in physical and cognitive functioning and looked older in facial photographs. Older adults measured to be aging faster by the tool were at increased risk for chronic disease and mortality. In other analyses, the researchers showed that DunedinPoAm captured new information not measured by proposed measures of biological aging known as epigenetic clocks, that 18-year-olds with histories of childhood poverty and victimization showed faster aging as measured by DunedinPoAm, and that DunedinPoAm predictions were disrupted by a caloric restriction intervention in a randomized trial.

In a 2015 paper published in PNAS, Belsky and colleagues at Duke University, who also collaborated on this study, tracked a battery of clinical tests measured in 954 members of the Dunedin Study birth cohort when the participants were 26, 32, and 38 years old to measure their rate of aging. A striking finding of that earlier study was that the rate of biological aging was already highly variable in young adults who had not yet developed chronic disease. But the measure the researchers developed in that earlier study, called "Pace of Aging", required long follow-up time and in-depth clinical assessment. "It wasn't very useful for studies that need to test the impact of a new drug or lifestyle intervention in a matter of a few years," said Belsky.

In their new study, the researchers aimed to develop a blood test that could be given at the start and end of a randomized controlled trial to determine if the treatment had slowed participants' pace of aging. Slowing the pace of aging is an emerging frontier in medical research as a novel approach to preventing multiple chronic diseases.

The authors' analysis focused on DNA samples derived from white blood cells. They analyzed chemical tags on the DNA called methylation marks. DNA methylation is an epigenetic process that can change the way genes are expressed. DNA methylation marks change as we age, with some marks being added and others lost. "We focused our analysis on DNA methylation in white blood cells because these molecular markers are relatively easy to measure and have shown great promise in previous research on aging," explained Belsky.

The study included analysis of data from the NZ-based Dunedin Study, the UK-based Understanding Society and E-Risk Studies, the U.S.-based Normative Aging Study, and the CALERIE randomized trial.

Previous studies have attempted to measure aging by analyzing DNA methylation differences between people of different chronological ages. "One limitation of this approach," noted Belsky, "is that individuals born in different years have grown up under different historical conditions, with possibly more exposure to childhood diseases, tobacco smoke and airborne lead, less exposure to antibiotics and other medications, and lower quality nutrition, all of which affect DNA methylation. An alternative approach is to study individuals who were all born the same year, and find methylation patterns that differentiate those who have been aging biologically faster or slower than their same-age peers."

The authors used a machine-learning technique called "elastic-net regression" to sift through data on more than 400,000 different DNA methylation marks to find the ones that related to the physiological changes captured in their Pace of Aging measure. In the end, the analysis identified a set of 46 methylation marks that, together, measured Pace of Aging. The 46 marks are combined in an algorithm the researchers named "DunedinPoAm," for Dunedin (P)ace (o)f (A)ging in (m)ethylation. The average person has a DunedinPoAm value of 1, indicating one year of biological aging per chronological year. Among Dunedin Study participants, values ranged from just above 0.6 (an aging rate nearly 40 percent slower than the norm) to nearly 1.4 (an aging rate 40 percent faster than the norm).
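
A minimal sketch of that mark-selection idea is shown below. It uses simulated data and scikit-learn's ElasticNetCV as a stand-in; it is illustrative only and is not the authors' pipeline or their published 46-CpG algorithm.

```python
# Illustrative elastic-net mark selection on simulated data; not the authors'
# pipeline or the published DunedinPoAm algorithm.
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(42)
n_people, n_cpgs = 800, 5000
methylation = rng.normal(size=(n_people, n_cpgs))            # stand-in CpG beta values
pace_of_aging = (1.0                                         # stand-in Pace of Aging measure
                 + methylation[:, :40] @ rng.normal(0.02, 0.01, 40)
                 + rng.normal(0.0, 0.05, n_people))

model = ElasticNetCV(l1_ratio=0.5, cv=5, random_state=0).fit(methylation, pace_of_aging)
selected = np.flatnonzero(model.coef_)                       # CpGs given nonzero weight
score = methylation[:, selected] @ model.coef_[selected] + model.intercept_
print(f"{selected.size} marks selected; mean score = {score.mean():.2f}")
```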

Credit: 
Columbia University's Mailman School of Public Health

Study reveals how spaceflight affects risk of blood clots in female astronauts

A study of female astronauts has assessed the risk of blood clots associated with spaceflight.

The study, published in Aerospace Medicine and Human Performance and carried out in collaboration with King's College London, the Centre for Space Medicine at Baylor College of Medicine, NASA Johnson Space Centre and the International Space University, examines the potential risk factors for developing a blood clot (venous thromboembolism) in space.

The analysis, which looked at 38 flights by female astronauts between 2000 and 2014, found that spaceflight and combined oral contraceptive pill (COCP) use do not appear to increase the risk of venous thromboembolism (VTE).

Dr Varsha Jain, lead author of the study from King's College London and a Wellbeing of Women Clinical Research Fellow at the Medical Research Council Centre for Reproductive Health at the University of Edinburgh, said: "The first episode of an astronaut developing a blood clot in space was reported earlier this year. It is unknown how spaceflight impacts the risk of an astronaut developing a blood clot. This study aimed to look specifically at the potential blood clot developing risks for female astronauts during spaceflight. We wanted to understand if their use of the hormonal contraceptive pill for menstrual cycle control, increased that risk."

Developing a VTE in space is life-threatening and potentially a mission-critical risk. COCP use could, in principle, increase that risk further; however, because female astronauts are fitter and healthier than the general population, their risk remains low.

The study, which is the first of its kind, proposes more blood tests be carried out during astronaut selection and during medical reviews. There are points during pre-mission training and during spaceflight, such as particular training activities, which may briefly increase the risk of developing a blood clot, and the authors recommend a review of these.

Finally, the study advises a more holistic approach to prescribing contraceptive agents: women in all professions, including astronauts, may wish to control their menstrual cycles, and occupation-related risks should be considered during a risk review.

Dr Jain said: "There may be possible time points in an astronaut's pre-mission training or during the space mission itself where blood clot risk may potentially be transiently increased. Due to the potentially life-threatening nature of blood clots, we would advise further targeted research in this area to further understand how an astronaut's risk of developing a blood clot is altered by spaceflight."

Dr Virginia Wotring, associate professor at the International Space University and senior author of the study, said: "We see a need for continuing studies with female astronauts. Much of the previous biomedical research in space was conducted on mostly male astronauts, because most of the astronauts were male. That has changed, and now we need to understand how the spaceflight environment impacts female physiology."

Credit: 
King's College London

Long-term risks of hypertensive disorders during pregnancy impact more women

Twice as many women are identified as having experienced a hypertensive disorder during pregnancy, and thus as being at increased risk of developing heart or kidney disease earlier in life, when incidence is measured per woman rather than per pregnancy, according to a study published today in the Journal of the American College of Cardiology. This is one of the first studies to examine the incidence of hypertensive disorders per woman vs. per pregnancy, an approach that accounts for women who are pregnant more than once.

Hypertensive disorders of pregnancy (HDP) include four categories: preeclampsia, gestational hypertension, chronic hypertension and superimposed preeclampsia (women with chronic hypertension who develop preeclampsia). Women who have preeclampsia during pregnancy are at risk for death from heart disease as early as the first decade after giving birth.

"Despite the rates of HDP increasing over the past three decades, the incidence rates of HDP per-pregnancy and per-woman had not yet been studied," said Vesna D. Garovic, MD, PhD, professor of medicine in the department of internal medicine and obstetrics and gynecology at Mayo Clinic and lead author of the study. "By only looking at HDP rates per-pregnancy, we have been vastly underestimating the number of women who are affected by this condition and may be at risk for future heart or kidney disease. Looking at the per-woman rate allowed us to assess women with more than one pregnancy, who may have had HDP, including preeclampsia, during one of her pregnancies, but not the other."

The researchers used the Rochester Epidemiology Project, a medical record system of all providers in Olmsted County, Minnesota, to compare the risk of heart and kidney disease in pregnant women with and without a history of HDP who delivered (liveborn or stillborn) between 1976 and 1982. The researchers identified 9,862 pregnancies among 7,544 women living in Olmsted County during the assessment period. Each medical chart was screened to determine which women had possible HDP, where a positive screen was defined as two elevated blood pressures taken at any prenatal visit, during delivery or postnatal before hospital discharge.

During the six-year assessment period, 659 women had a total of 719 HDP pregnancies, an incidence rate per-pregnancy of 7.3% for HDP and 3.3% for preeclampsia. To assess HDP incidence rate per-woman, the researchers identified 1,839 women who had sufficient information for all of their pregnancies. The per-woman HDP incidence was twice that of the per-pregnancy rate, at 15.3% (n=281) and 7.5% (n=138) for HDP and preeclampsia, respectively. Women younger than 20 years of age and women older than 35 years of age had the highest incidence rates of preeclampsia and gestational hypertension.
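
The two headline rates follow directly from the counts reported above; a quick check of the arithmetic:

```python
# Quick check of the per-pregnancy vs. per-woman rates reported above.
hdp_pregnancies, total_pregnancies = 719, 9862
hdp_women, women_with_complete_data = 281, 1839

print(f"per-pregnancy HDP incidence: {hdp_pregnancies / total_pregnancies:.1%}")   # about 7.3%
print(f"per-woman HDP incidence:     {hdp_women / women_with_complete_data:.1%}")  # about 15.3%
```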

Across a follow-up period of 36.2 years, 571 women with a history of HDP developed a chronic condition, including (but not limited to) cardiac arrhythmias, coronary artery disease, heart failure, stroke, chronic kidney disease and hypertension, suggesting that 1 in 6 women may be at increased risk for heart or kidney disease. Women with HDP developed chronic conditions at an accelerated rate and at an earlier age compared with their counterparts without HDP.

This study has several limitations, including the lack of diversity in the study population (which was predominantly white) and its reliance on a cohort from four decades ago; the authors note that a more ethnically diverse, contemporary cohort should be studied. The researchers stress the need for lifestyle interventions and preventive care in this high-risk population.

"With this study, Garovic and colleagues advance our understanding of the burden of HDP-associated multimorbidity," said Michael C. Honigberg, MD, MPP, research fellow in the department of medicine at Massachusetts General Hospital, in an accompanying editorial comment. "The authors have shown that the total HDP burden expressed as incidence per-woman is considerably higher than per-pregnancy. The discovery of effective-targeted risk-reducing interventions for women with HDP would make pregnancy an even more actionable and powerful screening test."

Credit: 
American College of Cardiology

Non-fatal injuries cost US $1,590 and 11 days off work per injured employee every year

Non-fatal injuries in the US add up to an estimated US$1,590 and an average of 11 days off work per injured employee every year, indicates an analysis of medical insurance claims and productivity data, published online in the journal Injury Prevention.

These figures exclude people without workplace health insurance, those out of work, and caregivers.

There are more than 30 million visits to emergency care for non-fatal injuries in the US every year, with total medical costs exceeding US$133 billion.

Previous estimates of lost productivity attributable to injury have been based on absenteeism associated with injuries sustained only in the workplace and haven't assessed the impact of different types of injury.

To try and rectify this, and to calculate the overall value of lost workplace productivity, the researchers mined millions of records from workplace health insurance claims (MarketScan) and Health and Productivity Management databases for sick leave taken between 2014 and 2015.

They looked specifically at non-fatal injuries treated in emergency departments for 18-64 year olds with health insurance cover, by injury type and body region affected, as well as the amount of sick leave taken in the year following the injury.

These data were then compared with the number of days of sick leave taken by employees who had not sustained injuries.

The injuries analysed included burns, poisonings, firearm wounds, falls, bites and stings, road traffic collisions, and those caused by machinery and overexertion.

The researchers estimated that the total annual value of lost workplace productivity attributable to all types of non-fatal injury initially treated in emergency care amounted to an average of 11 days and US$1,590 for each injured employee.

Values ranged from 1.5 days and US$210 for bites and stings to 44 days and US$6,196 for motorbike injuries. Days taken off work ranged from 4 for other head, face and neck injuries to almost 20 for traumatic brain injuries.
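As a back-of-the-envelope illustration of the comparison described above, lost productivity per injured employee can be approximated as the excess sick leave taken relative to non-injured employees, multiplied by a daily compensation rate. The Python sketch below is a simplified stand-in for the study's actual methodology; the daily rate is an assumption chosen only to show how the reported averages fit together, not a figure from the paper.

# Illustrative only - the daily compensation figure is assumed, not taken from the study
excess_absence_days = 11          # average extra sick-leave days vs. non-injured employees
daily_compensation_usd = 144.50   # assumed average daily wage plus benefits

lost_productivity_usd = excess_absence_days * daily_compensation_usd
print(f"Estimated lost productivity: ${lost_productivity_usd:,.2f} per injured employee")
# prints roughly $1,589.50, in line with the study's reported average of US$1,590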

The researchers admit that their calculations exclude long-term disabilities and long-term physical and mental illness caused by violent assault. Nor do the figures include injuries among those without workplace health insurance, the jobless, or caregivers.

But they conclude: "Non-fatal injuries are preventable and incur substantial lost work productivity at a high cost to individuals, employers and society."

Credit: 
BMJ Group

New ancient plant captures snapshot of evolution

image: In this image of one of the new ancient species' reproductive structures, elliptical impressions of sporangia can be seen in one row, while on the right, another row displays preserved carbonized spore masses.

Image: 
Andrew Leslie

In a brilliant dance, a cornucopia of flowers, pinecones and acorns connected by wind, rain, insects and animals ensures the reproductive future of seed plants. But before plants achieved these elaborate specializations for sex, they went through millions of years of evolution. Now, researchers have captured a glimpse of that evolutionary process with the discovery of a new ancient plant species.

The fossilized specimen likely belongs to the herbaceous barinophytes, an unusual extinct group of plants that may be related to clubmosses, and is one of the most comprehensive examples of a seemingly intermediate stage of plant reproductive biology. The new species, which is about 400 million years old and from the Early Devonian period, produced a spectrum of spore sizes - a precursor to the specialized strategies of land plants that span the world's habitats. The research was published in Current Biology May 4.

"Usually when we see heterosporous plants appear in the fossil record, they just sort of pop into existence," said the study's senior author, Andrew Leslie, an assistant professor of geological sciences at Stanford's School of Earth, Energy & Environmental Sciences (Stanford Earth). "We think this may be kind of a snapshot of this very rarely witnessed transition period in evolutionary history where you see high variation amongst spores in the reproductive structure."

A major shift

One of the most important time periods for the evolution of land plants, the Devonian witnessed diversification from small mosses to towering complex forests. The development of different spore sizes, or heterospory, represents a major modification to control reproduction - a feature that later evolved into small and large versions of these reproductive units.

"Think of all the different types of sexual systems that are in flowers - all of that is predicated on having separate small spores, or pollen, and big spores, which are inside the seeds," Leslie said. "With two discrete size classes, it's a more efficient way of packaging resources because the big spores can't move as easily as the little ones, but can better nourish offspring."

The earliest plants, from between 475 million to 400 million years ago, lacked reproductive specialization in the sense that they made the same types of spores, which would then grow into little plantlets that actually transferred reproductive cells. By partitioning reproductive resources, plants assumed more control over reproduction, according to the researchers.

The new species, together with the previously described plant group Chaleuria of the same age, represents the first evidence of more advanced reproductive biology in land plants. The next example doesn't appear in the fossil record until about 20 million years later.

"These kinds of fossils help us locate when and how exactly plants achieved that kind of partitioning of their reproductive resources," Leslie said. "The very end of that evolutionary history of specialization is something like a flower."

A fortuitous find

The researchers began analyses of the fossils after they had been stored in the collections at the Smithsonian National Museum of Natural History for decades. From about 30 small chips of rock originally excavated from the Campbellton Formation of New Brunswick in Canada by the late paleobotanist and study co-author Francis Hueber, they identified more than 80 reproductive structures, or sporangia. The spores themselves range from about 70 to 200 microns in diameter - roughly the width of one to two strands of human hair. While some of the structures contained exclusively large or small spores, others held only intermediate-sized spores, and still others held the entire range of spore sizes - possibly with some producing sperm and others eggs.
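One way to picture how sporangia might be sorted by their spore contents is to group each one by the spread of diameters it holds. The Python sketch below is hypothetical: the size cut-offs are arbitrary choices within the 70-200 micron range mentioned above, not the authors' classification criteria, and the measurements are invented for illustration.

# Hypothetical example: group a sporangium by the spore diameters (in microns) it contains
SMALL_MAX = 100   # assumed cut-off below which a spore counts as "small"
LARGE_MIN = 150   # assumed cut-off above which a spore counts as "large"

def classify_sporangium(diameters):
    has_small = any(d <= SMALL_MAX for d in diameters)
    has_large = any(d >= LARGE_MIN for d in diameters)
    if has_small and has_large:
        return "mixed sizes"
    if has_small:
        return "small spores only"
    if has_large:
        return "large spores only"
    return "intermediate sizes only"

print(classify_sporangium([72, 80, 95]))      # small spores only
print(classify_sporangium([110, 125, 140]))   # intermediate sizes only
print(classify_sporangium([75, 130, 190]))    # mixed sizes

Under this kind of grouping, a collection dominated by "mixed" and "intermediate" sporangia would look like the transitional pattern the researchers describe, rather than the two discrete size classes seen in fully heterosporous plants.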

"It's rare to get this many sporangia with well-preserved spores that you can measure," Leslie said. "We just kind of got lucky in how they were preserved."

Fossil and modern heterosporous plants primarily live in wetland environments, such as floodplains and swamps, where fertilization of large spores is most effective. The ancient species, which will be formally described in a follow-up paper, has a medley of spores that is not like anything living today, Leslie said.

"The overarching story in land plant reproduction is one of increased division of labor and specialization and complexity, but that has to begin somewhere - and it began with simply producing small spores and big spores," Leslie said. "With these kinds of fossils, we can identify some ways the plants were able to do that."

Credit: 
Stanford's School of Earth, Energy & Environmental Sciences