Culture

Trust among corvids

image: A pair of Siberian jays foraging in the study population in Swedish Lapland.

Image: 
Michael Griesser

Siberian jays are group-living birds in the corvid family that employ a wide repertoire of calls to warn each other of predators. Occasionally, however, birds use one of these calls to trick neighbouring conspecifics and gain access to their food. Researchers from the universities of Konstanz (Germany), Wageningen (Netherlands), and Zurich (Switzerland) have now examined how Siberian jays avoid being deceived by their neighbours. The study, published in the journal Science Advances, shows that these birds place great trust in warning calls from members of their own group but largely ignore such calls from conspecifics of neighbouring territories. The birds thus use social information to distinguish trustworthy from presumably false warning calls. Similar mechanisms could have played a role in the diversification of human languages and especially in the formation of dialects.

Deception and lies

Deception and lying are striking aspects of human communication: false information is intentionally conveyed to others, allowing the deceiver to gain an advantage over the recipient. Language as a whole, however, is highly pro-social and cooperative and is mainly used to share reliable information. It can therefore only function properly and be maintained if deception is kept to a minimum or if other mechanisms are in place to recognize and avoid deception.

People do judge the reliability of communication partners based on personal experience. "If someone repeatedly lies to you, you will most likely stop trusting this person very quickly," says Dr Michael Griesser, a biologist at the University of Konstanz. Griesser authored the study together with Dr Filipe Cunha, whose doctoral thesis he supervised. But do we observe deception in animals as well, and, if so, which mechanisms do animals use to avoid being deceived?

Warning calls of the Siberian jay

Indeed, a number of species are able to deceive their conspecifics, including some primates and birds such as the Siberian jay (Perisoreus infaustus). Siberian jays live in territorial groups and have an elaborate communication system: a wide range of calls allows them to warn each other of the presence of different predators and of the behaviour of their fiercest enemy, the hawk.

Occasionally, however, neighbours intruding into a group's territory use the same calls that would otherwise indicate the presence of a perched hawk for a different purpose. Their aim is to deceive the members of the group about the presence of the predator, thus scaring them away to get access to their food. "It is a commonly observed phenomenon in the animal kingdom that warning calls are used to deceive others. Clearly, the recipients of the false information potentially pay a high price if they ignore the warning," says Cunha.

Only trust those you know?

To find out how Siberian jays identify and respond to this type of deception, the researchers examined a population of wild Siberian jays in northern Sweden. They attracted experienced individuals to a feeding site and recorded video footage of what happened. As soon as such an experienced individual visited the feeder, a loudspeaker played recordings of Siberian jays' warning calls designating a perched hawk. These calls were recordings from former members of the visitor's own group, birds from neighbouring territories, or birds that the visitor had never encountered before. Using the video recordings, the researchers measured how long it took the visitor to leave and return to the feeder.

These "playback experiments" demonstrated that experienced Siberian jays responded quicker and took longer to return to the feeder when hearing warning calls of a former member of their own group than when exposed to warning calls of neighbouring groups or previously unknown individuals. "Siberian jays thus have a simple rule to avoid being tricked: They only trust the warning calls from members of their own group, meaning cooperation partners. Familiarity alone is not enough, otherwise the birds would also have trusted the calls of their neighbours," Griesser explains.

Deception as a possible factor in language and dialect formation

Michael Griesser draws a comparison to humans and their languages and dialects. Just like Siberian jays, humans preferentially trust others who belong to the same group as themselves and are therefore more likely to be cooperation partners. "It could thus very well be the case that vulnerability to deception has driven the rapid diversification of human languages and facilitated the formation of dialects, as these allow the identification of local cooperation partners," Griesser suggests.

Credit: 
University of Konstanz

Foster care, homelessness are higher education hurdles

A college education is estimated to add $1 million to a person's lifetime earning potential, but for some students the path to earning one is riddled with obstacles. That journey is even more difficult for students who have been in the foster care system or experienced homelessness, according to a new study from the University of Georgia.

But the more college administrators and faculty know about these students' problems, the more they can do to ease the burden.

Getting into universities in the first place can frequently be a challenge for students who've had unstable home lives, said David Meyers, co-author of the study.

"Research tells us that every time a student moves from one foster care placement to another, they lose six months of educational progress," said Meyers, a public service associate in the J.W. Fanning Institute for Leadership Development. "That's a pretty serious setback. It's a challenge for them to participate in after-school activities or athletics. Their college resume is not going to be as strong as those students who don't face those same challenges."

It's a similar struggle for students who've experienced homelessness. For those who beat the odds, getting into college is just the start of a whole new set of hurdles. The added stressors of having to figure out how to pay for courses, books and housing once they get there--something many of their classmates don't have to think about--take a tremendous toll.

"Having to act like an adult when you're still a kid presents huge challenges for students trying to get into college," said Kim Skobba, co-author of the paper and an associate professor in the College of Family and Consumer Sciences. "But then when you get to college, you're still on your own."

Entirely on their own

The study, published in the Journal of Adolescent Research, focuses on the experiences of 27 college students, all attending four-year institutions, who had been in foster care, experienced homelessness or both. The researchers conducted a series of three in-depth interviews with each participant over the course of one academic year, and several clear themes emerged.

These students all had to "get by" largely on their own. They often were without parental guidance or support during high school, and in college they were entirely on their own. Many took jobs, sometimes going to school full time while also working full or nearly full-time hours.

One student described having six classes while also working 40 hours a week, saying, "I kept breaking down. ... I was staying up to about 2 or 4 in the morning doing homework and waking up at 7." (This type of experience was more common among students who had been homeless than those who were in foster care at the time of their high school graduation.)

One of the biggest expenses for all the students in the study was paying for and maintaining stable housing. Eleven of them experienced at least one period of homelessness since beginning college, living in their cars or couch surfing.

Another constant issue was finding money for books and food. Even with scholarship support, many of the students would ask professors whether the book was essential for success in their course and if so would borrow a friend's book or even one of the professor's copies, if possible.

Perhaps not surprisingly, these stressors made it difficult for students to focus on their academics.

"It takes a mental and emotional toll on these students," Meyers said. "We think about it in financial terms, but it really, I think, also shows up in sort of this constant emotional challenge. Being thoughtful, being vigilant, never really having the luxury of being able to set it aside."

Finding solutions

Institutions like UGA are taking steps to address this issue, with programs that provide emotional support while connecting students to resources they might otherwise not know exist.

Embark@UGA, for example, is the campus-based component of Embark Georgia, an effort led by Meyers and Lori Tiller, a colleague at the Fanning Institute. The program is a statewide network that connects the University System of Georgia and Technical College System of Georgia to the Division of Family and Children Services, the Georgia Department of Education, and numerous nonprofit and community organizations seeking to increase college access and retention for students who have experienced foster care or homelessness.

Through Embark, each USG university and technical college campus and every high school in Georgia has a point of contact to help identify and provide resources to homeless and former foster students who need help.

Additionally, scholarships like Let All the Big Dawgs Eat, which provides a food stipend for students, have helped narrow the gap. UGA has also begun using free online textbooks in many courses.

But not all schools have the same resources.

"Expanding programs at the federal level that would serve students who've been in foster care or homeless would really help close that gap," Skobba said. "We also don't want them taking out huge loans because that's not a good financial situation long term. And some kind of financial aid grant program serving this group would make a huge difference."

Another big help? Understanding and awareness from professors that not all students are able to spend hundreds of dollars on textbooks or have a personal laptop to use for class assignments.

"I think I was already a pretty flexible understanding professor, but just realizing that if you're working 40 hours because that's what it takes to stay in school, some things are going to drop from time to time," Skobba said. "Having a little bit of breathing room in your syllabus and assignments is probably beneficial to all students, but it's going to be especially helpful for this group of students."

Credit: 
University of Georgia

Taking a bite out of tooth evolution: Frogs have lost teeth more than 20 times

image: Some frog species have teeth while others are toothless. Still others have a combination of true teeth and toothlike structures. The Solomon Island leaf frog, Cornufer guentheri, has true teeth on its upper jaw and bony fangs on its lower jaw, which do not have enamel or dentin, a dense tissue found in teeth.

Image: 
Daniel Paluh/Florida Museum of Natural History

GAINESVILLE, Fla. --- Scientists have long known that frogs are oddballs when it comes to teeth. Some have tiny teeth on their upper jaws and the roof of their mouths while others sport fanglike structures. Some species are completely toothless. And only one frog, out of more than 7,000 species, has true teeth on both upper and lower jaws.

Now, the first comprehensive study of tooth evolution in frogs is bringing the group's dental history into focus. Florida Museum of Natural History researchers analyzed CT scans of nearly every living amphibian genus to reveal that frogs have lost teeth over 20 times during their evolution, more than any other vertebrate group. Some frog species may have even re-evolved teeth after losing them millions of years before.

Researchers also found a correlation between the absence of teeth in frogs and a specialized diet of small insects, such as ants and termites. Their analysis of frogs' amphibian relatives, the salamanders and the obscure wormlike animals known as caecilians, showed these groups retained teeth on both upper and lower jaws throughout their evolutionary history.

"Through this study, we have really been able to show that tooth loss in vertebrates is largely a story about frogs, with over 20 independent losses," said lead study author Daniel Paluh, a Ph.D. candidate in the University of Florida's department of biology. "Only eight other groups of living vertebrates, including seahorses, turtles, birds and a few mammals, have also evolved toothlessness."

Teeth first evolved more than 400 million years ago, quickly conferring a competitive advantage to animals that had them and leading to the diversification of sharks, bony fish and ultimately the vertebrates that first roamed onto land.

Throughout their long history, teeth have been an important component of vertebrate evolution, yet some groups have done equally well without them. Birds lost their teeth around 100 million years ago with the advent of the beak, and both the largest known vertebrate, the blue whale, and the smallest, a frog from New Guinea, are entirely toothless.

Few researchers have focused on studying frog teeth, however, for the simple reason that they're incredibly small.

"If you open a frog's mouth, chances are you will not see teeth even if they have them, because they're usually less than a millimeter long," or smaller than the tip of a pencil, Paluh said.

That hasn't stopped some people from trying. In his study of the relationships between frog species, the famous 19th-century paleontologist Edward Cope lumped all toothless frogs into the same group, which he called Bufoniformia.

Researchers using modern genetic techniques have since shown that species in Bufoniformia aren't actually closely related, suggesting that the loss of teeth occurred more than once in frog evolution. But there the story stalled.

In the past, accurately determining which frogs had teeth would have required laborious work that irrevocably damaged or destroyed portions of preserved specimens. Frogs are also a highly diverse group, making a comprehensive assessment of their teeth a difficult task.

But Paluh and his colleagues had one major advantage: The Florida Museum leads a massive multi-institutional effort to CT scan 20,000 vertebrate specimens, giving researchers the ability to study animals in ways not previously possible.

The project, called oVert, allows anyone with an Internet connection to access 3D models derived from the scans, which depict distinct features of an organism, including bones, vasculature, internal organs, muscle tissue - and teeth. For Paluh, it meant he could virtually peer into the gape of a frog.

Working remotely during COVID-19 lockdowns, Paluh and fellow members of the museum's Blackburn Lab used oVert scans to carry out the study. To get the clearest picture of changes in teeth over time, the researchers included representatives of all amphibian groups. They analyzed patterns of tooth loss through time using a previously published map of evolutionary relationships between amphibians based on genetic data.
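
The study's phylogenetic methods are described in the paper itself; as a rough, hypothetical illustration of how independent losses can be counted on a tree, the sketch below applies Fitch parsimony to a tiny made-up topology with presence/absence tip states. The tree, the states and the algorithm choice are assumptions for illustration only.

```python
# Toy sketch (not the study's actual method or data): counting the minimum
# number of tooth gains/losses on a small, made-up tree with Fitch parsimony.
# Tips are labelled 1 (teeth present) or 0 (teeth absent).

tree = ("A", ("B", ("C", "D")))                 # hypothetical nested-tuple topology
tip_states = {"A": 1, "B": 0, "C": 1, "D": 0}   # hypothetical tip states

def fitch(node):
    """Return (possible ancestral states, minimum number of changes) below a node."""
    if isinstance(node, str):                   # a tip
        return {tip_states[node]}, 0
    left_states, left_changes = fitch(node[0])
    right_states, right_changes = fitch(node[1])
    shared = left_states & right_states
    if shared:                                  # no change needed at this node
        return shared, left_changes + right_changes
    return left_states | right_states, left_changes + right_changes + 1

states, changes = fitch(tree)
print("minimum number of presence/absence changes on this tree:", changes)  # -> 2
```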

The study provides a powerful example of the research that can be accomplished with open-access data, said David Blackburn, Florida Museum curator of herpetology, Paluh's adviser and senior author of the study.

"We effectively crowdsourced the data collection across our lab, including people that were not in the U.S. at that time," Blackburn said.

Their results showed that far from losing teeth once during their evolution, as suggested by the now-debunked idea of the Bufoniformia, frogs have undergone "rampant tooth loss," Paluh said, with toothlessness popping up in groups as distantly related as toads and poison dart frogs.

The team also noted a tight correlation between the presence or absence of teeth in frogs and their eating habits. While dietary information is scant for many species of frogs, the researchers uncovered a connection between a diet of tiny insects and a lack of teeth.

"Having those teeth on the jaw to capture and hold on to prey becomes less important because they're eating really small invertebrates that they can just bring into their mouth with their highly modified tongue," said Paluh. "That seems to relax the selective pressures that are maintaining teeth."

Some species of poison dart frogs, for example, have evolved to feed mostly on ants and mites that produce toxic compounds, using their sticky, projectile tongues to scoop up their prey and swallow it whole. The frogs are able to store the toxins from their food source and repurpose them for their own use, secreting the compounds through their skin to ward off predators. And the turtle frog, a toothless burrowing species in Australia, tunnels through the maze of underground passages inside termite nests, hunting the insects that constructed them.

Teeth seem to be superfluous for mammals that feed on ants and termites as well. Pangolins and anteaters, which have highly specialized tongues for probing ant and termite nests, are both toothless.

Many questions remain about frogs' tooth biology, including how the genes that regulate their tooth production turn on and off. It's also unclear whether the serrated toothlike structures in frogs that regained these features are actually real teeth, Paluh said. To determine that, scientists will need to take a more in-depth look at these structures, looking for the presence of enamel and other key defining features.

Innovative techniques, such as those used in the oVert project, are beginning to underscore knowledge gaps and limitations like these, but they also open up the field to new discoveries, Blackburn said.

"We now have lots of new questions in my lab inspired by the surprising things turning up from 3D imaging from the oVert project, and those will lead us both back into museum collections and to the field to see what these animals are doing in the wild."

Credit: 
Florida Museum of Natural History

COVID-positive people have more severe strokes, Geisinger-led study finds

DANVILLE, Pa. - Among people who have strokes and COVID-19, there is a higher incidence of severe stroke as well as stroke in younger people, according to new data from a multinational study group on COVID-19 and stroke, led by a team of Geisinger researchers.

The COVID-19 Stroke Study Group's latest report, published in the journal Stroke, focused on a group of 432 patients from 17 countries diagnosed with COVID-19 and stroke. Among this group, the study found a significantly higher incidence of large vessel occlusion (LVO) -- strokes caused by a blockage in one of the brain's major arteries and typically associated with more severe symptoms. Nearly 45% of strokes in the study group were LVOs; in the general population, 24 to 38% of ischemic strokes are LVOs.

The study group also had a high percentage of young patients who had strokes: more than a third were younger than 55, and nearly half were younger than 65. Pre-pandemic general population data showed 13% of strokes occurred in people under 55, and 21% in people younger than 65.

The data showed that less-severe strokes, mostly in critically ill patients or at overwhelmed health centers, were underdiagnosed. This finding is significant, the research team said, as a minor or less-severe stroke may be an important risk factor for a more severe stroke in the future.

"Our observation of a higher median stroke severity in countries with lower healthcare spending may reflect a lower capacity for the diagnosis of mild stroke in patients during the pandemic, but this may also indicate that patients with mild stroke symptoms refused to present to the hospitals," said Ramin Zand, M.D., a vascular neurologist and clinician-scientist at Geisinger and leader of the study group.

Throughout the pandemic, people with COVID-19 have reported symptoms involving the nervous system, ranging from a loss of smell or taste to more severe and life-threatening conditions such as altered mental state, meningitis and stroke. A group of Geisinger scientists and a team of experts from around the world formed the COVID-19 Stroke Study Group shortly after the pandemic began to study the correlation between COVID-19 infection and stroke risk.

Results from the first phase of the study, which included data on 26,175 patients, indicated an overall stroke risk of 0.5% to 1.2% among hospitalized patients with COVID-19 infection. The finding demonstrated that, even though there were increasing reports of patients with COVID-19 experiencing stroke, the overall risk is low.

"Our initial data showed that the overall incidence of stroke was low among patients with COVID-19, and while that hasn't changed, this new data shows that there are certain groups of patients -- for example, younger patients -- who are more affected," said Vida Abedi, Ph.D., a scientist in the department of molecular and functional genomics at Geisinger. "We hope these findings highlight new research directions to better identify patients at risk and help improve the quality of care."

Credit: 
Geisinger Health System

Mass of human chromosomes measured for the first time

image: The spread of 46 human chromosomes measured using X-rays in the study, with colour added.

Image: 
Archana Bhartiya et al/ Chromosome Research

The mass of human chromosomes, which contain the instructions for life in nearly every cell of our bodies, has been measured with X-rays for the first time in a new study led by UCL researchers.

For the study, published in Chromosome Research, researchers used a powerful X-ray beam at the UK's national synchrotron facility, Diamond Light Source, to determine the number of electrons in a spread of 46 chromosomes, which they then used to calculate the chromosomes' mass.

They found that the chromosomes were about 20 times heavier than the DNA they contained - a much larger mass than previously expected, suggesting there might be missing components yet to be discovered.

As well as DNA, chromosomes consist of proteins that serve a variety of functions, from reading the DNA to regulating processes of cell division to tightly packaging two-metre strands of DNA into our cells.

Senior author Professor Ian Robinson (London Centre for Nanotechnology at UCL) said: "Chromosomes have been investigated by scientists for 130 years but there are still parts of these complex structures that are poorly understood.

"The mass of DNA we know from the Human Genome Project, but this is the first time we have been able to precisely measure the masses of chromosomes that include this DNA.

"Our measurement suggests the 46 chromosomes in each of our cells weigh 242 picograms (trillionths of a gram). This is heavier than we would expect, and, if replicated, points to unexplained excess mass in chromosomes."

In the study, researchers used a method called X-ray ptychography, which involves stitching together the diffraction patterns that occur as the X-ray beam passes through the chromosomes, to create a highly sensitive 3D reconstruction. The fine resolution was possible because the beam deployed at Diamond Light Source was billions of times brighter than the Sun (i.e., a very large number of photons passed through at a given time).

The chromosomes were imaged in metaphase, just before they were about to divide into two daughter cells. This is when packaging proteins wind up the DNA into very compact, precise structures.

Archana Bhartiya, a PhD student at the London Centre for Nanotechnology at UCL and lead author of the paper, said: "A better understanding of chromosomes may have important implications for human health.

"A vast amount of study of chromosomes is undertaken in medical labs to diagnose cancer from patient samples. Any improvements in our abilities to image chromosomes would therefore be highly valuable."

Each human cell, at metaphase, normally contains 23 pairs of chromosomes, or 46 in total. Within these are four copies of 3.5 billion base pairs of DNA.
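
The scale of that 20-fold comparison can be checked with back-of-envelope arithmetic. The sketch below assumes an average mass of roughly 650 daltons per base pair, a standard textbook figure rather than a value from the study, and uses the base-pair count quoted above.

```python
# Back-of-envelope estimate of the DNA mass in a metaphase spread, for
# comparison with the measured 242 pg for all 46 chromosomes.
# Assumption (not from the study): ~650 Da average mass per base pair.

AVOGADRO = 6.022e23           # particles per mole
BP_MASS_DALTONS = 650         # assumed average mass of one base pair (Da)
BASE_PAIRS = 4 * 3.5e9        # four copies of 3.5 billion base pairs at metaphase

dna_mass_grams = BASE_PAIRS * BP_MASS_DALTONS / AVOGADRO
dna_mass_picograms = dna_mass_grams * 1e12

print(f"Estimated DNA mass: {dna_mass_picograms:.1f} pg")            # ~15 pg
print(f"Measured mass / DNA mass: {242 / dna_mass_picograms:.0f}x")  # ~16x, in line with the ~20-fold factor above
```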

Credit: 
University College London

Scientists identify mechanism linking traumatic brain injury to neurodegenerative disease

Scientists have revealed a potential mechanism for how traumatic brain injury leads to neurodegenerative diseases, according to a study in fruit flies, and rat and human brain tissue, published today in eLife.

The results could aid the development of treatments that halt the progression of cell damage after brain injury, which can otherwise lead to neurological diseases such as amyotrophic lateral sclerosis (ALS), and Alzheimer's and Parkinson's disease.

Repeated head trauma is linked to a progressive neurodegenerative syndrome called chronic traumatic encephalopathy (CTE). Postmortem tissues from patients with CTE show dysfunctional levels of a molecule called TDP-43, which is also found in ALS, Alzheimer's disease and frontotemporal dementia.

"Although TDP-43 is a known indicator of neurodegeneration, it was not clear how repeated trauma promotes the build-up of TDP-43 in the brain," explains first author Eric Anderson, Postdoctoral Research Associate at the Department of Pediatrics at the University of Pittsburgh, Pennsylvania, US. "We have shown that repetitive brain trauma in fruit flies leads to a build-up of TDP-43. In this study we measured the changes of proteins in the fruit fly brain post injury to identify the molecular pathways that cause this."

From an analysis of 2,000 proteins, the team identified 361 that significantly changed in response to injury. These included components of the nuclear pore complex (NPC) involved in nucleocytoplasmic transport - the shuttling of important cargoes between the cell nucleus and the rest of the cell.

They found that a family of molecules that make up the NPC called nucleoporins (Nups) were increased in both larval and adult flies after injury. When they looked at the distribution pattern of Nups around the edge of the nucleus in fruit fly nerve cord cells, they found it was altered after brain trauma: there were gaps in the nuclear membrane and clumps of Nups. They also found changes in a key enzyme involved in transporting molecules in and out of the nucleus in injured brains. As a result, the transport of fluorescently labelled cargo in and out of the nucleus was impaired.

Having established that brain injury impairs the transport machinery between the nucleus and the rest of the cell, the team looked at whether the build-up of Nups leads to the aggregation of TDP-43 seen in neurodegenerative diseases. They created fruit flies that produce excess Nup protein and then stained the brain cells for the fruit fly version of TDP-43, called Tbph. They found a significant increase in the number of Tbph deposits in brains that had too much Nup compared with normal brains. Moreover, these high levels of Nups were also toxic to the flies, causing decreased motor function and reducing the distance they could climb in a certain timeframe. When the level of Nups was reduced in cells after injury, this improved the flies' climbing ability and lifespan, highlighting an avenue to explore for new treatments.

Finally, the team looked at whether the increased build-up of one Nup molecule (Nup62) was also seen in human brain tissue after injury. They examined postmortem brain tissue from patients with mild and severe CTE, matched to healthy tissue from people of the same age. All of the patients with mild or severe CTE had been involved in sports, while the healthy controls had not. They found that Nup62 was abundant and mislocalized in patients with mild and severe disease, but not in the healthy group, and that the degree of Nup62 aggregation increased with the severity of disease. They also saw similar changes in the distribution of Nup62 in a rat model of traumatic brain injury.

"Our study reveals that traumatic brain injury can disrupt nuclear transport machinery of the cells, which plays an essential role in normal cell functions such as communication," concludes senior author Udai Pandey, associate professor of pediatrics, human genetics and neurology at the University of Pittsburgh School of Medicine. "This suggests that the accumulation of neurodegenerative hallmark proteins caused by injury begins with these nuclear transport defects, and that targeting these defects could be a strategy for preventing trauma-induced neurological disorders."

Credit: 
eLife

Mumpreneur success still requires conventional masculine behaviour

A new study led by Kent Business School, University of Kent, finds that whilst the mumpreneur identity may enable women to participate in the business world and be recognised as 'proper' entrepreneurs, this success is dependent on alignment with the conventional masculine norms of entrepreneurship.

These conventional masculine behaviours include working long hours and an ongoing dedicated commitment to the success of a business.

Published in the International Small Business Journal and based on an interview study of women business owners, the study highlights the interviewees' belief that entrepreneurship and motherhood are compatible but challenges the claim in existing research that mumpreneurship represents a new feminised identity and a different way of doing business.

The study conceptualises the mumpreneur as the hybrid combination of masculine and feminine behaviours, examining the tensions that emerge in simultaneously running a business and a family, and considering if these are managed through the curtailment of entrepreneurial activity.

The study found that, for those women who see themselves as entrepreneurial mums, entrepreneurial curtailment is not an option, and conventional masculine behaviours are valued more highly than feminine ones in the context of successful business development.

The consequences of this hybrid behaviour are significant:

To be identified as a 'normal' entrepreneur, feminine behaviours are accepted alongside masculine commitment to business, so long as they are not disruptive of the latter.

Mumpreneurs must balance both behaviours yet avoid engaging in excessive feminine conduct that may restrict business development or devalue their entrepreneurial activities.

Mumpreneurs perceived as 'too feminine' in their business activities are marginalised as unengaged in 'proper' entrepreneurship, creating a hierarchy of business identities.

Patricia Lewis, Professor of Management at the University of Kent and Principal Investigator said: 'The mumpreneur identity has undoubtedly had a positive impact on the way women's entrepreneurship is viewed. Nevertheless, our study demonstrates that it has not disrupted the dominant discourses of masculine entrepreneurship or gendered power relations in the field. Women are still in a position of being committed to both sides of the balance between business and motherhood but are devalued as entrepreneurs when devoting time to their children rather than business.'

Credit: 
University of Kent

How AI could alert firefighters of imminent danger

image: NIST firefighters douse flames bursting from a building as a flashover occurs during an experiment.

Image: 
NIST

Firefighting is a race against time. Exactly how much time? For firefighters, that part is often unclear. Building fires can turn from bad to deadly in an instant, and the warning signs are frequently difficult to discern amid the mayhem of an inferno.

Seeking to remove this major blind spot, researchers at the National Institute of Standards and Technology (NIST) have developed P-Flash, or the Prediction Model for Flashover. The artificial-intelligence-powered tool was designed to predict and warn of a deadly phenomenon in burning buildings known as flashover, when flammable materials in a room ignite almost simultaneously, producing a blaze only limited in size by available oxygen. The tool's predictions are based on temperature data from a building's heat detectors, and, remarkably, it is designed to operate even after heat detectors begin to fail, making do with the remaining devices.

The team tested P-Flash's ability to predict imminent flashovers in over a thousand simulated fires and more than a dozen real-world fires. The research, just published in the Proceedings of the AAAI Conference on Artificial Intelligence, suggests the model shows promise in anticipating simulated flashovers, and the real-world data helped the researchers identify an unmodeled physical phenomenon that, if addressed, could improve the tool's forecasting in actual fires. With further development, P-Flash could enhance the ability of firefighters to hone their real-time tactics, helping them save building occupants as well as themselves.

Flashovers are so dangerous in part because it's challenging to see them coming. There are indicators to watch, such as increasingly intense heat or flames rolling across the ceiling. However, these signs can be easy to miss in many situations, such as when a firefighter is searching for trapped victims with heavy equipment in tow and smoke obscuring the view. And from the outside, as firefighters approach a scene, the conditions inside are even less clear.

"I don't think the fire service has many tools technology-wise that predict flashover at the scene," said NIST researcher Christopher Brown, who also serves as a volunteer firefighter. "Our biggest tool is just observation, and that can be very deceiving. Things look one way on the outside, and when you get inside, it could be quite different."

Computer models that predict flashover based on temperature are not entirely new, but until now, they have relied on constant streams of temperature data, which are obtainable in a lab but not guaranteed during a real fire.

Heat detectors, which are commonly installed in commercial buildings and can be used in homes alongside smoke alarms, are for the most part expected to operate only at temperatures up to 150 degrees Celsius (302 degrees Fahrenheit), far below the 600 degrees Celsius (1,100 degrees Fahrenheit) at which a flashover typically begins to occur. To bridge the gap created by lost data, NIST researchers applied a form of artificial intelligence known as machine learning.

"You lose the data, but you've got the trend up to where the heat detector fails, and you've got other detectors. With machine learning, you could use that data as a jumping-off point to extrapolate whether flashover is going to occur or already occurred," said NIST chemical engineer Thomas Cleary, a co-author of the study.

Machine-learning algorithms uncover patterns in large datasets and build models based on their findings. These models can be useful for predicting certain outcomes, such as how much time will pass before a room is engulfed in flames.

To build P-Flash, the authors fed their algorithm temperature data from heat detectors in a burning three-bedroom, one-story ranch-style home -- the most common type of home in a majority of states. This building was of a digital rather than brick-and-mortar variety, however.

Because machine learning algorithms require great quantities of data, and conducting hundreds of large-scale fire tests was not feasible, the team burned this virtual building repeatedly using NIST's Consolidated Model of Fire and Smoke Transport, or CFAST, a fire modeling program validated by real fire experiments, Cleary said.

The authors ran 5,041 simulations, with slight but critical variations between each. Different pieces of furniture throughout the house ignited with every run. Windows and bedroom doors were randomly configured to be open or closed. And the front door, which always started closed, opened up at some point to represent evacuating occupants. Heat detectors placed in the rooms produced temperature data until they were inevitably disabled by the intense heat.

To learn about P-Flash's ability to predict flashovers after heat detectors fail, the researchers split up the simulated temperature recordings, allowing the algorithm to learn from a set of 4,033 while keeping the others out of sight. Once P-Flash had wrapped up a study session, the team quizzed it on a set of 504 simulations, fine-tuned the model based on its grade and repeated the process. After attaining a desired performance, the researchers put P-Flash up against a final set of 504.
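
NIST's P-Flash code is not reproduced here, but the training regime described above follows a standard supervised-learning pattern. The sketch below is a minimal, hypothetical illustration of that pattern using scikit-learn: placeholder features stand in for detector temperatures before failure, the classifier choice is an assumption, and the only element taken from the article is the 4,033/504/504 split.

```python
# Hypothetical sketch of the train / validate / test procedure described above.
# X holds per-simulation features (e.g., detector temperatures before failure);
# y holds labels (1 = flashover within the next minute, 0 = not). All data here
# are placeholders, not the CFAST simulations.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5041, 20))                          # 5,041 simulated fires
y = (X[:, 0] + rng.normal(size=5041) > 0).astype(int)    # placeholder labels

# Split mirroring the study: 4,033 training, 504 validation, 504 held-out test runs.
X_train, y_train = X[:4033], y[:4033]
X_val,   y_val   = X[4033:4537], y[4033:4537]
X_test,  y_test  = X[4537:],     y[4537:]

model = GradientBoostingClassifier()                     # classifier choice is an assumption
model.fit(X_train, y_train)
print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))

# After tuning against the validation set, report performance once on the test set.
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```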

The researchers found that the model correctly predicted flashovers one minute beforehand for about 86% of the simulated fires. Another important aspect of P-Flash's performance was that even when it missed the mark, it mostly did so by producing false positives -- predictions that an event would happen earlier than it actually did -- which is better than the alternative of giving firefighters a false sense of security.

"You always want to be on the safe side. Even though we can accept a small number of false positives, our model development places a premium on minimizing or, better yet, eliminating false negatives," said NIST mechanical engineer and corresponding author Wai Cheong Tam.

The initial tests were promising, but the team had not grown complacent.

"One very important question remained, which was, can our model be trusted if we only train our model using synthetic data?" Tam said.

Luckily, the researchers came across an opportunity to find answers in real-world data produced by Underwriters Laboratories (UL) in a recent study funded by the National Institute of Justice. UL had carried out 13 experiments in a ranch-style home matching the one P-Flash was trained on, and as with the simulations, ignition sources and ventilation varied between each fire.

The NIST team trained P-Flash on thousands of simulations as before, but this time they swapped in temperature data from the UL experiments as the final test. And this time, the predictions played out a bit differently.

P-Flash, attempting to predict flashovers up to 30 seconds beforehand, performed well when fires started in open areas such as the kitchen or living room. But when fires started in a bedroom, behind closed doors, the model could almost never tell when flashover was imminent.

The team identified a phenomenon called the enclosure effect as a possible explanation for the sharp drop-off in accuracy. When fires burn in small, closed-off spaces, heat has little ability to dissipate, so temperature rises quickly. However, many of the experiments that form the basis of P-Flash's training material were carried out in open lab spaces, Tam said. As such, temperatures from the UL experiments shot up nearly twice as fast as the synthetic data.

Although the results revealed a weak spot in the tool, the team finds them encouraging and a step in the right direction. The researchers' next task is to zero in on the enclosure effect and represent it in simulations. To do that, they plan to perform more full-scale experiments themselves.

When its weak spots are patched and its predictions sharpened, the researchers envision that their system could be embedded in hand-held devices able to communicate with detectors in a building through the cloud, Tam said.

Firefighters would not only be able to tell their colleagues when it's time to escape, but they would also be able to identify danger spots in a building before they arrive and adjust their tactics to maximize their chances of saving lives.

Credit: 
National Institute of Standards and Technology (NIST)

Study finds that a firm's place in a supply chain influences lending and borrowing

EUGENE, Ore. -- June 1, 2021 -- Businesses typically rely on banks and financial markets for financing, but credit provided by suppliers also can play an important role, especially in manufacturing. Yet why firms lend and borrow extensively from each other is still an open question.

In a paper online ahead of print in the Journal of Financial Economics, "Trade Credit and Profitability in Production Networks," Youchang Wu, an associate professor at the University of Oregon, and coauthor Michael Gofman, an assistant professor at the University of Rochester, examined trade credit from a new angle.

They noted that for an average nonfinancial firm in North America, the outstanding amount of trade credit it receives from suppliers is about 21 percent of annual production costs. Moreover, most firms simultaneously borrow from their suppliers and lend to their customers, with the average outstanding amount of trade credit provided to customers at around 15 percent of annual sales.

Previous studies on trade credit, they noted, have focused on a firm's role either as a lender or a borrower of trade credit, ignoring the fact that trade credit flows along supplier-customer links in complex production networks.

Using a comprehensive database of supplier-customer relationships from 2003 to 2018, Wu and Gofman analyzed more than 200,000 supply chains formed by more than 5,600 nonfinancial firms. By locating a firm in the supply chain, their study accounts for a firm's dual role as a supplier and a customer. This novel approach allowed the researchers to uncover new details about trade credit within and across supply chains.

In particular, they found that within the supply chain, more upstream firms borrow more from suppliers, lend more to customers and hold more net trade credit, despite appearing to have weaker financing capacity than more downstream firms.

The length of the supply chains they examined varies significantly.

An example of a longer supply chain is one in which Intermolecular Inc. supplies advanced materials to Micron Technology Inc., which creates computer memory and computer data storage that it provides to Nvidia Corp., which uses them to manufacture graphics cards it supplies to Tesla. In contrast, a short supply chain example is one in which Sensata Technology provides sensors directly to Tesla. In longer supply chains, firms tend to be more profitable, and the increase in trade credit provision from the lower to the upper level of the chain is more gradual.

Both within and across supply chains, the authors noted that there is an almost one-to-one correspondence between the variation in the trade credit a firm provides and the variation in the trade credit it receives. These findings are less consistent with the idea of financially strong firms lending to financially weak firms, an implication of the financing advantage theory.

"Our findings are more consistent with the recursive moral hazard theory of trade credit," said Wu, who teaches in the Department of Finance at the Lundquist College of Business and is the John B. Rogers Research Scholar and coordinator of the UO's finance doctoral program.

"This theory argues that more upstream firms have more severe incentive problems, especially when they are not that profitable, because the quality of their products is revealed only after a long delay," he said. "Thus, more net trade credit provided by upstream firms helps to align incentives."

The authors did, however, find evidence that a firm's provision of trade credit is related to its financial status during an economic downturn. For instance, during the 2008-2009 financial crisis, upstream firms experienced a larger decline in profit margins than did downstream firms, and net provision of trade credit dropped significantly, suggesting that financial strength plays a more important role in determining the provision and use of trade credit during a crisis period.

Overall, Wu and Gofman's systematic study highlights variations in trade credit practices across firms, which can help both researchers and practitioners better understand the role of trade credit in production networks as well as examine other economic and financial questions related to supply chains.

Credit: 
University of Oregon

Research team investigates ride-sharing decisions

image: Illustration

Image: 
Christiane Kunath

In ride-sharing, trips of two or more customers with similar origins and destinations are combined into a single cab ride. The concept can make a significant contribution to sustainable urban mobility. However, its acceptance depends on human needs and behavior. For example, while shared rides typically offer a financial advantage, passengers might suffer drawbacks in terms of comfort and trip duration. These factors give rise to different adoption behaviors that explain usage patterns observed in 360 million real-world ride requests from New York City and Chicago in 2019. The study has now been published in the journal Nature Communications.

Ride-sharing (or ride-pooling) is most efficient in places with high demand and a large number of similar ride requests. Still, it has been difficult to determine whether, and under what conditions, people are actually willing to adopt ride-sharing. In their study, the researchers decipher the complex incentive structure underlying the decision of whether or not to adopt ride-sharing. In a game-theoretic model, they describe the sharing adoption of all users who book rides from the same location.

The researchers demonstrate how interactions between those individuals lead to two qualitatively different patterns of acceptance. In one, willingness to share rides is consistently high. In the other, willingness to share rides decreases as the overall demand for rides increases. If there are only a few users in the system, the number of ride-sharing bookings increases with the number of ride requests; if there are many users, usage levels out. The share of shared ride requests therefore decreases, despite optimized routing with shorter detours for passengers when demand is high.

"Passengers speculate on being able to take advantage of the cheaper fare when sharing a ride, but they actually hope to be transported alone and thus directly from A to B due to low demand for rides," explains David Storch, a doctoral student at the Chair of Network Dynamics and lead author of the study. When demand is high, for example during typical rush hours, the prospect of being transported as a single passenger is lower - "Passengers almost certainly lose comfort as they share a ride. They tend to book the more expensive fare more often to travel alone."

In an analysis of more than 360 million real trip requests in New York City and Chicago, the researchers were able to identify the demand patterns they had previously found in their model, supporting the validity of their findings. The analysis shows that, depending on the starting point of the trip, both adoption patterns exist in parallel in the two cities. Malte Schröder, research associate at the Chair, interprets the results as follows: "Since both adoption patterns coexist in cities, a moderate increase of the financial incentives is probably already sufficient to strongly increase the acceptance of ride-sharing in other places and for other user groups."

Credit: 
Technische Universität Dresden

Direct action of SARS-CoV-2 on organs may cause exacerbated immune response in children

image: Electron micrograph of the brain of a child with MIS-C associated with COVID-19 and encephalopathy: immunohistochemical detection of SARS-CoV-2 nucleocapsid antigen in brain endothelial cells, with cytoplasm stained red.

Image: 
Amaro Nunes Duarte Neto

Besides common symptoms such as fever, cough and respiratory distress, some children have an atypical form of COVID-19 known as multisystem inflammatory syndrome in children (MIS-C), characterized by persistent fever and inflammation of several organs, such as the heart and intestines and, to a lesser extent, the lungs. Since the onset of the pandemic, MIS-C has increasingly been associated with severe cases and deaths in several countries, including Brazil.

Researchers affiliated with the University of São Paulo's Medical School (FM-USP) and Adolfo Lutz Institute in Brazil performed the largest series of autopsies to date on children who died from COVID-19. Their findings show that the ability of SARS-CoV-2 to invade and damage several organs is one of the factors leading to MIS-C, producing a wide array of clinical manifestations that include abdominal pain, heart failure and seizures, as well as persistent fever.

The study was supported by the São Paulo Research Foundation (FAPESP) and reported in an article in EClinicalMedicine, a Lancet group journal.

"The direct action of the virus on the tissue of various organs is one of the reasons why children with this syndrome have an exacerbated and altered inflammatory response to infection," Marisa Dolhnikoff, last author of the article, told. Dolhnikoff is a professor at FM-USP.

The researchers performed autopsies on five children who died from COVID-19 in São Paulo: one boy and four girls aged between 7 months and 15 years. Two were seriously ill before being infected by SARS-CoV-2, one with cancer and the other with a congenital genetic disorder.

The other three were previously healthy and developed MIS-C with different clinical manifestations. One had myocarditis (inflammation of the heart muscle), another had colitis (inflammation of the bowel), and a third had acute encephalopathy (brain damage) with seizures.

A minimally invasive technique, ultrasound-guided with coaxial and punch needles, was used to collect tissue samples from all major organs. The presence of SARS-CoV-2 in the samples was determined by real-time reverse transcription polymerase chain reaction (RT-PCR, the technique also used to diagnose COVID-19) and by immunohistochemistry, in which antibodies were deployed to detect the viral nucleocapsid protein (N) and one of the spike proteins (S2).

Histopathological analysis showed that both children with severe pre-existing disease had "classic" severe COVID-19, characterized by acute respiratory distress due to extensive damage to the lung alveoli caused by SARS-CoV-2. The virus was also detected in other organs.

The three previously healthy children were found to have inflammatory lesions outside the lungs, such as myocarditis and colitis. The virus was detected in heart endothelial and muscle cells from the patient with myocarditis, in intestinal tissue from the child with acute colitis, and in brain tissue from the patient with acute encephalopathy.

"We found that SARS-CoV-2 had spread throughout the body via the blood vessels, infecting various types of cell and tissue in these children. The clinical manifestations varied according to the organ targeted," Dolhnikoff said. "It's important for pediatricians to watch out for these possible differences in the clinical manifestations of COVID-19 in children of all ages so that the infection is diagnosed and MIS-C can be treated early on."

MIS-C may occur a few days or weeks after infection by SARS-CoV-2. The runaway inflammatory reaction was thought to occur whether or not the virus was still in the organism, as a result of the immune response, but the study found evidence that the manifestations of MIS-C are also triggered by the direct action of the virus on the cells of infected organs.

"We're not saying everything described to date about pediatric multisystem inflammatory syndrome is wrong. We're merely adding the observation that the damage done to tissues by the virus is associated with this exacerbated inflammatory response in children, and is very probably a key component in its induction," Dolhnikoff said.

Why some children respond to infection by SARS-CoV-2 with the exacerbated inflammation that characterizes MIS-C is unknown, but the answer may include a genetic component.

Endothelial cells targeted

The researchers found that the virus's main targets included endothelial cells, which line blood vessels of all sizes and regulate exchanges between the bloodstream and the surrounding tissues. "One hypothesis is that when an endothelial cell is infected it activates bloodstream mediators that trigger an inflammatory cascade and the other reactions observed in children with MIS-C, such as persistent fever, colitis, myocarditis and encephalitis," said Amaro Nunes Duarte Neto, first author of the article. Duarte Neto is an infectious disease specialist and pathologist at FM-USP and Adolfo Lutz Institute.

"The virus induces these reactions in the cells, but it's the immune system that produces a response with adverse effects on the patient," he said. "It's not an autoimmune response, however, like what we see in lupus, psoriasis or inflammatory arthritis, which also involve damage to blood vessels. In MIS-C, the virus is involved directly."

Electron microscopy analysis by Elia Caldini, a professor at FM-USP, supported these conclusions. Electron microscopes magnify viral particles more than 50,000 times directly, without the use of reagents. The technique enabled the researchers to describe alterations in cell cytoplasm associated with the presence of the virus.

"To confirm our identification of the virus unequivocally, we were the first to use immunolabeling of SARS-CoV-2 in conjunction with electron microscopy," Caldini said. "We coupled colloidal gold particles to the specific antibodies used in light microscopy against structural viral proteins."

The researchers also detected microthrombi (small blood clots) for the first time in children. This had already been observed and reported in adults. "Phenomena relating to blood clotting should always be considered in COVID-19. Our electron microscopy analysis showed that capillary blood vessels in all organs were obstructed by accumulated red and white blood cells, cellular debris, and fibrin, with disruption of the endothelial wall," Caldini said.

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo

A new model enables the recreation of the family tree of complex networks

In a new study published in Proceedings of the National Academy of Sciences (PNAS), a research team of the Institute of Complex Systems of the University of Barcelona (UBICS) analysed the time evolution of real complex networks and developed a model in which the emergence of new nodes can be related to pre-existing nodes, similarly to the evolution of species in biology.

This new study analyses the time evolution of the citation network in scientific journals and the international trade network over a 100-year period. According to M. Ángeles Serrano, ICREA researcher at UBICS, "what we observe in these real networks is that both grow in a self-similar way, that is, their connectivity properties remain invariable over time, so that the network structure is always the same, while the number of nodes increases".

This self-similarity in growth, which is surprising in itself, has been explained by the researchers using a model named geometric branching growth (GBG). In this model, new nodes descend from pre-existing nodes, much as in a family tree. In the world trade network, for instance, countries are the nodes and trade transactions the links, and nodes branch when countries split. The key property that characterizes the evolution of the systems under study, and therefore the model, is inheritance: when a country divides, the new sovereign countries inherit the richness and the trade partners of the original state.
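
The full geometric branching growth model is specified in the PNAS paper; the sketch below is only a minimal, hypothetical toy version of the branching-with-inheritance idea described above, in which a randomly chosen node splits and the offspring node inherits part of its parent's attribute (its "richness") and of its connections. The seed network, splitting rule and parameters are assumptions for illustration.

```python
# Minimal toy sketch (not the published GBG model) of branching growth with
# inheritance: an existing node splits, and the new node inherits a share of
# the parent's links and attribute.
import random
import networkx as nx

def branch(graph, attribute, node, split=0.5):
    """Split `node`: add a child that inherits links and part of the attribute."""
    child = max(graph.nodes) + 1
    graph.add_node(child)
    for neighbour in list(graph.neighbors(node)):
        if random.random() < 0.5:              # child inherits roughly half the links
            graph.add_edge(child, neighbour)
    graph.add_edge(node, child)                # keep parent and child connected
    attribute[child] = attribute[node] * (1 - split)
    attribute[node] *= split                   # the attribute ("richness") is divided

random.seed(1)
g = nx.cycle_graph(5)                          # small seed network
richness = {n: 1.0 for n in g.nodes}
for _ in range(20):                            # grow the network by repeated branching
    branch(g, richness, random.choice(list(g.nodes)))

print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")
```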

This model is related to a previous study that enabled the production of self-similar reduced versions of complex networks through geometric renormalization. In those previous studies, scientists found that connectivity in complex networks at different scales is regulated by the same principles. "What we see in the new paper," notes the researcher, "is that these same principles hold over time too."

When the two models, GBG and geometric renormalization, are combined, copies of the original network can be produced over a wide range of sizes, both larger and smaller than the original. "This way, we could predict descendant and ascendant nodes, or study phenomena that depend on the size of the network," highlights Serrano. "Networks present a fractal structure in time and space," the expert adds.

These branching processes are the basis of the complex evolution of many real systems. "In short, both models enable us to understand interactions in real systems at different scales, one of the keys to understanding and predicting what their evolution will be like," concludes the expert.

Credit: 
University of Barcelona

Canadian prescription opioid users experience gaps in access to care

Stigma and high care needs can present barriers to the provision of high-quality primary care for people with opioid use disorder (OUD) and those prescribed opioids for chronic pain. A study published in PLOS Medicine by Tara Gomes of the Li Ka Shing Knowledge Institute of St. Michael's Hospital, Toronto, Canada, and colleagues suggests that people treated for an opioid use disorder were less likely to find a new primary care provider (PCP) within one year of the termination of their enrolment with a previous physician.

People with substance use disorders often have complex medical needs, requiring regular access to primary care physicians. However, some physicians may be less willing to treat these patients due to stigma, high health care needs, or discomfort prescribing opioids. To assess differences in access to primary care, researchers conducted a retrospective cohort study, analysing records of 154,790 Ontario residents who lost their enrolment with a primary care physician between 2016 and 2017. They assigned individuals to one of three groups based on their history of opioid use: no opioid use, opioid pain therapy, and opioid agonist therapy (for OUD). The authors then analyzed the number of people from each group who had found a primary care provider within a year.

The researchers found that people receiving opioid agonist therapy were 45% less likely to secure another primary care physician in the following year compared with opioid-unexposed individuals. The study was limited in that the authors were unable to identify people with OUD who were not in treatment and could not identify people who received care from walk-in clinics. However, the research is an important step in identifying inequities in access to primary care and in the management of complex chronic conditions.

According to the authors, "Ongoing efforts are needed to address stigma and discrimination faced by people who use opioids within the health care system, and to facilitate access to high quality, consistent primary care services for chronic pain patients and those with OUD".

Dr. Gomes also notes, "There are considerable barriers to accessing primary care among people who use opioids, and this is most apparent among people who are being treated for an opioid use disorder. This highlights how financial disincentives within the healthcare system, and stigma and discrimination against people who use drugs introduce barriers to high quality care".

Credit: 
PLOS

Healthy lifestyle linked to better cognition for oldest adults -- regardless of genetic risk

A new analysis of adults aged 80 years and older shows that a healthier lifestyle is associated with a lower risk of cognitive impairment, and that this link does not depend on whether a person carries a particular form of the gene APOE. Xurui Jin of Duke Kunshan University in Jiangsu, China, and colleagues present these findings in the open-access journal PLOS Medicine.

The APOE gene comes in several different forms, and people with a form known as APOE ε4 have an increased risk of cognitive impairment and Alzheimer's disease. Previous research has also linked cognitive function to lifestyle factors, such as smoking, exercise, and diet. However, it has been unclear whether the benefits of a healthy lifestyle are affected by APOE ε4, particularly for adults over 80 years of age.

To clarify the relationship between APOE ε4 and lifestyle, Jin and colleagues examined data from 6,160 adults aged 80 or older who had participated in a larger, ongoing study known as the Chinese Longitudinal Healthy Longevity Survey. The researchers statistically analyzed the data to investigate links between APOE ε4, lifestyle, and cognition. They also accounted for sociodemographics and other factors that could impact cognition.
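The release does not specify the statistical model, but an adjusted analysis of this kind is commonly implemented as a logistic regression of cognitive impairment on lifestyle category and APOE ε4 carrier status, with an interaction term to test whether the lifestyle association depends on genotype. The sketch below is a hedged illustration on simulated data, not the authors' code: all variable names, prevalences, and effect sizes are assumptions chosen only to mirror the direction of the reported findings.

```python
# Illustrative sketch (simulated data, NOT the study's dataset or code):
# logistic regression of cognitive impairment on lifestyle, APOE e4 status,
# and their interaction, adjusting for a couple of covariates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 6160  # sample size taken from the press release; everything else is simulated

df = pd.DataFrame({
    "lifestyle": rng.choice(["unhealthy", "intermediate", "healthy"], size=n),
    "apoe_e4": rng.binomial(1, 0.2, size=n),   # carrier status (hypothetical prevalence)
    "age": rng.integers(80, 106, size=n),
    "sex": rng.choice(["female", "male"], size=n),
})

# simulate an outcome roughly consistent with the direction of the reported effects
linear_pred = (-1.0
               - 0.8 * (df.lifestyle == "healthy")
               - 0.3 * (df.lifestyle == "intermediate")
               + 0.2 * df.apoe_e4
               + 0.03 * (df.age - 80))
df["impaired"] = rng.binomial(1, 1 / (1 + np.exp(-linear_pred)))

# main effects plus a lifestyle-by-genotype interaction; a non-significant interaction
# would indicate that the lifestyle association does not differ by APOE e4 status
model = smf.logit(
    "impaired ~ C(lifestyle, Treatment(reference='unhealthy')) * apoe_e4 + age + C(sex)",
    data=df).fit(disp=False)
print(np.exp(model.params))  # odds ratios
```

Here the exponentiated lifestyle coefficients play the role of the adjusted associations reported in the study, and the interaction terms correspond to the test of whether those associations differ between ε4 carriers and non-carriers.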

The analysis confirmed that participants with healthy lifestyles or intermediately healthy lifestyles were significantly less likely to have cognitive impairment than those with an unhealthy lifestyle, by 55 and 28 percent, respectively. In addition, participants with APOE ε4 were 17 percent more likely to have cognitive impairment than those with other forms of APOE.

A previous study had suggested that favorable lifestyle profiles are associated with a lower risk of dementia than unfavorable profiles in individuals at low and intermediate genetic risk, but not in those at high genetic risk. In the present investigation, however, the link between lifestyle and cognitive impairment did not vary significantly with APOE ε4 status, the marker of genetic dementia risk used here. This suggests that maintaining a healthier lifestyle could be important for preserving cognitive function in adults over 80 years of age, regardless of genetic risk.

This cross-sectional study underscores the importance of a healthy lifestyle for cognitive health. While further research will be needed to validate these findings in other populations, the study could help inform efforts to maintain cognitive function in the oldest adults.

As a next step, the team will explore this association using the Alzheimer's disease polygenic risk score (AD-PRS) and examine how AD-PRS and lifestyle interact to affect cognition using longitudinal data.

Credit: 
PLOS

The evolutionary fates of supergenes unmasked

image: Supergenes govern colony social form in Solenopsis invicta (top left; SB/Sb alleles), heterostyly in Primula veris (top right; S/s alleles), and polymorphic female-limited mimicry in Papilio polytes (bottom right; H/h alleles).

Image: 
Image courtesy of Tanja Slotte. Photo Credits: Alex Wild (Solenopsis), Tanja Slotte (Primula), and Krushnamegh Kunte (Papilio).

While the term "supergene" may bring to mind the genetic hocus-pocus of Peter Parker's transformation into Spider-Man, supergenes are actually fairly common phenomena in biology. A supergene is a genomic region containing multiple genes or genetic elements that are tightly linked, allowing genetic variants across the region to be co-inherited. Supergenes may arise when there is a clear benefit to inheriting specific combinations of biological traits together. Perhaps the most well-known examples of supergenes are sex chromosomes, which allow traits that are beneficial to the reproductive success of one sex to be co-inherited. In humans, this explains the prevalence of male-specific genes on the Y chromosome. While the concept of supergenes arose nearly a century ago, until recently the study of their origin, evolution, and eventual fate was largely theoretical. Now, however, thanks to advances in genomic sequencing and computational biology, scientists can put those theories to the test with real-world data. In a recent review published in Genome Biology and Evolution titled "The genomic architecture and evolutionary fates of supergenes", Associate Professor Tanja Slotte and her colleagues at Stockholm University in Sweden discuss new findings in the field of supergene evolution and reveal how the genomic architecture of a supergene is inextricably tied to its evolutionary fate.

"There is a rich history in evolutionary biology when it comes to the study of supergenes," says Slotte. "What I like about this topic is that there are theoretical models of supergene evolution to draw on, and at the same time, we can now thoroughly test those expectations empirically using genomic data or explore the expected effects of different genomic architectures using simulations." In particular, these new approaches can be used to assess the validity of some of the more well-established theories about supergenes.

Classical models posit that supergenes arise following the establishment of mutations at a minimum of two sites, followed by the sequential accumulation of additional mutations. Selection for specific combinations of variants acts to suppress recombination, thus strengthening the linkage between mutations. This suppression of recombination often occurs through inversions, in which a genomic region is flipped on the chromosome, effectively inhibiting recombination. Because recombination enables the removal of deleterious mutations, however, once it has stopped, the supergene may degenerate through the accumulation of single-nucleotide mutations, as well as insertions, deletions, and the replication of transposable elements. Notably, this last characteristic of supergenes has made them particularly difficult to study until now. Notes Slotte, "This is a very good time for studying supergene evolution. Thanks to long-read sequencing and improved bioinformatic methods, we can now obtain high-quality assemblies, including well-assembled supergenic regions. This is not a trivial task, as non-recombining regions are often highly repetitive and therefore difficult to assemble."

In their review, Slotte and her collaborators discuss the ways in which new sequence data have challenged some classical supergene models. "One aspect that I find really fascinating is how empirical genomic studies are still yielding surprises when it comes to the origin and the genetic architecture of classic supergenes," says Slotte. An example is the supergene governing wing pattern mimicry in the butterfly Heliconius numata. H. numata exhibits Müllerian mimicry, displaying one of seven different wing patterns, each of which mimics a different local species of the poisonous butterfly genus Melinaea, thus reinforcing their protection against predators. Recent data show that, rather than arising via the classical model of sequential mutation followed by inversion, the inverted chromosomal arrangement in H. numata arose via introgression from another Heliconius species. According to Slotte, this is a scenario that should be further explored using simulations and modeling.

Another surprise stemmed from recent studies of the S-locus supergene governing heterostyly in primroses. Heterostyly is a common plant adaptation resulting in the presence of two distinct flower morphs (S- and L-morphs) within a species. In each morph, the male and female reproductive organs in the flower are arranged in such a way that it is difficult for an individual plant to fertilize itself, limiting inbreeding and promoting outcrossing. Classical theories posited that S-morph flowers were heterozygotes (carrying two different versions of the S-locus supergene), while L-morph flowers were homozygotes (carrying two identical versions). Instead, new evidence reveals that primroses with S-morph flowers harbor an insertion spanning five genes that is absent from L-morph primroses, making S-morph plants hemizygotes (carrying a single copy of the S-locus).

In their paper, Slotte and colleagues further take advantage of advances in bioinformatics and computational biology to reveal new insights into supergene evolution. "When it comes to delineating expected patterns of evolution," says Slotte, "we benefit enormously from new efficient and flexible simulation software." Indeed, based on the new findings regarding the primrose S-locus supergene, Slotte and her co-authors used SLiM, a forward simulation program, to compare two hypothetical S-locus systems: the classic inversion model and the new model, in which an insertion leads to a supergene that is hemizygous in S-morphs. Their results revealed that the inversion accumulated more than six times more deleterious mutations than the hemizygous region and that each of these mutations on average had more deleterious effects, demonstrating that the specific genomic architecture of a supergene has a powerful effect on its ultimate degeneration and evolutionary fate.
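The authors' simulations were run in SLiM; the toy script below is not that model, but it illustrates the underlying logic with a deliberately crude haplotype-frequency simulation: recessive deleterious mutations on a permanently heterozygous, inversion-like haplotype are effectively hidden from selection and can drift to fixation, whereas mutations in a hemizygous region are exposed to selection and tend to be purged. The function accumulate and all parameter values (population size, mutation rate, selection coefficient) are invented for illustration.

```python
# Crude sketch (NOT the authors' SLiM simulations) of why a sheltered, always-heterozygous
# supergene haplotype accumulates recessive deleterious mutations faster than a
# hemizygous one, in which every mutation is exposed to selection.
import numpy as np

rng = np.random.default_rng(42)

def accumulate(n_copies, s_exposed, generations=10_000, mu=0.1):
    """Count deleterious mutations fixing on a non-recombining haplotype.

    n_copies  -- haplotype copies in the population
    s_exposed -- selection coefficient actually 'seen' by selection
                 (about 0 for recessive mutations sheltered in permanent heterozygotes,
                  the full coefficient for a hemizygous region)
    mu        -- expected new mutations entering the region per generation
    """
    fixed = 0
    freqs = []  # frequencies of currently segregating mutations
    for _ in range(generations):
        freqs.extend([1.0 / n_copies] * rng.poisson(mu))        # new mutations
        next_freqs = []
        for p in freqs:
            p_sel = p * (1 - s_exposed) / (1 - p * s_exposed)   # selection
            p_new = rng.binomial(n_copies, p_sel) / n_copies    # genetic drift
            if p_new >= 1.0:
                fixed += 1
            elif p_new > 0.0:
                next_freqs.append(p_new)
        freqs = next_freqs
    return fixed

sheltered = accumulate(n_copies=100, s_exposed=0.0)   # inversion-like: mutations masked
exposed = accumulate(n_copies=100, s_exposed=0.05)    # hemizygous-like: mutations visible
print(f"fixed deleterious mutations -- sheltered: {sheltered}, hemizygous: {exposed}")
```

In a run of this toy, the sheltered haplotype typically fixes several mutations while the hemizygous one fixes few or none, echoing in a much cruder way the contrast the authors quantified with SLiM, where dominance, linked selection, and realistic genomic architectures are modeled explicitly.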

In addition to the insights revealed by new sequence data and simulations, scientists have an unprecedented opportunity to study supergene functionality thanks to new genetic and molecular biology tools. According to Slotte, "Elucidating the function of genes located in supergenes has long been difficult, as by definition it is challenging to fine-map anything in a non-recombining region, but with the help of new genome editing techniques, this is now becoming increasingly feasible." In particular, Slotte hopes to use a combination of these approaches to study a new supergene target: "I recently got a great opportunity to bring this work to a new level through a Starting Grant from the European Research Council to study the supergene that governs heterostyly in wild flaxseed species (Linum). This work is now ongoing, and I am very excited about using this system to study supergene and plant mating system evolution."

Credit: 
SMBE Journals (Molecular Biology and Evolution and Genome Biology and Evolution)