US political parties become extremist to get more votes

Political parties have become more polarized since World War II, while voters have remained moderate

By moving to extremes, parties minimize constituency overlap

Researcher: 'Increased polarization is not voters' fault'

New study challenges a 1950s model by economist Anthony Downs

EVANSTON, Ill. -- New research shows that U.S. political parties are becoming increasingly polarized due to their quest for voters -- not because voters themselves are becoming more extremist.

The research team, which includes Northwestern University researchers, found that extremism is a strategy that has worked over the years even if voters' views remain in the center. Voters are not looking for a perfect representative but a "satisficing," meaning "good enough," candidate.

"Our assumption is not that people aren't trying to make the perfect choice, but in the presence of uncertainty, misinformation or a lack of information, voters move toward satisficing," said Northwestern's Daniel Abrams, a senior author of the study.

The study is now available online and will be published in SIAM Review's printed edition on Sept. 1.

Abrams is an associate professor of engineering sciences and applied mathematics in Northwestern's McCormick School of Engineering. Co-authors include Adilson Motter, the Morrison Professor of Physics and Astronomy in Northwestern's Weinberg College of Arts and Sciences, and Vicky Chuqiao Yang, a postdoctoral fellow at the Santa Fe Institute and former student in Abrams' laboratory.

To accommodate voters' "satisficing" behavior, the team developed a mathematical model using differential equations to understand how a rational political party would position itself to get the most votes. The tool is reactive, with the past influencing future behaviors of the parties.
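To make the idea concrete, here is a toy Python sketch of how such a model can behave. It is not the authors' published system of equations; the voter distribution, the "good enough" tolerance, and the adjustment rule below are illustrative assumptions.

```python
# Toy sketch (not the authors' model): two vote-maximizing parties among
# satisficing voters whose ideal points follow a standard normal, so the
# electorate itself stays moderate. A voter accepts any party within
# TOL of their ideal point; voters who accept both parties split evenly.
import numpy as np
from scipy.stats import norm

TOL = 0.8    # assumed "good enough" tolerance around a voter's ideal point
EPS = 1e-4   # finite-difference step for the vote-share gradient

def vote_share(x_self, x_other):
    """Fraction of voters who accept x_self, splitting doubly-acceptable voters."""
    own = norm.cdf(x_self + TOL) - norm.cdf(x_self - TOL)
    lo, hi = max(x_self, x_other) - TOL, min(x_self, x_other) + TOL
    overlap = max(0.0, norm.cdf(hi) - norm.cdf(lo))
    return own - 0.5 * overlap

# Each party repeatedly nudges its platform uphill on its own vote share
# (forward Euler on dx/dt = gradient of share), starting near the center.
x = [-0.05, 0.05]
for _ in range(5000):
    for i in (0, 1):
        g = (vote_share(x[i] + EPS, x[1 - i]) -
             vote_share(x[i] - EPS, x[1 - i])) / (2 * EPS)
        x[i] += 0.05 * g

# The parties settle well apart from the moderate center (about -0.4, +0.4),
# because shrinking the constituency overlap beats chasing the median voter.
print([round(v, 2) for v in x])
```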

The team tested the model against 150 years of U.S. Congressional voting data and found its predictions consistent with the political parties' historical trajectories: Congressional voting has shifted toward the extremes, but voters' positions have not changed much.

"The two major political parties have been getting more and more polarized since World War II, while historical data indicates the average American voter remains just as moderate on key issues and policies as they always have been," Abrams said.

The team found that polarization is instead tied to the ideological homogeneity within the constituencies of the two major parties. To differentiate themselves, the politicians of the parties move further away from the center.

The new model helps explain why. The moves to the extremes can be interpreted as attempts by the Democratic and Republican parties to minimize the overlap of their constituencies. Test runs of the model show how staying within party lines creates a winning strategy.

"Right now, we have one party with a lot of support from minorities and women, and another party with a lot of support from white men," Motter said.

Why not have both parties appeal to everyone? "Because of the perception that if you gain support from one group, it comes at the expense of the other group," he added. "The model shows that the increased polarization is not voters' fault. It is a way to get votes. This study shows that we don't need to assume that voters have a hidden agenda driving polarization in Congress. There is no mastermind behind the policy. It is an emergent phenomenon."

The researchers caution that many other factors -- political contributions, gerrymandering and party primaries -- also contribute to election outcomes, which future work can examine.

The work challenges a model introduced in the late 1950s by economist Anthony Downs, which assumes everyone votes and makes well-informed, completely rational choices, picking the candidate closest to their opinions. The Downsian model predicts that, over time, political parties would move closer to the center.

However, U.S. voters' behaviors don't necessarily follow those patterns, and the parties' positions have become dramatically polarized.

"People aren't perfectly rational, but they're not totally irrational either," Abrams said. "They'll vote for the candidate that's good enough -- or not too bad -- without making fine distinctions among those that meet their perhaps low bar for good enough. If we want to reduce political polarization between the parties, we need both parties to be more tolerant of the diversity within their own ranks."

Credit: 
Northwestern University

Neutralizing antibodies appear to protect humans from coronavirus infection

Washington, DC - August 26, 2020 - A Seattle fishing vessel that departed port in May returned 18 days later with an unusual haul: the first human evidence that neutralizing antibodies provide protection from reinfection by SARS-CoV-2. The research is published in the Journal of Clinical Microbiology, a publication of the American Society for Microbiology.

Although SARS-CoV-2 swept the vessel, infecting 104 of 122 people on board, all 3 individuals who tested positive for neutralizing antibodies to SARS-CoV-2 prior to embarking remained healthy during and following the expedition.

"These antibodies likely protect people from being reinfected," said corresponding author Alex Greninger, M.D., Ph.D., assistant director of clinical virology at the University of Washington Medical Center. "The titers found in these 3 individuals look like they're attainable with the vaccines under study," he said, referring to data emerging from the clinical trials. (A titer is the measurement of antibodies in blood.) "I'm optimistic about the vaccine."

Despite having only 3 people in the protected group, Dr. Greninger said that the high percentage of those on board who became infected--more than 85%--rendered the findings highly statistically significant.
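As a rough illustration of why such a small protected group can still be statistically meaningful, the reported counts can be arranged in a two-by-two table and checked with Fisher's exact test. This calculation is ours, not the paper's analysis, and it ignores any confounders.

```python
# Illustrative significance check (ours, not the paper's analysis):
# 3 antibody-positive crew with 0 infections versus 119 others with 104.
from scipy.stats import fisher_exact

table = [[0, 3],       # antibody-positive: infected, not infected
         [104, 15]]    # antibody-negative: infected, not infected
_, p_value = fisher_exact(table)
print(f"p = {p_value:.4f}")  # roughly 0.003: unlikely to be chance, despite n = 3
```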

One disappointment from the data: the high percentage of crew members infected "suggests that any pre-existing cross-reactive immunity caused by prior infection from other seasonal coronaviruses [such as cold viruses] provides limited protection against SARS-CoV-2 infection," the report states.

The researchers tested the crew prior to the fishing expedition in an effort to keep the vessel free from coronavirus, and they tested them again after their return to port. Testing included using reverse transcriptase polymerase chain reaction (RT-PCR), which can detect viral genetic material indicating current infection. One limitation of this method is that some residual genetic material may remain for weeks after an infection has subsided.

Crew members were also tested with an assay that detects antibodies against the viral nucleoprotein, which can indicate lasting resistance following infection. The nucleoprotein encapsulates part of the viral genetic material of SARS-CoV-2. Crew members who tested positive for anti-nucleoprotein antibodies were further tested for anti-spike protein antibodies. The virus uses spike proteins to gain entry into human cells.

Credit: 
American Society for Microbiology

Thin-skinned solar panels printed with inkjet

video: The KAUST team is inkjet printing lightweight, ultrathin organic solar cells to harvest energy from light.

Image: 
© 2020 KAUST

Solar cells can now be made so thin, light and flexible that they can rest on a soap bubble. The new cells, which efficiently capture energy from light, could offer an alternative way to power novel electronic devices, such as medical skin patches, where conventional energy sources are unsuitable.

"The tremendous developments in electronic skin for robots, sensors for flying devices and biosensors to detect illness are all limited in terms of energy sources," says Eloïse Bihar, a postdoc in the team of Derya Baran, who led the research. "Rather than bulky batteries or a connection to an electrical grid, we thought of using lightweight, ultrathin organic solar cells to harvest energy from light, whether indoors or outdoors."

Until now, ultrathin organic solar cells were typically made by spin-coating or thermal evaporation, techniques that are not scalable and that limit device geometry. They also rely on a transparent and conductive, but brittle and inflexible, material called indium tin oxide (ITO) as an electrode. To overcome these limitations, the team applied inkjet printing. "We formulated functional inks for each layer of the solar cell architecture," says Daniel Corzo, a Ph.D. student in Baran's team.

Instead of ITO, the team printed a transparent, flexible, conductive polymer called PEDOT:PSS, or poly(3,4-ethylenedioxythiophene) polystyrene sulfonate. The electrode layers sandwiched a light-capturing organic photovoltaic material. The whole device could be sealed within parylene, a flexible, waterproof, biocompatible protective coating.

Although inkjet printing is very amenable to scale-up and low-cost manufacturing, developing the functional inks was a challenge, Corzo notes. "Inkjet printing is a science on its own," he says. "The intermolecular forces within the cartridge and the ink need to be overcome to eject very fine droplets from the very small nozzle. Solvents also play an important role once the ink is deposited because the drying behavior affects the film quality."

After optimizing the ink composition for each layer of the device, the team printed the solar cells onto glass to test their performance. The cells achieved a power conversion efficiency (PCE) of 4.73 percent, beating the previous record of 4.1 percent for a fully printed cell. For the first time, the team also showed that they could print a cell onto an ultrathin flexible substrate, reaching a PCE of 3.6 percent.

"Our findings mark a stepping-stone for a new generation of versatile, ultralightweight printed solar cells that can be used as a power source or be integrated into skin-based or implantable medical devices," Bihar says.

Credit: 
King Abdullah University of Science & Technology (KAUST)

New treatment possibilities for young women diagnosed with rare form of ovarian cancer

image: A recent finding by researchers at the BC Cancer Research Institute and the University of British Columbia (UBC) may offer a new treatment possibility for people diagnosed with a rare and aggressive form of ovarian cancer.

Image: 
PHSA

A recent finding by researchers at the BC Cancer Research Institute and the University of British Columbia (UBC) may offer a new treatment possibility for people diagnosed with a rare and aggressive form of ovarian cancer.

Small cell carcinoma of the ovary, hypercalcemic type (SCCOHT), is a particularly devastating cancer that has no effective treatments and is usually diagnosed in women in their 20s. The study, published in Clinical Cancer Research, describes a metabolic vulnerability in these cancer cells that may represent a therapeutic target if proven in clinical trials.

"Finding this vulnerability and identifying a way to exploit it could have a huge impact for anyone diagnosed with this rare disease," said the study's first author Jennifer Ji, an MD/PhD candidate at UBC's faculty of medicine and trainee at the BC Cancer Research Institute.

The discovery is welcome news to Justin Mattioli, whose 34-year-old wife, Eileen, passed away from SCCOHT in the spring of 2019. Prior to her passing, Eileen made the decision to donate her tissue samples to help advance cancer research in the hopes of finding new treatments for others facing the disease.

"We would hate to see someone else go through what Eileen did," said Justin. "And there is a good possibility that this may help advance further research into other types of cancers as well."

Eileen's samples are being used as a new cell model, enabling researchers to test the effects of new treatments and to better understand the biology of the disease.

The team found that SCCOHT cancer cells have very low levels of an enzyme necessary for the production of arginine, an amino acid needed to help our cells build protein.

Non-cancerous cells have this enzyme and can produce their own arginine, but tumours without it cannot produce this amino acid themselves, meaning that they need to be in an arginine-rich environment to survive.

Using a small molecule agent, the team has found a way to eliminate arginine in the tumour environment, essentially starving the cancer to death while having minimal effect on normal cells.

"This agent basically absorbs all of the arginine within the tumour environment so cells can't produce it themselves, thus starving the tumour," said research team lead Dr. David Huntsman, a pathologist and ovarian cancer researcher at BC Cancer and professor in the departments of pathology and laboratory medicine and obstetrics and gynecology at UBC. "As such vulnerability has been also discovered in several other cancer types, we are now looking to partner with other research organizations who are evaluating these treatment options in patients whose cancer lacks the expression of this particular enzyme."

So far, researchers have validated this treatment in pre-clinical studies. They are now exploring combination therapy, with the use of Eileen's samples, in an effort to boost the response and avoid potential resistance. In addition, they want to test their findings in clinical trials.

"This research is another step to better understanding a very aggressive form of ovarian cancer and providing better treatment outcomes for women diagnosed with this disease," said Huntsman.

Credit: 
University of British Columbia

Researchers discover gene controlling nectar spur development

image: Second generation spurred and spur-less flowers derived from the interspecific cross of spur-less Aquilegia ecalcarata and spurred Aquilegia.

Image: 
Courtesy of Elena Kramer

The evolution of novel features - traits such as wings or eyes - helps organisms make the most use of their environment and promotes increased diversification among species. Understanding the underlying genetic and developmental mechanisms involved in the origin of these traits is of great interest to evolutionary biologists.

The flowering plant Aquilegia, a genus of 60-70 species found in temperate meadows, woodlands and mountaintops around the world, is known for a novel feature - the nectar spur, which is important for pollination and for the ecology and evolution of the genus. Uniquely, the species Aquilegia ecalcarata (A. ecalcarata), native to montane regions of central China, is the only known species of Aquilegia whose petals have naturally lost their spurs. A team of researchers from California State University, Sacramento, the University of California, Santa Barbara, and Harvard University used this spurless species to identify a key gene controlling the development of the nectar spur in Aquilegia.

In an August 26 paper published in Proceedings of the National Academy of Sciences, the researchers sought to identify the gene or genes responsible for spur loss in A. ecalcarata, for which there was initially no obvious genetic candidate. Evangeline Ballerini, Assistant Professor at California State University, Sacramento and lead researcher of the study, began by repeating a 1960 study by the Polish geneticist W. Praźmo (Acta Soc. Bot. Pol. 29, 423-442). Praźmo crossed a spurred species of Aquilegia with A. ecalcarata and found that the pattern of phenotype distribution in the second generation pointed to a single gene responsible for spur loss. Ballerini then teamed with Professor Scott Hodges of the University of California, Santa Barbara, and Professor Elena Kramer of Harvard University to recreate the interspecies cross between A. ecalcarata and four spurred Aquilegia species. Their results confirmed that a single, recessive gene appeared to be responsible for spur loss; however, it lay in a region containing approximately one thousand genes.

The researchers were able to draw on previous work by Ballerini that had identified all of the genes differentially expressed between the spurless A. ecalcarata and the spurred species. By focusing on gene expression - genes that are off when spurs are absent and on when spurs are present - they could ask: do any of the genes in the critical genomic region show this differential expression? "There was no guarantee that these methods would lead us to the gene we were looking for," Ballerini said. "There was definitely quite a bit of work that went into all of the experiments and analyses, but in the end there was a bit of luck too."

Of the roughly 1,000 genes, only one fit all the criteria: it was off when the plant had no spurs, on when the plant had spurs, and specifically expressed in developing spurs. This gene, which the team named POPOVICH (POP), was turned on in species with nectar spurs. In further testing, when the researchers turned off POP in species that had spurs, the petals lost their spurs. "POP appears to be present at the beginning of the developing spur and is responsible for promoting cell divisions in the specific regions where the spur develops," said Kramer. "By turning off POP, the petals lose the cells that would normally become the nectar spur."

While this study has identified a key gene responsible for the development of the novel nectar spur in Aquilegia, the team is now interested in how POP was involved in the evolution of the spur and what role it may have played in the diversification of petal spur morphology across Aquilegia. The team plans to trace the evolution of spur development by examining how POP itself has evolved and what it does during Aquilegia petal development, which can aid in understanding the origin and modification of the nectar spur.

Across flowering plants, spurs have evolved many times independently. This study and others suggest that, not surprisingly, those independent evolutionary events were achieved through different genetic mechanisms. "While POP promotes spur development in Aquilegia, it may not do so in other flowering plants," said Kramer. "However, it does provide a model for how to identify a single gene of interest within a large region of the genome." The identification of a critical gene that controls spur development can lead to a broader understanding of the underlying genetic program. "There are things that we will want to do now that we've identified this gene," Hodges said. "Since POP is a transcription factor, it must have particular genes that it's affecting. The next logical step would be to identify the targets of this gene, and that will tell us a lot more about how it functions."

Credit: 
Harvard University, Department of Organismic and Evolutionary Biology

Terms in Seattle-area rental ads reinforce neighborhood segregation

image: These word clouds show how certain terms dominate Seattle-area rental ads, based on neighborhood.

Image: 
Rebecca Gourley/University of Washington

A new University of Washington study of thousands of local rental ads finds a pattern of "racialized language" that can perpetuate neighborhood segregation, using specific terms to describe apartments in different areas of town.

Terms like "convenient" and "safe and secure" are more common in neighborhoods with a greater proportion of people of color, while "vintage" and "classic" are more popular in predominantly white neighborhoods.

"When you're looking at racial segregation, we all make housing choices, and those choices we make affect segregation. We should know if we're making choices based on racialized discourse," said Ian Kennedy, a graduate student at the UW and lead author of the study. "A racialized society can be perpetuated through means that aren't clearly conscious."

The findings don't mean the ads are overtly, or even intentionally, racist, Kennedy said. Rather, words and phrases - certain terms common to some neighborhoods, and certain terms for others - can reinforce perceptions of neighborhoods, influence where people choose to live, and ultimately, create areas of the city where some racial and ethnic groups are more prevalent than others.

The study was published Aug. 3 in the journal Social Forces.

Past research has documented segregation in Seattle, and the legacy of redlining in some neighborhoods. Through the mid-20th century, real estate and rental ads identified properties in "restricted" areas - those with covenants designed to keep out people of color. The Fair Housing Act of 1968 prevented such discrimination in housing, and by 1970, overtly race-related language in local housing ads had essentially disappeared. But by then, as is the case elsewhere in the country, geographic and demographic patterns had been cemented, and less-explicit forms of discrimination continued.

Given Seattle's economic and population growth in recent years, Kennedy wanted to examine the factors that could sustain some of the de facto segregation that exists today. Seattle has grown by more than 130,000 people since 2010, and people new to the area may have little information about specific neighborhoods.

UW sociologist Kyle Crowder has written about how people tend to move to neighborhoods where there are others "like" them, often because others in their social networks live there or recommend them. Combined with the legacy of racial segregation, this perpetuates neighborhoods where predominantly white people live and shop, neighborhoods where Black people tend to live and shop, and so on.

For this study, Kennedy and the research team started with more than 400,000 Craigslist ads for the Seattle-Tacoma-Bellevue area between March 2017 and September 2018. Removing duplicate ads cut the database to about 45,000 from nearly 850 census tracts; data from the Census Bureau's American Community Survey established the racial and ethnic breakdown of each tract.

Kennedy used an approach called topic modeling to recognize groups of words appearing together, and categorized those groups into 40 topics. For example, the topic "vintage charm" typically included words like "vintage," "classic" and "brick." The topic of "convenience and ease" included terms such as "easy," "convenient," "location" and "open."

Kennedy was then able to spot patterns between the terms in the ads and the neighborhoods the ads were tied to.
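A minimal sketch of this kind of pipeline is below. The three ads and the two-topic setting are made-up stand-ins (the study fit 40 topics to roughly 45,000 ads), and scikit-learn's LDA is one common topic-modeling choice, not necessarily the exact implementation Kennedy used.

```python
# Sketch of a topic-modeling pipeline like the one described: group
# co-occurring words in rental-ad text into topics, then inspect the top
# words per topic. Toy ads and 2 topics here; the study used ~45,000 ads
# and 40 topics, and may have used a different implementation.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

ads = [  # hypothetical ad snippets
    "vintage classic brick apartment full of charm",
    "easy convenient location near highway, open layout",
    "safe and secure building with courtesy patrol and controlled access",
]
vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(ads)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
ad_topics = lda.fit_transform(counts)  # per-ad topic weights

words = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [words[i] for i in comp.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")
# Joining each ad's topic weights to its census tract's ACS demographics
# is what lets one test which topics track neighborhood composition.
```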

Topics such as "vintage charm" and those related to walkability and surrounding amenities were associated more frequently with predominantly white neighborhoods in Seattle, such as Wallingford and Queen Anne. In neighborhoods with a greater proportion of people of color, such as Seattle's Northgate and in Kent, topics like "safe and friendly" and those pertaining to drive times and bus access were common. In particular, terms related to security - "safety," "secure," "controlled," "courtesy patrol" - were associated more frequently with neighborhoods with a higher proportion of Black residents.

"These associations are sadly aligned with what we know about racial stereotypes in the United States," Kennedy said. "We're worried that, in addition to influencing housing patterns, that these ads could also be a site for the reproduction of racial stereotypes."

A general theme emerged, Kennedy said: Listings in predominantly white neighborhoods highlighted history, culture and community. In neighborhoods that had a greater proportion of people of color, listings focused more often on features that separate the property from its surroundings, or simply on transportation out of the area.

The goal of rental ads, of course, is to fill the unit, Kennedy said, so listings try to highlight what will draw a tenant; reversing perceptions and patterns requires a more systemic effort to discourage segregation. The study notes how the Chicago suburb of Oak Park took a proactive approach to integrating its community by promoting a variety of neighborhoods and working with real estate agents, landlords and prospective tenants on changing perceptions.

Credit: 
University of Washington

Study finds water efficiency achievable throughout US without decree

A recent study co-authored by two Northern Arizona University researchers showed that targeted efforts to increase water efficiency could save enough water annually to fill Lake Mead. It could happen without significantly compromising economic production, jobs or tax revenue.

The study, published today in Environmental Research Letters, demonstrates that there is no one right way to increase water efficiency--rather, there are dozens of right answers depending on region, industry and company. Ben Ruddell, the director of the FEWSION Project and director of the School of Informatics, Computing, and Cyber Systems (SICCS), and Richard Rushforth, an assistant research professor in SICCS, are co-authors on the study. Landon Marston, an assistant professor of civil and environmental engineering at Virginia Tech, led the study.

"What's unique about this study is that we try to answer the question of how much water conservation can readily and affordably be achieved in each region and industry of the United States by looking at the conservation that has already been achieved by the water conservation leaders in each industry and region," Ruddell said.

The study examined how much water conservation can readily and affordably be achieved in each region and industry of the United States by looking at what conservation measures were already working and at how much water is used in every industry across the country. The researchers then analyzed that information statistically, looking for areas with room for greater efficiency, using a method that controls for differences in climate and technology across industries and states.
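The benchmarking logic can be sketched in a few lines of pandas. The data, the 90th-percentile cutoff, and the column names below are invented for illustration and are not the study's actual method details.

```python
# Toy sketch of leader benchmarking (invented data and cutoff, not the
# study's actual method): within each industry-region group, treat the
# 90th-percentile water productivity as the demonstrated-attainable level,
# then estimate savings if every user matched it at constant output.
import pandas as pd

df = pd.DataFrame({
    "industry": ["corn"] * 3 + ["cotton"] * 3,
    "region":   ["West"] * 6,
    "output":   [100, 120, 90, 80, 85, 95],   # economic output
    "water":    [50, 40, 60, 70, 45, 80],     # water use, e.g. m^3
})
df["productivity"] = df["output"] / df["water"]

# Benchmark: productivity already achieved by leaders in each group
bench = df.groupby(["industry", "region"])["productivity"].quantile(0.9)
df = df.join(bench.rename("benchmark"), on=["industry", "region"])

# Water each user would need at the benchmark, never exceeding current use
df["efficient_water"] = (df["output"] / df["benchmark"]).clip(upper=df["water"])
savings = df["water"].sum() - df["efficient_water"].sum()
print(f"potential savings: {savings:.1f} water units at unchanged output")
```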

The study demonstrates how water users from farmers to manufacturers to service providers can collectively reduce their water consumption, both in their own processes and upstream throughout their supply chain, to reduce overexploitation of surface water and groundwater resources. It builds on earlier research evaluating potential water savings within the agricultural sector and in cities, applying a novel approach to water savings in the whole economy.

"The scope and detail of our study are unparalleled, and we believe this work will make a significant and timely contribution to the debate on how to conserve water while maintaining, or even increasing, economic activity," Marston said. "We find that some of the most water-stressed areas throughout the U.S. West and South have the greatest potential for water savings, with about half of these water savings obtained by improving water productivity in the production of corn, cotton and alfalfa."

The research argues that streamflow depletion throughout the West can be decreased on average by 6.6 percent to 23.5 percent without significantly reducing economic production or increasing costs. That is the other piece of this problem: significant water savings can happen without significant harm to the economy. The majority of U.S. industries and regions can make the biggest contributions to water conservation by working with suppliers to reduce water use "upstream" in their supply chain; some large companies are already adopting such supply-chain sustainability measures.

In the Southwest, which has a sizable agricultural industry as well as perennial water scarcity issues, the conversation about conservation has included ways to improve water use efficiency among farmers since the 1980s. The good news, the study found, is that more improvement is readily achievable.

Although the study did not argue for specific conservation measures or water-saving policies, it does offer a number of high-level recommendations. These include partnerships in which large manufacturers, retailers and metropolitan areas "downstream" in the water supply chain, which have more money and influence to make changes, help organize and fund conservation by the large water users "upstream."

"This study argues that we can solve a large fraction of the U.S. water supply crisis simply by employing water conservation strategies that are already in routine use by the water productivity leaders in every industry and every region of the U.S.A.," Ruddell said. "We don't need a revolution in laws or technologies, or much more money, to achieve this. In the 21st century, water conservation has the potential to affordably boost and protect our national water supply in a big way, and avoid the need for hundreds of billions of dollars in taxing and spending on new water infrastructure to cope with drought and climate change."

Credit: 
Northern Arizona University

Steps outlined to reduce the risk of stroke during, after heart surgery

DALLAS, August 26, 2020 -- Steps for reducing the risk of stroke in patients undergoing heart surgery are detailed in a new American Heart Association Scientific Statement, "Considerations for Reduction of Risk of Perioperative Stroke for Adult Patients Undergoing Cardiac and Thoracic Aortic Operations," published today in the American Heart Association's flagship journal Circulation. Pre-screening, surgical technique changes, early diagnosis while in surgery and quick team response all contribute to better survival rates and reduce the risks of major disability for patients.

"Cardiac surgery has come a long way in recent decades, and improvements in pre-operative screening and treatment now really make a difference between a patient suffering a disabling stroke or surviving and thriving with a good quality of life," said Mario F.L. Gaudino, M.D., chair of the writing group for the scientific statement, and a cardiac surgeon and professor of cardiothoracic surgery at New York-Presbyterian and Weill Cornell Medicine in New York City. "This statement provides an overview of the latest surgical protocols and techniques that can reduce stroke risk after heart surgery and improve patient survival and outcomes."

A stroke that happens during or soon after heart surgery is called a perioperative stroke. Patients undergoing heart surgery who experience perioperative stroke have a 5 to 10 times higher risk of in-hospital death, increased costs and length of hospital stay, and increased risk of cognitive decline one year after surgery. The statement cites stroke as the most feared complication of cardiac surgery - most patients would sacrifice longevity for freedom from stroke.

Stroke risk for common cardiac procedures varies depending on both patient risk factors and the procedure. The risk is about 1% for a valve repair or coronary artery bypass alone; 2-3% if those procedures are combined; and 3-9% for surgeries involving the aorta, the body's main and largest artery. Stroke risk is also higher for the 27% to 40% of patients who develop atrial fibrillation after heart surgery. Atrial fibrillation causes the heart's smaller chambers to flutter and increases the risk of a dangerous blood clot that can dislodge, travel to the brain and cause a stroke.

Typical pre-surgery screening for perioperative stroke risk includes an assessment of age, high blood pressure, high cholesterol, Type 2 diabetes, smoking, heart failure, renal disease, atrial fibrillation and prior history of stroke or transient ischemic attack. The scientific statement further suggests monitoring and actions to diagnose and treat a surgery-related stroke quickly. Highlights of the statement's recommendations are:

Prevention during surgery

Monitor blood flow to the brain;

Intraoperative (during surgery) imaging of the aorta;

Tight blood pressure control; and

Closely monitor blood loss and the need for transfusion.

Early stroke diagnosis

Perform a complete neurologic exam as soon as possible after surgery;

If a patient is high-risk for perioperative stroke, consider a fast-track anesthesia protocol to help quickly identify signs of a stroke after surgery;

Have a stroke team in place to provide emergency treatment if a stroke is suspected; and

Conduct a head CT and CT angiography of head and neck as soon as stroke is suspected.

Rapid treatment of perioperative stroke

Transfer the patient to intensive care;

Optimize brain oxygenation and perfusion;

Consider clot busting or clot removal therapy; and

Evaluate patient's speech and swallow function; evaluate for rehabilitation; screen for depression; and begin preventive therapy for deep vein thrombosis.

"It's imperative that a stroke team work together to assess a patient's health before, during and after heart surgery. In addition to the surgeons, this multidisciplinary team should include stroke neurologists, neuro-interventionalists, neurocritical care specialists and neuro-anesthesiologists," added Gaudino. "Following these protocols can lead to quicker response times by medical teams in the event of an emergency and help to reduce the frequency of neurological injuries among patients."

This scientific statement was prepared by the volunteer writing group on behalf of the American Heart Association's clinical Council on Cardiovascular Surgery and Anesthesia; the Stroke Council; and the Council on Cardiovascular and Stroke Nursing.

Credit: 
American Heart Association

Spit in a tube to diagnose heart attack

Sophia Antipolis, France - 26 Aug 2020: A saliva test could fast-track heart attack diagnosis, according to preliminary research presented today at ESC Congress 2020.

The innovative technique requires patients to spit into a tube and provides results in 10 minutes, compared to at least one hour for the standard blood test.

Heart attacks need urgent diagnosis, followed by treatment to restore blood flow to blocked arteries. Diagnosis is based on symptoms (such as chest pain), an electrocardiogram (ECG) and a blood test for cardiac troponin, a protein released into the blood when the heart muscle is injured.

"There is a great need for a simple and rapid troponin test for patients with chest pain in the pre-hospital setting," said study author Dr. Roi Westreich of Soroka University Medical Centre, Beer Sheva, Israel. "Currently troponin testing uses blood samples. In this preliminary study we evaluated the feasibility of a novel method using saliva."

The purpose of the study was to see whether cardiac troponin could be detected in the saliva of patients with heart muscle injury. Saliva samples underwent a unique processing procedure to remove highly abundant proteins. A total of 32 patients with heart muscle injury (i.e., a positive cardiac troponin blood test) and 13 healthy volunteers were asked to provide saliva samples by spitting into a collecting tube. Half of each sample was then processed, while the other half remained in its natural state.

The researchers then tested the processed and unprocessed saliva samples for cardiac troponin. "Since no test has been developed for use on saliva, we had to use commercially available tests intended for whole blood, plasma, or serum, and adjust them for saliva examination," said Dr. Westreich.

For patients, the researchers compared the results from the saliva samples (processed and unprocessed) with the blood samples. There was strong agreement between the blood findings and the processed saliva, but not saliva in its natural state. Some 84% of the processed saliva samples tested positive for troponin, compared to just 6% of the unprocessed saliva.

Among healthy participants, no cardiac troponin was detected in either the processed or unprocessed saliva samples.

Dr. Westreich said: "This early work shows the presence of cardiac troponin in the saliva of patients with myocardial injury. Further research is needed to determine how long troponin stays in the saliva after a heart attack. In addition, we need to know how many patients would erroneously be diagnosed with heart attack and how many cases would be missed."

The next steps in this research are to expand the number of patients being studied and create a prototype for a cardiac troponin test using saliva. "This prototype will be tailor-made for processed saliva and is expected to be more accurate than using a blood test on saliva," said Dr. Westreich. "It will be calibrated to show positive results when saliva troponin levels are higher than a certain threshold and show a yes/no result like a pregnancy test."

Credit: 
European Society of Cardiology

Radiation for young adult cancer linked to worse breast cancer survival in premenopausal women

Bottom Line: Among premenopausal women with breast cancer, those who were previously treated with radiation for a primary childhood, adolescent, or young adult cancer had worse breast cancer-specific survival.

Journal in Which the Study was Published: Cancer Epidemiology, Biomarkers & Prevention, a journal of the American Association for Cancer Research

Author: Candice A. Sauder, MD, surgical oncologist at the University of California (UC) Davis Comprehensive Cancer Center

Background: "We traditionally use similar therapies for primary breast cancer and second primary breast cancer, and base our treatment approaches on specific prognostic factors," said Sauder. "Our results suggest that breast cancer-related survival is significantly decreased among all survivors of childhood, adolescent, and young adult cancer who were treated with radiation therapy and then develop breast cancer, even in the setting of early-stage breast cancer and other characteristics that are considered good prognostic factors. As such, we may need to tailor our treatment strategy for women with a second primary breast cancer."

According to the National Cancer Institute, a second primary cancer is defined as a new cancer that occurs in an individual who has had cancer in the past.

Treatments for many common childhood and adolescent/young adult (AYA) cancers incorporate radiation therapy, which is a risk factor for second primary breast cancer. Second primary breast malignancies in younger women who had received prior radiation therapy have unique clinical characteristics, noted Sauder. However, it is unknown whether such features are related to the prior radiation treatments or to premenopausal status, she said.

How the Study was Conducted: To better understand how radiation treatment used in the primary setting affects the clinical characteristics of second primary breast cancers in younger women, Sauder and colleagues interrogated the California Cancer Registry, which encompasses nearly all invasive cancers diagnosed in California. They analyzed data from women ages 12 to 50 (to capture premenopausal breast cancer based on approximations of age at menarche and menopause) diagnosed with primary (107,751 women) or second primary breast cancer (1,147 women) between January 1, 1988, and December 31, 2014. Patients with second primary breast cancer were limited to those who had a first primary cancer treated with radiation between the ages of 12 and 39.

The researchers compared demographic and clinical factors between women with second primary breast cancer and those with primary breast cancer. Further, they compared breast cancer-specific survival between these groups, both collectively and for specific subgroups, including age, race/ethnicity, lymph node involvement, hormone receptor status, and HER2 status.

Results: Overall, compared with premenopausal women with primary breast cancer, those with second primary breast cancer previously treated with radiation were more likely to be Hispanic or Black, had earlier stage tumors, had higher grade tumors, had cancer without lymph node involvement, and had tumors that were hormone receptor-negative. Women with second primary breast cancer in this cohort had roughly twice the risk of breast cancer-specific death compared with women with primary breast cancer.

The researchers also discovered that breast cancer-specific survival among women with second primary breast cancer previously treated with radiation was significantly worse for all subgroups considered. Notably, subgroups of women who typically have a better prognosis in the primary breast cancer setting --including women with hormone receptor-positive tumors, tumors without lymph node involvement, stage I disease, and women of Asian or Pacific Island ethnicity--experienced worse survival after a second primary breast cancer.

For example, women with second primary breast cancer previously treated with radiation had over twice the risk of breast cancer-specific mortality if they had stage I disease, and nearly twice the risk of breast cancer-specific mortality if they had stage II or stage III disease, compared with women whose primary breast cancer was in the same stage. Similarly, women with second primary breast cancer in this cohort had roughly 2.4 times the risk of breast cancer-specific mortality if they had tumors without lymph node involvement, and roughly 1.7 times the risk of breast cancer-specific mortality if they had tumors with lymph node involvement, compared with women with primary breast cancer with the same lymph node involvement status.

Author's Comments: "We found that the negative impact of second primary breast cancer among women previously treated with radiation was particularly strong in subgroups of patients that have superior survival after primary breast cancer," said Sauder. "It will be important to prospectively evaluate how certain treatments, such as specific radiation fields or chemotherapeutic agents, can affect second primary breast cancer outcomes."

Study Limitations: Limitations of the study include a lack of comorbidity data and genetic information, including BRCA mutation status, which can influence treatment decisions and may affect second primary breast cancer risk.

Credit: 
American Association for Cancer Research

Unlocking the mysteries of the brain

How does our brain store information?

Seeking an answer, researchers at CHU Sainte-Justine Hospital and Université de Montréal have made a major discovery in understanding the mechanisms underlying learning and memory formation.

The results of their study are presented today in Nature Communications.

Led by Professor Roberto Araya, the team studied the function and morphological transformation of dendritic spines, tiny protrusions located on the branches of neurons, during synaptic plasticity, thought to be the underlying mechanism for learning and memory.

"We are very excited because this is the first time that the rules of synaptic plasticity, a process directly related to memory formation in the brain, have been discovered in a way that allows us to better understand plasticity and ultimately how memories are formed when neurons of the cerebral neocortex receive single and/or multiple streams of sensory information" said Professor Araya.

A neuronal "tree"

The brain is made up of billions of excitable nerve cells better known as neurons. They specialize in communication and information processing.

"Imagine a tree," said Araya. "The roots are represented by the axon, the central trunk by the cell body, the peripheral branches by the dendrites and finally, the leaves by the dendritic spines. These thousands of small leaves act as a gateway by receiving excitatory information from other cells. They will decide whether this information is significant enough to be amplified and circulated to other neurons.

"This is a key concept," he added, "in the processing, integration and storage of information and therefore in memory and learning."

Neurons amplify the "volume"

Dendritic spines serve as a contact zone between neurons by receiving inputs (information) of varying strength. If an input is persistent, a mechanism by which neurons amplify the "volume" is triggered so that it can better "hear" that particular piece of information.

Otherwise, information of a low "volume" will be further turned down so that it goes unnoticed. This phenomenon corresponds to synaptic plasticity, which involves the potentiation or depression of synaptic input strength.

"This is the fundamental law of time-dependent plasticity, or Spike-timing-dependent plasticity (STDP), which adjusts the strength of connections between neurons in the brain and is believed to contribute to learning and memory," said Sabrina Tazerart, co-author of the study.

While the scientific literature shows this phenomenon and how neurons connect, the precise structural organization of dendritic spines and the rules that control the induction of synaptic plasticity have remained unknown.

"Laws of connections"

Araya's team has succeeded in shedding light onto the mechanisms underlying STDP.

"Until now, no one knew how synaptic inputs (incoming information) were arranged in the 'neural tree' and what precisely causes a dendritic spine to increase or decrease the strength, or loudness, of information it passes on," the professor said. "Our goal was to extract "laws of synaptic connectivity" responsible for building memories in the brain.'"

For their study, his team employed preclinical models at a juvenile stage, a critical period for learning and memory in the brain.

Using advanced two-photon microscopy techniques to mimic synaptic contacts between two neurons, the researchers discovered an important law governing the arrangement of information received by dendritic spines.

Their work shows that depending on the number of inputs received (synapses) and their proximity, the information will be taken into account and stored differently.

"We found that if more than one input occurs within a small piece of tree branch, the cell will always consider this information important and will increase its volume," said co-first author Diana E. Mitchell.

"A major discovery"

"This is a major discovery," added Araya.

"Structural and functional alterations of dendritic spines, the major recipients of inputs from other neurons, are often associated with neurodegenerative conditions, such as Fragile X syndrome or autism, as the patient can no longer process or store information properly," he said.

"This disrupts the logic of memory construction. Now, by understanding the mechanisms underlying the dynamics of dendritic spines and how they impact the nervous system, we will be able to develop new and better-adapted therapeutic approaches."

Credit: 
University of Montreal

Experts reveal major holes in international ozone treaty

A new paper, co-authored by a University of Sussex scientist, has revealed major holes in an international treaty designed to help repair the ozone layer, putting human health at risk and increasing the speed of climate change.

Evidence amassed by scientists in the 1970s and 1980s showed that the depletion of the ozone layer in the stratosphere was one of the first truly global threats to humanity.

Chemicals produced through economic activity were slowly drifting to the upper atmosphere where they were destroying the ozone layer, which plays an indispensable role in protecting humanity and ecosystems by absorbing harmful ultraviolet radiation from the sun.

In 1987, countries signed a treaty to take reparative action, known as the 'Montreal Protocol on Substances that Deplete the Ozone Layer,' which was eventually ratified by all 197 UN member states.

But in a paper published today in Nature Communications, experts have flagged major gaps in the treaty that must be addressed if the ozone layer is to be repaired and the risks to human health and the climate averted.

Professor Joseph Alcamo, Director of the Sussex Sustainability Research Programme and former Chief Scientist at UNEP, said: "The Montreal Protocol and its amendments have no doubt been an effective worldwide effort to control the toughest substances depleting the ozone. But our paper shows that the treaty has developed too many gaps to fully repair the ozone layer. It's time to plug the holes in the ozone hole treaty."

Professor Alcamo, along with lead author Professor Susan Solomon of the Massachusetts Institute of Technology (MIT) and co-author Professor A. R. Ravishankara of Colorado State University, has identified several 'gaps,' which consist of ozone-depleting substances not covered by the treaty.

These include:

New sources of CFC and HFC emissions recently detected in the atmosphere but not accounted for under the treaty.

Leakages of ozone depleting substances from old air conditioners, refrigerators and insulating foams.

Inadvertent releases of ozone-depleting gases from some manufacturing processes.

Emissions of the ozone-depleting gas, nitrous oxide, stemming mostly from agricultural activities.

The authors have called for a range of solutions to plug the gaps including:

A toughening of compliance with the treaty by using provisions that are already part of the Montreal Protocol.

Boosting the effectiveness of the treaty by adding in regular environmental monitoring of ozone-depleting substances.

Controlling the emissions of substances that have slipped through the treaty up to now, including nitrous oxide emissions from agriculture, and ozone-depleting substances leaking from old refrigerators and other equipment.

In addition, because ozone-depleting substances and their substitutes contribute significantly to global warming, the authors urge a faster phasing out of all of these substances as a way of combatting climate change.

The ozone layer absorbs harmful ultraviolet radiation from the sun, but this protective layer is destroyed by industrial gases that slowly drift up from the earth's surface, including CFCs (chlorofluorocarbons) contained in refrigerants, foaming agents and, earlier, propellants in aerosol sprays.

Discovery of the 'ozone hole' above high latitudes in the 1980s provided final evidence of the importance of ozone depletion.

By 1985, countries had signed the Vienna Convention, which pledged to reduce CFCs and other ozone-depleting substances. Two years later, they signed the Montreal Protocol that laid out a plan of action.

During his time as the first Chief Scientist of UNEP, which hosts the Secretariat of the Montreal Protocol, Professor Alcamo coordinated groups of scientists in producing policy-oriented reports that addressed emerging ozone depletion issues.

UNEP reports that 98% of the chemicals targeted for removal in the Montreal Protocol had been phased out by 2009, avoiding hundreds of millions of cases of skin cancer and tens of millions of cases of cataracts. However, this new paper shows that some important sources were not targeted by the Protocol - and urgently need to be now.

Professor Alcamo said: "Since most ozone-depleting gases and their current substitutes are also potent greenhouse gases, it's time to use the Montreal Protocol to draw down these gases even faster to help avoid dangerous global warming.

"We won't be able to reach the global Sustainable Development Goals by 2030 without closing the gaps in the ozone treaty. It's hard to imagine, for example, how the global health and climate goals could be reached without drastically drawing down all ozone-depleting gases and their substitutes. If we fail, humanity will have to face a higher risk of skin cancers and more rapid climate change."

Credit: 
University of Sussex

Obesity linked with higher risk for COVID-19 complications

A review of COVID-19 studies reveals a troubling connection between two health crises: coronavirus and obesity.

From COVID-19 risk to recovery, the odds are stacked against those with obesity, and a new study in Obesity Reviews raises concerns about the impact of obesity on the effectiveness of a future COVID-19 vaccine.

Researchers examined the available published literature on individuals infected with the virus and found that those with obesity (body mass index over 30) were at a greatly increased risk for hospitalization (113%), more likely to be admitted to the intensive care unit (74%), and had a higher risk of death (48%) than those with a lower BMI.

A team of researchers at the University of North Carolina at Chapel Hill's Gillings School of Global Public Health and the Carolina Population Center collaborated with co-authors from the Saudi Health Council and the World Bank on the paper.

For the paper, researchers reviewed immunological and biomedical data to provide a detailed layout of the mechanisms and pathways that link obesity with increased risk of COVID-19 as well as an increased likelihood of developing more severe complications from the virus.

Obesity is already associated with numerous underlying risk factors for COVID-19, including hypertension, heart disease, Type 2 diabetes, and chronic kidney and liver disease.

Metabolic changes caused by obesity, such as insulin resistance and inflammation, make it difficult for individuals with obesity to fight some infections, a trend that can be seen in other infectious diseases, such as influenza and hepatitis.

During times of infection, uncontrolled serum glucose, which is common in individuals with hyperglycemia, can impair immune cell function.

“All of these factors can influence immune cell metabolism, which determines how bodies respond to pathogens, like the SARS-CoV-2 coronavirus,” said co-author Melinda Beck, professor of nutrition at Gillings School of Global Public Health. “Individuals with obesity are also more likely to experience physical ailments that make fighting this disease harder, such as sleep apnea, which increases pulmonary hypertension, or a body mass index that increases difficulties in a hospital setting with intubation.”

Previous work by Beck and others has demonstrated that the influenza vaccine is less effective in adults with obesity. The same may be true for a future SARS-CoV-2 vaccine, said Beck.

“However, we are not saying that the vaccine will be ineffective in populations with obesity, but rather that obesity should be considered as a modifying factor to be considered for vaccine testing,” she said. “Even a less protective vaccine will still offer some level of immunity.”

Roughly 40% of Americans are obese, and the lockdowns resulting from the pandemic have created a number of conditions that make it harder for individuals to achieve or sustain a healthy weight.

Working from home, limiting social visits and reducing everyday activities, all in an effort to stop the spread of the virus, mean we're moving less than ever, said lead study author Barry Popkin, professor of nutrition at Gillings School of Global Public Health and member of the Carolina Population Center.

The ability to access healthy foods has also taken a hit. Economic hardships put those who are already food insecure at further risk, making them more vulnerable to conditions that can arise from consuming unhealthy foods.

“We’re not only at home more and experience more stress due to the pandemic, but we’re also not visiting the grocery store as often, which means the demand for highly processed junk foods and sugary beverages that are less expensive and more shelf-stable has increased,” he said. “These cheap, highly processed foods are high in sugar, sodium and saturated fat and laden with highly refined carbohydrates, which all increase the risk of not only excess weight gain but also key noncommunicable diseases.”

Popkin, who is part of the Global Food Research Program at UNC-Chapel Hill, said the findings highlight why governments must address the underlying dietary contributors to obesity and implement strong public health policies proven to reduce obesity at a population level. Other countries, like Chile and Mexico, have adopted policies ranging from taxing foods high in sugar to introducing warning labels on packaged foods that are high in sugar, fats and sodium and restricting the marketing of junk foods to children.

“Given the significant threat COVID-19 represents to individuals with obesity, healthy food policies can play a supportive — and especially important — role in the mitigation of COVID-19 mortality and morbidity,” Popkin said.

Credit: 
University of North Carolina at Chapel Hill

Bacteria could survive travel between Earth and Mars when forming aggregates

image: The bacterial exposure experiment took place from 2015 to 2018 using the Exposed Facility located on the exterior of Kibo, the Japanese Experimental Module of the International Space Station.

Image: 
JAXA/NASA

Imagine microscopic life-forms, such as bacteria, transported through space and landing on another planet. Bacteria finding suitable conditions for survival could then start multiplying again, sparking life on the other side of the universe. This theory, called "panspermia," supports the possibility that microbes migrate between planets and distribute life across the universe. Long controversial, the theory implies that bacteria can survive the long journey through outer space, resisting space vacuum, temperature fluctuations and space radiation.

"The origin of life on Earth is the biggest mystery of human beings. Scientists can have totally different points of view on the matter. Some think that life is very rare and happened only once in the Universe, while others think that life can happen on every suitable planet. If panspermia is possible, life must exist much more often than we previously thought," says Dr. Akihiko Yamagishi, a Professor at Tokyo University of Pharmacy and Life Sciences and principal investigator of the space mission Tanpopo.

In 2018, Dr. Yamagishi and his team tested for the presence of microbes in the atmosphere. Using aircraft and scientific balloons, the researchers found Deinococcus bacteria floating 12 km above the Earth. But while Deinococcus is known to form large colonies (easily larger than one millimeter) and to resist environmental hazards such as UV radiation, could it survive long enough in space to support the possibility of panspermia?

To answer this question, Dr. Yamagishi and the Tanpopo team tested the survival of the radioresistant bacterium Deinococcus in space. The study, now published in Frontiers in Microbiology, shows that thick aggregates can provide sufficient protection for bacteria to survive several years in the harsh space environment.

Dr. Yamagishi and his team came to this conclusion by placing dried Deinococcus aggregates in exposure panels outside the International Space Station (ISS). Samples of different thicknesses were exposed to the space environment for one, two, or three years and then tested for survival.

After three years, the researchers found that all aggregates thicker than 0.5 mm partially survived the space conditions. Observations suggest that while the bacteria at the surface of the aggregate died, they created a protective layer for the bacteria beneath, ensuring the survival of the colony. Using the survival data at one, two, and three years of exposure, the researchers estimated that a pellet thicker than 0.5 mm would have survived between 15 and 45 years on the ISS. The design of the experiment allowed the researchers to extrapolate and predict that a colony 1 mm in diameter could potentially survive up to 8 years in outer space conditions.
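The extrapolation step can be illustrated with a simple exponential-decay fit. The yearly survival fractions below are placeholders, not the paper's measurements, and the authors' actual estimation procedure may differ.

```python
# Illustration of the extrapolation logic (placeholder numbers, not the
# paper's data): assume the surviving fraction of an aggregate decays
# exponentially with exposure time, fit the decay rate to the yearly
# measurements, then solve for when survival falls below a threshold.
import numpy as np

years = np.array([1.0, 2.0, 3.0])
surviving = np.array([0.60, 0.35, 0.20])   # hypothetical survival fractions

# ln(S) = -k * t  =>  least-squares fit through the origin gives k
k = -np.sum(years * np.log(surviving)) / np.sum(years ** 2)

threshold = 1e-4   # fraction below which the colony is treated as dead
t_end = np.log(1.0 / threshold) / k
print(f"fitted decay rate k = {k:.3f}/yr; projected survival ~ {t_end:.0f} yr")
```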

"The results suggest that radioresistant Deinococcus could survive during the travel from Earth to Mars and vice versa, which is several months or years in the shortest orbit," says Dr. Yamagishi.

This work provides, to date, the best estimate of bacterial survival in space. While previous experiments showed that bacteria could survive in space for a long period when shielded by rock (i.e., lithopanspermia), this is the first long-term space study to raise the possibility that bacteria could survive in space in the form of aggregates, prompting the new concept of "massapanspermia." Yet, while we are one step closer to proving panspermia possible, microbe transfer also depends on other processes, such as ejection and landing, during which the survival of bacteria still needs to be assessed.

Credit: 
Frontiers

Antagonistic genes modify rice plant growth

image: A short paddy rice variety (above) and long deepwater variety (below) were placed in rising levels of deep water. ACE1 was specifically triggered in the deepwater variety, stimulating elongation above water level.

Image: 
Motoyuki Ashikari

Scientists at Nagoya University and colleagues in Japan have identified two antagonistic genes involved in rice plant stem growth. Their findings, published in the journal Nature, could lead to new ways for genetically modifying rice crops.

Longer, deepwater rice crops are planted in South Asia and West Africa to survive floods. Shorter paddy rice varieties are widely cultivated worldwide because they are easier to harvest.

A key driver of plant growth is a hormone called gibberellic acid. It activates cell division in the stem tissue, causing the stem to lengthen. Breeders know they can control plant height by stimulating or inhibiting gibberellic acid activity. However, exactly how this works has been unclear.

Bioscientist Motoyuki Ashikari has been studying the growth and evolution of rice for years. He and a team of researchers conducted genetic studies and identified two genes that are involved in regulating rice plant growth.

"We showed that gibberellic acid is necessary, but not enough, for stem elongation," says Ashikari.

Interestingly, the two genes, called ACCELERATOR OF INTERNODE ELONGATION 1 (ACE1) and DECELERATOR OF INTERNODE ELONGATION 1 (DEC1), counteract each other as part of the regulation process.

In the presence of gibberellic acid, ACE1 stimulates cell division and elongation of the stem's 'internode' sections in deepwater rice. The shorter paddy rice variety did not have a functional ACE1 gene, but it did have a homologous ACE1-like gene that was switched on to activate stem elongation at a different point of plant development.

DEC1 was found in both the deepwater and paddy rice varieties. Its expression was reduced when deepwater rice plants were placed in deep water or treated with gibberellic acid. However, DEC1 continued to be expressed in paddy rice, even under the same conditions, suggesting this gene helps to suppress stem growth.

"We also found that ACE1 and DEC1 are conserved and functional in other plant species, like barley and other grasses, so our investigations improve understanding of the regulation of stem elongation in members of the Gramineae family that may have similar stem elongation mechanisms," says Ashikari.

The team next aims to understand stem elongation at the molecular level by identifying factors that are associated with ACE1 and DEC1 expression.

Credit: 
Nagoya University