Science puts historical claims to the test

The latest analytical techniques available to scientists can confirm the validity of historical sources in some cases, and suggest a need for reconsideration in others.

As any historian will tell you, we can rarely take the claims made by our ancestors at face value. The authenticity of many of the artefacts which shape our understanding of the past has been hotly debated for centuries, with little consensus amongst researchers. Now, many of these disputes are being resolved through scientific research, including two studies recently published in EPJ Plus.

The first of these, led by Diego Armando Badillo-Sanchez at the University of Évora in Portugal, analysed an artefact named 'Francisco Pizarro's Banner of Arms' - believed to have been carried by the Spanish conquistador during his conquest of the Inca Empire in the 16th century.

The second team, headed by Armida Sodo at Roma Tre University in Italy, investigated a colour print of Charlemagne - the medieval ruler who united much of Western Europe - assumed to be from the 16th century.

The work of both teams shows that today's analytical techniques are sufficiently sophisticated to differentiate between authentic historical artefacts and forgeries. In their study, Badillo-Sanchez's team found that the fabrics, dyes, adhesives and decorative metals used to make Francisco Pizarro's Banner of Arms are consistent with the manufacturing techniques available to the Spanish in the 15th and 16th centuries.

In contrast, Sodo's team discovered that the pigments used in the Charlemagne print had not been invented until the 19th century. Both groups drew their conclusions by combining several analytical techniques, including Raman and infrared spectroscopy, X-ray fluorescence, chromatography and microscope imaging, which allowed them to study the fragile objects without damaging them.

The two studies show that with the help of the latest analytical techniques, the authenticity of many historical artefacts can be confirmed, while that of others may need to be reconsidered.

Credit: 
Springer

Feeding dogs and cats raw food is not considered a significant source of infections

image: Raw food denotes any meat, internal organs, bones and cartilage fed to pets uncooked.

Image: 
Johanna Anturaniemi

An extensive international survey conducted at the University of Helsinki indicates that pet owners do not consider raw food to considerably increase infection risk in their household. In the survey, which targeted pet owners, raw food could be reliably identified as the source of an infection in only three households.

The safety of feeding raw food to pets has become a topic of debate on a range of forums but, so far, no outbreaks of disease in humans caused by raw pet food have been reported. Raw food denotes any meat, internal organs, bones and cartilage fed to pets uncooked.

Now, a survey conducted at the Faculty of Veterinary Medicine investigated perceptions of food-transmitted pathogens among pet owners who feed their pets raw food.

A total of 16,475 households from 81 countries responded to the survey. Out of these, only 39 households (0.24%) reported having been contaminated by pet food and were also able to name the pathogen. The most common pathogens reported were Campylobacter, followed by Salmonella; there were also occurrences of Escherichia coli, Clostridium, Toxoplasma and a single Yersinia infection.

However, the meat fed to pets had been analysed in only three households (0.02%), with the same pathogen identified in the meat as in the samples taken from the infected individuals. In addition to the 39 households above, 24 households (0.15%) reported a contamination from pet food without being able to name the pathogen causing the symptoms.

In total, 99.6% of households feeding their pets raw food did not report any pathogens being transmitted from the raw food to humans. The time the responding households had been feeding raw food to their pets ranged from several weeks to 65 years, with a mean of 5.5 years. The reported cases of illness covered the whole time frame during which raw food was used in the household.
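
As a quick arithmetic check of the proportions quoted above, here is a back-of-the-envelope sketch in Python using only the counts reported in the survey:

    # Recompute the reported percentages from the raw survey counts.
    total_households = 16475
    named_pathogen = 39    # reported an infection and could name the pathogen
    meat_analysed = 3      # households where the meat itself was analysed
    unnamed_pathogen = 24  # reported an infection but could not name the pathogen

    for label, n in [("named pathogen", named_pathogen),
                     ("meat analysed", meat_analysed),
                     ("unnamed pathogen", unnamed_pathogen)]:
        print(f"{label}: {100 * n / total_households:.2f}%")  # 0.24%, 0.02%, 0.15%

    # Households reporting no transmission to humans at all:
    clean = total_households - named_pathogen - unnamed_pathogen
    print(f"no reported transmission: {100 * clean / total_households:.1f}%")  # 99.6%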

The median age among the infected individuals was 40.1 years. Of the 39 households with infections, in four the infected individuals were children between two and six years of age, while in two households the infected were immunocompromised individuals (one with cancer, one with Crohn's disease). However, a quarter of these households had children between two and six years of age, while 15% had immunocompromised individuals.

"It was surprising to find that statistical analyses identified fewer infections in the households with more than 50% of the pet diet consisting of raw food. Furthermore, feeding pets raw salmon or turkey was associated with a smaller number of infections," says researcher Johanna Anturaniemi from the Faculty of Veterinary Medicine.

A positive correlation with infection was only found in relation to children between two and six years of age living in the household, even though most of the infected individuals (90%) were adults.

"This raises the question of whether the pathogens could have been transmitted by children from outdoors, daycare centres or other public spaces, even if pet food had been assumed to be the source of infection," Anturaniemi says.

According to the researchers, the role of other factors in infections cannot be assessed in more detail within the confines of this study; rather, further research is needed. In contrast, reports of outbreaks of pathogens linked to pet treats and dry food can be found from around the world. In fact, the Dogrisk research group is planning to conduct a comparative follow-up study where infections transmitted from pet food are to be investigated in households that use both raw food and dry food.

The survey was translated into five languages and made available to dog and cat owners around the world who feed their pets raw food.

Credit: 
University of Helsinki

How do we get so many different types of neurons in our brain?

image: The sad-1 gene, present in all of the worm's 302 neurons (visualized by fluorescence), is spliced into different versions in different neurons. Neurons with one version fluoresce red, neurons with the other version fluoresce green, and yellow neurons in the bottom panel contain both versions.

Image: 
SMU (Southern Methodist University)

DALLAS (SMU) - SMU (Southern Methodist University) researchers have discovered another layer of complexity in gene expression, which could help explain how we're able to have so many billions of neurons in our brain.

Neurons are cells inside the brain and nervous system that are responsible for everything we do, think or feel. They use electrical impulses and chemical signals to send information between different areas of the brain, and between the brain and the rest of the nervous system, to tell our body what to do. Humans have approximately 86 billion neurons in the brain that direct us to do things like lift an arm or remember a name.

Yet only a few thousand genes are responsible for creating those neurons.

All cells in the human nervous system have the same genetic information. But ultimately, genes are turned "on" or "off" like a light switch to give neurons specific features and roles.

Understanding the mechanism of how a gene is or is not turned on - the process known as gene expression - could help explain how so many neurons are developed in humans and other mammals.

"Studies like this are showing how by unique combinations of specific genes, you can make different specific neurons," said Adam D. Norris, co-author of the new study and Floyd B. James Assistant Professor in the Department of Biological Sciences at SMU. "So down the road, this could help us explain: No. 1, how did our brain get this complex? And No. 2, how can we imitate nature and make whatever type of neurons we might be interested in following these rules?"

Scientists already have part of the gene expression puzzle figured out, as previous studies have shown that proteins called transcription factors play a key role in helping to turn specific genes on or off by binding to nearby DNA.

It is also known that a process called RNA splicing, which is controlled by RNA binding proteins, can add an additional layer of regulation in the neuron. Once a gene is turned on, different versions of the RNA molecule can be created by RNA splicing.

But before the SMU study, published in the journal eLife, it was not exactly clear how that diversity is created.

"Before this, scientists had mostly been focused on transcription factors, which is layer No. 1 of gene expression. That's the layer that usually gets focused on as generating specific neuron types," Norris said. "We're adding that second layer and showing that [transcription factors and RNA binding proteins] have to be coordinated properly.

And Norris noted, "this was the first time where coordination of gene expression has been identified in a single neuron."

Using a combination of old-school and cutting-edge genetics techniques, researchers looked at how the RNA of a gene called sad-1, also found in humans, was spliced in individual neurons of the worm Caenorhabditis elegans. They found that sad-1 was turned on in all neurons but underwent different splicing patterns in different neuron types.

And while transcription factors were not shown to be directly participating in the RNA splicing for the sad-1 gene, they were activating genes that code for RNA binding proteins differently between different types of neurons. It is these RNA binding proteins that control RNA splicing.

"Once that gene was turned on, these factors came in and subtly changed the content of that gene," Norris said.

As a result, sad-1 was spliced according to neuron-specific patterns.

They also found that the coordinated regulation had different details in different neurons.

"Picture two different neurons wanting to reach the same goal. You can imagine they either go through the exact same path to get there or they take divergent paths. In this study, we're showing that the answer so far is divergent paths," said Norris. "Even in a single neuron, there are multiple different layers of gene expression that together make that neuron the unique neuron that it is."

Norris used worm neurons because "unlike in humans, we know where every worm neuron is and what it should be up to. Therefore, we can very confidently know which genes are responsible for which neural process.

"The very specific details from this study will not apply to humans. But hopefully the principles involved will," Norris explained. "From the last few decades of work in the worm nervous system, specific genes found to have a specific effect on the worm's behavior were later shown to be responsible for the same types of things in human nerves."

Credit: 
Southern Methodist University

Sum of three cubes for 42 finally solved -- using real-life planetary computer

Hot on the heels of the ground-breaking 'Sum-Of-Three-Cubes' solution for the number 33, a team led by the University of Bristol and Massachusetts Institute of Technology (MIT) has solved the final piece of the famous 65-year-old maths puzzle with an answer for the most elusive number of all - 42.

The original problem, set in 1954 at the University of Cambridge, looked for solutions of the Diophantine equation x^3 + y^3 + z^3 = k, with k taking every value from one to 100.

Beyond the easily found small solutions, the problem soon became intractable as the more interesting answers - if indeed they existed - could not possibly be calculated, so vast were the numbers required.
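
For intuition, those easily found small solutions can be recovered by brute force. Here is a minimal, illustrative sketch in Python; the real searches for 33 and 42 used far more sophisticated number-theoretic algorithms over vastly larger ranges:

    # Naive bounded search for integer solutions of x^3 + y^3 + z^3 = k.
    def small_solutions(k, bound=50):
        sols = []
        for x in range(-bound, bound + 1):
            for y in range(x, bound + 1):      # enforce x <= y <= z to avoid
                for z in range(y, bound + 1):  # duplicate permutations
                    if x**3 + y**3 + z**3 == k:
                        sols.append((x, y, z))
        return sols

    print(small_solutions(29))  # finds (1, 1, 3): 1 + 1 + 27 = 29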

But slowly, over many years, each value of k was eventually solved for (or proved unsolvable), thanks to sophisticated techniques and modern computers - except the last two, the most difficult of all; 33 and 42.

Fast forward to 2019 and Professor Andrew Booker's mathematical ingenuity plus weeks on a university supercomputer finally found an answer for 33, meaning that the last number outstanding in this decades-old conundrum, the toughest nut to crack, was that firm favourite of Douglas Adams fans everywhere.

However, solving 42 was another level of complexity. Professor Booker turned to MIT maths professor Andrew Sutherland, a world record breaker with massively parallel computations, and - as if by further cosmic coincidence - secured the services of a planetary computing platform reminiscent of "Deep Thought", the giant machine which gives the answer 42 in Hitchhiker's Guide to the Galaxy.

Professors Booker and Sutherland's solution for 42 would be found by using Charity Engine: a 'worldwide computer' that harnesses idle, unused computing power from over 500,000 home PCs to create a crowd-sourced, super-green platform made entirely from otherwise wasted capacity.

The answer, which took over a million hours of calculating to prove, is as follows:

X = -80538738812075974
Y = 80435758145817515
Z = 12602123297335631
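
The arithmetic is easy to verify in any language with arbitrary-precision integers, for example Python:

    # Check the Booker-Sutherland solution for k = 42.
    x = -80538738812075974
    y = 80435758145817515
    z = 12602123297335631
    print(x**3 + y**3 + z**3)  # prints 42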

And with these almost infinitely improbable numbers, the famous Diophantine equation problem of 1954 may finally be laid to rest for every value of k from one to 100 - even 42.

Professor Booker, who is based at the University of Bristol's School of Mathematics, said: "I feel relieved. In this game it's impossible to be sure that you'll find something. It's a bit like trying to predict earthquakes, in that we have only rough probabilities to go by.

"So, we might find what we're looking for with a few months of searching, or it might be that the solution isn't found for another century."

Credit: 
University of Bristol

GIS and eDNA analysis system successfully used to discover new habitats of rare salamander

image: This is a Yamato salamander (Hynobius vandenburghi) and its egg sac (lower right).

Image: 
Koki Tsunekawa (Nature and Science Club Bioscience team leader, Gifu Senior High School 3rd grade)

A research team has successfully identified an unknown population of the endangered Yamato salamander (Hynobius vandenburghi) in Gifu Prefecture, using a methodology combining GIS and eDNA analysis. This method could be applied to other critically endangered species, in addition to being utilized to locate small organisms that are difficult to find using conventional methods.

The study was conducted by students from the Bioscience team in Gifu Senior High School's Nature and Science Club (which has been conducting research into the species for 13 years). They were supervised by teachers and aided by university researchers, including Professor Toshifumi Minamoto from Kobe University's Graduate School of Human Development and Environment. The project was a collaboration between Gifu Senior High School, Kobe University, Gifu University and Gifu World Freshwater Aquarium.

It has been reported that there are approximately 50 Hynobius species of salamander worldwide, around 30 of which are endemic to Japan. Hynobius vandenburghi (until recently known by its previous classification, H. nebulosus) is only found in central and western Japan, with Gifu Prefecture marking the north-eastern limit of the species' distribution. However, like approximately 60% of amphibian species in Japan, it is ranked among critically endangered and vulnerable species, mainly due to habitat decline. Until recently, only three sites providing habitats for Yamato salamanders had been discovered in Gifu Prefecture.

The research team utilized a combined methodology of GIS and eDNA analysis with the aim of discovering more Yamato salamander habitats. GIS (Geographic Information System) is a spatial analysis tool that allows data and geographic information to be collected, displayed and analyzed. Environmental DNA analysis involves locating DNA of the species in the environment (in this case in water samples) to understand what kind of organisms live in that habitat.

First of all, environmental factors (such as vegetation, elevation, and gradient inclination and direction) present near the known habitats in Gifu Prefecture were identified, and this information was entered into the GIS to locate new potential habitats. This resulted in a total of five new potential sites being discovered: three in Gifu City and one each in Kaizu and Seki cities.
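
To illustrate the screening logic of this GIS step, here is a hypothetical sketch in Python; the factor names, values and scoring rule below are invented for illustration and are not those used by the team:

    # Rank candidate sites by how closely their environmental profile matches
    # the average profile of known Yamato salamander habitats.
    known_habitats = [
        {"elevation_m": 40, "slope_deg": 3},
        {"elevation_m": 55, "slope_deg": 5},
        {"elevation_m": 35, "slope_deg": 2},
    ]
    candidates = {
        "site A": {"elevation_m": 45, "slope_deg": 4},
        "site B": {"elevation_m": 320, "slope_deg": 25},
    }

    factors = ["elevation_m", "slope_deg"]
    mean = {f: sum(h[f] for h in known_habitats) / len(known_habitats)
            for f in factors}

    def mismatch(site):
        # Normalized distance from the mean habitat profile (smaller = better).
        return sum(abs(site[f] - mean[f]) / mean[f] for f in factors)

    for name in sorted(candidates, key=lambda n: mismatch(candidates[n])):
        print(name, round(mismatch(candidates[name]), 2))  # site A ranks first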

Next, each site was visited and water samples were taken. Yamato salamander often lay their egg sacs in shallow water near rice paddies and wooded areas, so the water samples were taken from these environments. The samples were then analyzed for Yamato salamander eDNA. eDNA was discovered in the water from the Kaizu City site, the Seki City site and one of the Gifu City sites.

Field surveys were also conducted to find eggs or adult salamanders at each of the sites where eDNA was discovered. A single pair of egg sacs was found at the Kaizu City site. This lends support to the idea that the combined methodology of GIS and eDNA analysis can be successfully utilized to find new habitats of rare and elusive species like the Yamato salamander.

As this research was carried out by supervised high school students, it is anticipated that this combined methodology can be utilized not only by experts but also as a useful tool for citizen-led conservation efforts. Another advantage of the GIS and eDNA analysis method is that it requires less time, energy and funds compared to conventional field capture (locating animal specimens). This could prove invaluable for identifying and protecting the habitats of endangered species in the face of rapidly declining biodiversity worldwide.

Credit: 
Kobe University

Role of cancer protein ARID1A at intersection of genome stability and tumor suppression

image: Rugang Zhang, Ph.D., The Wistar Institute

Image: 
The Wistar Institute

PHILADELPHIA -- (Sept. 6, 2019) -- The ARID1A tumor suppressor protein is required to maintain telomere cohesion and correct chromosome segregation after DNA replication. This finding, reported by Wistar researchers in Nature Communications, indicates that ARID1A-mutated cells undergo gross genomic alterations that are not compatible with survival, and it explains the characteristic lack of genomic instability in ARID1A-mutated cancers.

The ARID1A gene is one of the most frequently mutated genes in human cancers, with a mutation rate that reaches 60% in ovarian clear cell carcinoma, a disease known for its poor response to chemotherapy. ARID1A exerts fundamental tumor suppressive functions whose loss is conducive to cancer development.

"Although ARID1A is known to act as a guardian of genome integrity, cancer types with high frequency of ARID1A mutations are not typically associated with genomic instability as measured by changes in gene copy numbers," said Rugang Zhang, Ph.D., deputy director of The Wistar Institute Cancer Center, professor and co-program leader of the Gene Expression and Regulation Program, and lead author on the new study.

Zhang and colleagues have now shown that ARID1A is essential for telomere cohesion as it controls expression of STAG1, a component of the cohesin protein complex. In fact, ablation of ARID1A function in vitro leads to downregulation of STAG1 and, consequently, telomere defects and chromosomal alterations during cell division.

Importantly, cells with inactivated ARID1A showed reduced ability to form colonies in an assay that is used as a measure of survival of cancer cells during cell division, indicating that the chromosomal defects occurring in the absence of ARID1A are not compatible with cell survival.

"We identified a selection process that enriches for cancer cells lacking genomic instability," said Zhang. "This may explain the apparent paradox of preserved genomic stability in ARID1A-mutated cancers."

"Interestingly, ARID1A inactivation correlates with a poor response to cell division-targeting chemotherapy such as paclitaxel," said Bo Zhao, Ph.D., first author of the study and a postdoctoral researcher in the Zhang Lab. "Our findings might partly explain why clear cell ovarian cancers typically respond poorly to this class of chemotherapy."

Credit: 
The Wistar Institute

Measuring changes in magnetic order to find ways to transcend conventional electronics

image: A combination of Faraday rotation and second-harmonic generation measurements traced the trajectory of an optically induced coherent spin precession. Time-resolved SHG is a valuable tool for the study of antiferromagnetic spin dynamics, providing complementary information that is inaccessible by other techniques.

Image: 
Tokyo Tech

Researchers around the world are constantly looking for ways to enhance or transcend the capabilities of electronic devices, which seem to be reaching their theoretical limits. Undoubtedly, one of the most important advantages of electronic technology is its speed, which, albeit high, can still be surpassed by orders of magnitude through other approaches that are not yet commercially available.

A possible way of surpassing traditional electronics is through the use of antiferromagnetic (AFM) materials. The electrons of AFM materials spontaneously align themselves in such a way that the overall magnetization of the material is practically zero. In fact, the order of an AFM material can be quantified in what is known as the 'order parameter'. Recent studies have even shown that the AFM order parameter can be 'switched' (that is, changed from one known value to another very quickly) using light or electric currents, which means that AFM materials could become the building blocks of future electronic devices.

However, the dynamics of the order-switching process are not well understood because it is very difficult to measure the changes in the AFM order parameter in real time with high resolution. Current approaches rely on measuring only certain phenomena during AFM order switching and trying to obtain the full picture from there, which has proven to be unreliable for understanding other more intricate phenomena in detail. Therefore, a research team led by Prof. Takuya Satoh from Tokyo Tech, together with researchers from ETH Zurich, developed a method for thoroughly measuring the changes in the AFM order of a YMnO3 crystal induced through optical excitation (that is, using a laser).

The main problem that the researchers addressed was the alleged "practical impossibility" of discerning between electron dynamics and changes in the AFM order in real time, which are both induced simultaneously when the material is excited to provoke order-parameter switching and when taking measurements. They employed a light-based measuring method called 'second-harmonic generation', whose output value is directly related to the AFM order parameter, and combined it with measurements of another light-based phenomenon called the Faraday effect. This effect occurs when a certain type of light or laser is irradiated on magnetically ordered materials; in the case of YMnO3, this effect alters its AFM order parameter in a predictable and well-understood way. This was key to their approach so that they could separate the origin and nature of multiple simultaneous quantum phenomena that affected the measurements of both methods differently.
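
In schematic terms, the two probes are complementary because they couple to different quantities. The relations below are textbook-level forms assumed here for illustration, not the paper's full tensor analysis: the second-harmonic intensity tracks the AFM order parameter (written \ell), while the Faraday rotation angle tracks the net magnetization M along the beam.

    % Assumed schematic probe relations:
    I(2\omega) \propto \lvert \chi^{(2)} \rvert^{2}, \qquad
    \chi^{(2)} \propto \ell \;\Rightarrow\; I(2\omega) \propto \ell^{2}, \qquad
    \theta_{F} \propto M

Measuring both signals on the same sample therefore allows the order-parameter dynamics to be disentangled from the transient magnetization.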

Combining these two different measurement methods, the researchers managed to fully characterize the changes in the AFM order parameter in real time with ultrafast resolution. "The proposed general approach allows us to access order-parameter dynamics at timescales of less than one trillionth of a second," states Prof. Satoh. The approach presented is crucial for better understanding the inner workings of antiferromagnetic materials. "Precise and thorough tracking of the variations in the order parameter is indispensable for understanding the complex dynamics occurring during ultrafast switching and other AFM-related phenomena," explains Prof. Satoh. The tool provided by the researchers should now be exploited to carry out more research and hopefully bring about the development of revolutionary electronic devices with unprecedented speeds.

Credit: 
Tokyo Institute of Technology

New study confirms protective effect of diabetes drugs against kidney failure

A new meta-analysis published in The Lancet Diabetes & Endocrinology today has found that SGLT2 inhibitors can reduce the risk of dialysis, transplantation, or death due to kidney disease in people with type 2 diabetes.

Lead author Dr Brendon Neuen from The George Institute for Global Health commented: "We found SGLT2 inhibitors clearly and powerfully reduce the risk of kidney failure."

"These findings confirm those of the recently reported CREDENCE trial, where canagliflozin was shown to prevent loss of kidney function and kidney failure in people with type 2 diabetes."

"Ongoing trials of other SGLT2 inhibitors will definitively demonstrate whether all agents in the class have similar kidney benefits, but these results provide further strong support for the key role of SGLT2 inhibition in kidney protection for people with diabetes today."

In an editorial accompanying the publication Richard Gilbert, Professor of Medicine at the University of Toronto, commented: "After years of stagnation, we are now on the brink of a new paradigm in the prevention and treatment of kidney disease in people with type 2 diabetes."

The development of kidney failure is among the most important consequences of diabetic kidney disease, with profound impacts on patients and their caregivers. Currently more than 3 million people worldwide are estimated to be receiving treatment for kidney failure and that number is predicted to increase to more than 5 million by 2035.

SGLT2 inhibitors were developed to lower glucose levels in people with diabetes. Early studies showed they reduced levels of protein in the urine, leading to great hopes they would protect against kidney failure. Since then, several large studies have been designed to examine whether SGLT2 inhibitors prevent heart attack, stroke and kidney disease.

The authors conducted a meta-analysis, pooling data from major randomised controlled trials of SGLT2 inhibitors that reported effects on kidney outcomes in people with type 2 diabetes.

Four studies involving almost 40,000 participants were included in the meta-analysis, which assessed three SGLT2 inhibitors - canagliflozin, empagliflozin, and dapagliflozin - although most of the findings came from the CREDENCE study of canagliflozin. The results revealed:

SGLT2 inhibitors reduced the risk of dialysis, transplantation, or death due to kidney disease by about 30%

SGLT2 inhibitors also reduced the risk of kidney failure by 30% and reduced the risk of acute kidney injury by 25%

Co-author, Associate Professor Meg Jardine from The George Institute for Global Health commented: "Clinical practice guidelines currently recommend treatment with angiotensin-converting enzyme (ACE) inhibitors or angiotensin receptor blockers (ARBs) to slow the progression of kidney disease in people with diabetes."

"But the risk of developing kidney failure remains high and diabetes is now the most common reason for people needing dialysis."

"The results of this meta-analysis are very encouraging for people with diabetic kidney disease. As more treatment options become available to halt the progression of the disease, it is hoped that fewer will go on to require more invasive and costly interventions such as dialysis and transplantation."

Credit: 
George Institute for Global Health

SRL publishes focus section on subduction zone processes in the Americas

The eastern Pacific Ocean margin stretching from Mexico to southern Chile offers seismologists a "natural laboratory" in which to study and test ideas about the processes of subduction zones, which have produced some of the world's largest recorded earthquakes as well as phenomena such as volcanism and tsunami generation.

In a focus section published in Seismological Research Letters, researchers from around the globe share what they've learned from an unprecedented amount of data collected in the Latin American Subduction Zone over the past two decades. In their introduction to the section, editors Carlos A. Vargas of the Universidad Nacional de Colombia at Bogota, Susan L. Beck of the University of Arizona, Marino Protti of Universidad Nacional de Costa Rica, and Jaime A. Campos of the University of Chile say the data have led to "new ideas about the spectrum of the physical processes involved in seismic rupture, geometry of subduction, triggered [seismic] events, nonvolcanic tremors and the earthquake cycle."

Several of the papers in the section discuss past major earthquakes in Mexico and Chile, with an eye to characterizing the properties of each event that may help explain earthquake cycles and calculate future seismic hazards in the region. In their paper about the magnitude 8.8 Maule earthquake in 2010 and the magnitude 8.3 Illapel earthquake in 2015, for instance, Leonardo Aguirre and colleagues at the University of Concepcion discuss how satellite observations of crust deformation can be used to build a model of crust movement related to motion along a subducting slab during different parts of the earthquake cycle for central Chile. Analysis of the 2014 magnitude 7.3 Papanoa earthquake by Pouye Yazdi of the Universidad Politécnica de Madrid in Spain could offer valuable data to inform seismic hazard estimates in the central part of the Mexican subduction zone.

Tsunamis are part of the subduction zone's past and probable near future, and understanding the possible scenarios to generate tsunamis at this interface is important for safety planning and economic mitigation, write Mauricio Fuentes Serrano of the University of Chile and colleagues. Probability maps created from these scenarios suggest that northern Chile has a significant tsunami hazard, the researchers conclude.

Two papers take a closer look at tectonic phenomena at some of the complex plate boundaries in the zone, including an analysis of tectonic tremor along the convergent boundaries between the Nazca and South American plates and among the Nazca, South American and Antarctic plates. Increasingly, these observations are made with the help of ocean-bottom seismometers.

Other papers in the section examine subduction zone structure where the Rivera oceanic plate dives beneath the North American plate in Mexico's Jalisco Block and south of María Cleofas Island, as well as the complex structures below southern Peru associated with the subducting Nazca plate.

Credit: 
Seismological Society of America

UTA scientist explores using nanoparticles to reduce size of deep-seated tumors

Another collaborative project from a nanoparticles expert at The University of Texas at Arlington has yielded promising results in the search for more effective, targeted cancer treatments.

Wei Chen, in collaboration with colleagues from the University of Rhode Island and Brown University, recently published a new paper in the Proceedings of the National Academy of Sciences journal. The team investigated the use of X-rays and copper-cysteamine (Cu-Cy) nanoparticles to treat deep-seated tumors, resulting in statistically significant reduction in tumor size.

The project continues Chen's work developing photosensitizer nanoparticles for use in photodynamic therapy, a promising cancer treatment method that activates reactive oxygen species to kill cancer cells without damaging the healthy cells around them. Reactive oxygen species, a natural byproduct of the body's metabolism of oxygen, help kill toxins in the body but can also be harmful to cells if they reach critical levels.

"Traditional nanoparticles used for photodynamic therapy can only be activated by light that is not highly penetrating, meaning we're limited with how deep we can go to target tumors," said Chen, a UTA physics professor. "Cu-Cy is unique among other nanoparticles we've investigated because it can be activated by radiation, which can penetrate far deeper and reach tumors throughout the body."

The paper is the result of an ongoing collaboration between Chen and Michael Antosh, assistant professor of physics and medical physics program director at the University of Rhode Island.

"Building on Wei's expertise in this important research area, my lab focused on the experiments testing the effect of the nanoparticles on tumor size after radiation therapy," Antosh said. "With the help of expert analysis by Jing Wu, assistance professor of statistics at the University of Rhode Island, we have enough data to draw reliable conclusions for the first time that radiation and Cu-Cy nanoparticles act as an effective cancer treatment."

The discovery builds upon another paper by Chen that describes how nanoparticles can be activated effectively by microwaves for cancer cell destruction.

"We continue to pursue multiple combinations within photodynamic therapy," Chen said. "Each new discovery represents potential new avenues for treating cancer patients, because we can leave their healthy cells virtually unaffected."

Ten scientists served as authors on this paper, representing an inter-institutional, multidisciplinary effort encompassing physics, statistics and health science. Included on the team is Leon Cooper, a Nobel Prize laureate from Brown University.

Proceedings of the National Academy of Sciences is the official journal of the National Academy of Sciences, the foremost authority in the U.S. on original research and impactful scientific discoveries.

"It is an honor to have our work highlighted in this important publication," Chen said. "This collaborative group has great synergy, and we are grateful that this is evidenced by the results of this study."

Alex Weiss, chair of UTA's physics department, said Chen's latest study demonstrates his commitment to discovering more effective cancer treatments.

"He reaches across disciplines and to other institutions in order to fortify his research and maximize impact," Weiss said. "We are grateful and proud to have Dr. Chen represent UTA in this important field of research on national and international scales."

This is the second paper Antosh and Chen have written together, with several more expected to come. Chen is an official collaborator on a new grant Antosh has received to further investigate the Cu-Cy particles.

"The overall goal is to someday use these nanoparticles during radiation therapy in cancer patients," Antosh said. "The next step is to investigate some clinically important variables in order to adapt the methods from this pilot study to what happens in treatments in clinical settings."

Credit: 
University of Texas at Arlington

Study shows the social benefits of political incorrectness

When Rep. Alexandria Ocasio-Cortez refers to immigrant detention centers as "concentration camps," or President Trump calls immigrants "illegals," they may take some heat for being politically incorrect. But using politically incorrect speech brings some benefits: It's a powerful way to appear authentic.

Researchers at Berkeley Haas found that replacing even a single politically correct word or phrase with a politically incorrect one--"illegal" versus "undocumented" immigrants, for example--makes people view a speaker as more authentic and less likely to be swayed by others.

"The cost of political incorrectness is that the speaker seems less warm, but they also appear less strategic and more 'real,'" says Asst. Prof. Juliana Schroeder, co-author of the paper, which includes nine experiments with almost 5,000 people and is forthcoming in The Journal of Personality and Social Psychology. "The result may be that people may feel less hesitant in following politically incorrect leaders because they appear more committed to their beliefs."

Although politically correct speech is more often defended by liberals and derided by conservatives, the researchers also found there's nothing inherently partisan about the concept. In fact, conservatives are just as likely to be offended by politically incorrect speech when it's used to describe groups they care about, such as evangelicals or poor whites.

"Political incorrectness is frequently applied toward groups that liberals tend to feel more sympathy towards, such as immigrants or LGBTQ individuals, so liberals tend to view it negatively and conservatives tend to think it's authentic," says Berkeley Haas PhD candidate Michael Rosenblum, the lead author of the paper (the third co-author is Francesca Gino of Harvard Business School). "But we found that the opposite can be true when such language is applied to groups that conservatives feel sympathy for--like using words such as 'bible thumper' or 'redneck'."

The researchers asked participants of all ideological backgrounds how they would define political correctness. The definition that emerged was "using language or behavior to seem sensitive to others' feelings, especially those others who seem disadvantaged." In order to study the phenomenon across the political spectrum, they focused on politically incorrect labels, such as "illegal immigrants," rather than political opinions, such as "illegal immigrants are destroying America."

That allowed them to gauge people's reactions when just a single word or phrase was changed in otherwise identical statements. They found that most people, whether they identified as moderate liberals or conservatives, viewed politically incorrect statements as more authentic. They also thought they could better predict politically incorrect speakers' other opinions, reading the word choice as a sign of conviction.

In one field experiment, the researchers found that using politically correct language gives the illusion that the speaker can be more easily influenced. They asked 500 pre-screened pairs of people to have an online debate on a topic they disagreed on: funding for historically black churches. (The topic was selected because it had a roughly 50/50 split for and against in a pilot survey; no significant difference in support and opposition across political ideology; and involved both a racial minority and religious beliefs.) Before the conversation, one partner was instructed to either use politically correct or incorrect language in making their points.

Afterwards, people believed they had better persuaded the politically correct partners than the politically incorrect partners. Their partners, however, reported being equally persuaded whether they had used PC or politically incorrect language. "There was a perception that PC speakers were more persuadable, though in reality they weren't," Rosenblum said.

Although President Trump's wildly politically incorrect statements seem to make him more popular in certain circles, copycat politicians should take heed. The researchers found that politically incorrect statements make a person appear significantly colder, and because they appear more convinced of their beliefs, they may also appear less willing to engage in crucial political dialogue.

Credit: 
University of California - Berkeley Haas School of Business

Lessons in learning

For decades, there has been evidence that active learning - classroom techniques designed to get students to participate in the learning process - produces better educational outcomes for students at virtually all levels.

And a new Harvard study suggests it may be important to let students know it.

The study, published September 4 in the Proceedings of the National Academy of Sciences, shows that, though students felt like they learned more through traditional lectures, they actually learned more when taking part in active learning classrooms.

Lead author Louis Deslauriers, director of science teaching and learning and senior physics preceptor, knew that students would learn more from active learning. He published a key study in Science in 2011 that showed just that. But many students and faculty remained hesitant to switch to active learning.

"Often, students seemed genuinely to prefer smooth-as-silk traditional lectures," Deslauriers said. "We wanted to take them at their word. Perhaps they actually felt like they learned more from lectures than they did from active learning."

In addition to Deslauriers, the study is authored by Director of Science Education and Lecturer on Physics Logan McCarty, senior preceptor in Applied Physics Kelly Miller, preceptor in Physics Greg Kestin, and Kristina Callaghan, now a lecturer in Physics at the University of California, Merced.

The question of whether students' perceptions of their learning matches with their actual learning is particularly important, Deslauriers said, because though students eventually see the value of active learning, it can initially feel frustrating.

"Deep learning is hard work. The effort involved in active learning can be misinterpreted as a sign of poor learning," he said. "On the other hand, a superstar lecturer can explain things in such a way as to make students feel like they are learning more than they actually are."

To understand that dichotomy, Deslauriers and his co-authors designed an experiment that would expose students in an introductory physics class to both traditional lectures and active learning.

For the first 11 weeks of the 15-week class, students were taught using standard methods by an experienced instructor. In the 12th week, though, things changed - half the class was randomly assigned to a classroom that used active learning, while the other half attended highly polished lectures. In a subsequent class, the two groups were reversed. Notably, both groups used identical class content and only active engagement with the material was toggled on and off.

Following each class, students were surveyed on how much they agreed or disagreed with statements like "I feel like I learned a lot from this lecture" and "I wish all my physics courses were taught this way." Students were also tested on how much they learned in the class with 12 multiple choice questions.

When the results were tallied, the authors found that students felt like they learned more from the lectures, but in fact scored higher on tests following the active learning sessions.

"Actual learning and feeling of learning were strongly anticorrelated," Deslauriers said, "as shown through the robust statistical analysis by co-author Kelly Miller, who is an expert in educational statistics and active learning."

But those results, the study authors warned, shouldn't be interpreted as suggesting students dislike active learning. In fact, many studies have shown students quickly warm to the idea once they begin to see the results.

"In all the courses at Harvard that we've transformed to active learning," Deslauriers said, "the overall course evaluations went up."

"It can be tempting to engage the class simply by folding lectures into a compelling 'story,' especially when that's what students seem to like," said Kestin, a co-author of the study, who is a physicist and a video producer with NOVA | PBS. "I show my students the data from this study on the first day of class to help them appreciate the importance of their own involvement in active learning."

McCarty, who oversees curricular efforts across the sciences, hopes this study will encourage more faculty colleagues to use active learning in their courses.

"We want to make sure that other instructors are thinking hard about the way they're teaching," he said. "In our classes, we start each topic by asking students to gather in small groups to solve some problems. While they work, we walk around the room to observe them and answer questions. Then we come together and give a short lecture targeted specifically at the misconceptions and struggles we saw during the problem-solving activity. So far we've transformed over a dozen classes to use this kind of active learning approach. It's extremely efficient - we can cover just as much material as we would using lectures."

A pioneer in work on active learning, Professor of Physics Eric Mazur hailed the study as debunking long-held beliefs about how students learn.

"This work unambiguously debunks the illusion of learning from lectures," he said. "It also explains why instructors and students cling to the belief that listening to lectures constitutes learning. I recommend every lecturer reads this article."

The work also earned accolades from Dean of Science Christopher Stubbs, Professor of Physics and of Astronomy, who was an early convert to this style of active learning.

"When I first switched to teaching using active learning, some students resisted that change," he said. "This research confirms that faculty should persist and encourage active learning. Active engagement in every classroom, led by our incredible science faculty, should be the hallmark of residential undergraduate education at Harvard."

Ultimately, Deslauriers said, the study shows that it's important to ensure that both instructors and students aren't fooled into thinking that lectures - even well-presented ones - are the best learning option.

"A great lecture can get students to feel like they are learning a lot," he said. "Students might give fabulous evaluations to an amazing lecturer based on this feeling of learning, even though their actual learning isn't optimal. This could help to explain why study after study shows that student evaluations seem to be completely uncorrelated with actual learning."

Credit: 
Harvard University

Researchers find alarming risk for people coming off chronic opioid prescriptions

With a huge push to reduce opioid prescribing, little is known about the real-world benefits or risks to patients.

A recent study published in the Journal of General Internal Medicine found an alarming outcome: Patients coming off opioids for pain were three times more likely to die of an overdose in the years that followed.

"We are worried by these results, because they suggest that the policy recommendations intended to make opioid prescribing safer are not working as intended," said lead author Jocelyn James, assistant professor of general internal medicine at the University of Washington School of Medicine. "We have to make sure we develop systems to protect patients."

Physicians had already begun to reduce opioid prescribing by 2016, when the CDC issued its first guideline on the practice; that trend accelerated after 2016.

While reduced prescribing may well be intended to improve patient safety, little is known about the real-world benefits or risks of this sea change in opioid prescribing.

The observational study looked at a cohort of 572 patients with chronic pain enrolled in an opioid registry. Chronic opioid therapy was discontinued in 344 patients and 187 continued to visit a primary care clinic. During the study period, 119 registry patients died (20.8%); 21 patients died of a definitive or possible overdose - 17 were discontinued patients and four were patients being seen at a clinic.

As the researchers concluded: "Discontinuing chronic opioid therapy was associated with increased risk of death."

Researchers said that improved clinical strategies, including multimodal pain management and treatment of opioid-use disorder, may be needed for this high-risk group.

At the time of this study, state rules did not allow medication treatment of opioid-use disorder in the primary care setting, said co-author Joseph Merrill, professor of general internal medicine at the University of Washington School of Medicine.

But after those rules changed, he said the addiction clinic at Harborview has developed a strong program to provide medication treatment for opioid-use disorder, including those who develop problems related to prescription pain medication.

"We hope these findings encourage others who prescribe opioids to do the same," Merrill said.

Researchers said the UW Medicine study is the third study published this year to look at the risks of stopping opioids:

A study published in the Journal of Substance Abuse Treatment found that among patients at high dose who stopped opioids, almost half had their doses reduced to 0 in a single day and many wound up in emergency departments.

A New York study in the Journal of General Internal Medicine found that ending opioid prescriptions was often followed by an end to the care relationship.

This spring the United States Food and Drug Administration issued a warning that suddenly stopping opioids can present a risk to patients.

Credit: 
University of Washington School of Medicine/UW Medicine

Largest-ever ancient-DNA study illuminates millennia of South and Central Asian prehistory

image: The first sequenced genome from an archaeological site associated with the ancient Indus Valley Civilization came from this woman buried at the city of Rakhigarhi.

Image: 
Vasant Shinde/Cell

The largest-ever study of ancient human DNA, along with the first genome of an individual from the ancient Indus Valley Civilization, reveals in unprecedented detail the shifting ancestry of Central and South Asian populations over time.

The research, published online Sept. 5 in a pair of papers in Science and Cell, also answers longstanding questions about the origins of farming and the source of Indo-European languages in South and Central Asia.

Geneticists, archaeologists and anthropologists from North America, Europe, Central Asia and South Asia analyzed the genomes of 524 never-before-studied ancient individuals. The work increased the worldwide total of published ancient genomes by about 25 percent.

By comparing these genomes to one another and to previously sequenced genomes, and by putting the information into context alongside archaeological, linguistic and other records, the researchers filled in many of the key details about who lived in various parts of this region from the Mesolithic Era (about 12,000 years ago) to the Iron Age (until around 2,000 years ago) and how they relate to the people who live there today.

"With this many samples, we can detect subtle interactions between populations as well as outliers within populations, something that has only become possible in the last couple of years through technological advances," said David Reich, co-senior author of both papers and professor of genetics in the Blavatnik Institute at Harvard Medical School.

"These studies speak to two of the most profound cultural transformations in ancient Eurasia--the transition from hunting and gathering to farming and the spread of Indo-European languages, which are spoken today from the British Isles to South Asia--along with the movement of people," said Vagheesh Narasimhan, co-first author of both papers and a postdoctoral fellow in the Reich lab. "The studies are particularly significant because Central and South Asia are such understudied parts of the world."

"One of the most exciting aspects of this study is the way it integrates genetics with archaeology and linguistics," said Ron Pinhasi of the University of Vienna, co-senior author of the Science paper. "The new results emerged after combining data, methods and perspectives from diverse academic disciplines, an integrative approach that provides much more information about the past than any one of these disciplines could alone."

"In addition, the introduction of new sampling methodologies allowed us to minimize damage to skeletons while maximizing the chance of obtaining genetic data from regions where DNA preservation is often poor," Pinhasi added.

Language key

Indo-European languages--including Hindi/Urdu, Bengali, Punjabi, Persian, Russian, English, Spanish, Gaelic and more than 400 others--make up the largest language family on Earth.

For decades, specialists have debated how Indo-European languages made their way to distant parts of the world. Did they spread via herders from the Eurasian Steppe? Or did they travel with farmers moving west and east from Anatolia (present-day Turkey)?

A 2015 paper by Reich and colleagues indicated that Indo-European languages arrived in Europe via the steppe. The Science study now makes a similar case for South Asia by showing that present-day South Asians have little if any ancestry from farmers with Anatolian roots.

"We can rule out a large-scale spread of farmers with Anatolian roots into South Asia, the centerpiece of the 'Anatolian hypothesis' that such movement brought farming and Indo-European languages into the region," said Reich, who is also an investigator of the Howard Hughes Medical Institute and the Broad Institute. "Since no substantial movements of people occurred, this is checkmate for the Anatolian hypothesis."

One new line of evidence in favor of a steppe origin for Indo-European languages is the detection of genetic patterns that connect speakers of the Indo-Iranian and Balto-Slavic branches of Indo-European. The researchers found that present-day speakers of both branches descend from a subgroup of steppe pastoralists who moved west toward Europe almost 5,000 years ago and then spread back eastward into Central and South Asia in the following 1,500 years.

"This provides a simple explanation in terms of ancient movements of people for the otherwise puzzling shared linguistic features of these two branches of Indo-European, which today are separated by vast geographic distances," said Reich.

A second line of evidence in favor of a steppe origin is the researchers' discovery that of the 140 present-day South Asian populations analyzed in the study, a handful show a remarkable spike in ancestry from the steppe. All but one of these steppe-enriched populations are historically priestly groups, including Brahmins--traditional custodians of texts written in the ancient Indo-European language Sanskrit.

"The finding that Brahmins often have more steppe ancestry than other groups in South Asia, controlling for other factors, provides a fascinating new argument in favor of a steppe origin for Indo-European languages in South Asia," said Reich.

"This study has filled in a large piece of the puzzle of the spread of Indo-European," said co-author Nick Patterson, research fellow in genetics at HMS and a staff scientist at the Broad Institute of MIT and Harvard. "I believe the high-level picture is now understood."

"This problem has been in the air for 200 years or more and it's now rapidly being sorted out," he added. "I'm very excited by that."

Agriculture origins

The studies inform another longstanding debate, this one about whether the change from a hunting and gathering economy to one based on farming was driven more by movements of people, the copying of ideas or local invention.

In Europe, ancient-DNA studies have shown that agriculture arrived along with an influx of people with ancestry from Anatolia.

The new study reveals a similar dynamic in Iran and Turan (southern Central Asia), where the researchers found that Anatolian-related ancestry and farming arrived around the same time.

"This confirms that the spread of agriculture entailed not only a westward route from Anatolia to Europe but also an eastward route from Anatolia into regions of Asia previously only inhabited by hunter-gatherer groups," said Pinhasi.

Then, as farming spread northward through the mountains of Inner Asia thousands of years after taking hold in Iran and Turan, "the links between ancestry and economy get more complex," said archaeologist Michael Frachetti of Washington University in St. Louis, co-senior author who led much of the skeletal sampling for the Science paper.

By around 5,000 years ago, the researchers found, southwestern Asian ancestry flowed north along with farming technology, while Siberian or steppe ancestry flowed south onto the Iranian plateau. The two-way pattern of movement took place along the mountains, a corridor that Frachetti previously showed was a "Bronze Age Silk Road" along which people exchanged crops and ideas between East and West.

In South Asia, however, the story appears quite different. Not only did the researchers find no trace of the Anatolian-related ancestry that is a hallmark of the spread of farming to the west, but the Iranian-related ancestry they detected in South Asians comes from a lineage that separated from ancient Iranian farmers and hunter-gatherers before those groups split from each other.

The researchers concluded that farming in South Asia was not due to the movement of people from the earlier farming cultures of the west; instead, local foragers adopted it.

"Prior to the arrival of steppe pastoralists bringing their Indo-European languages about 4,000 years ago, we find no evidence of large-scale movements of people into South Asia," said Reich.

First glimpse of the ancestry of the Indus Valley Civilization

Running from the Himalayas to the Arabian Sea, the Indus River Valley was the site of one of the first civilizations of the ancient world, flourishing between 4,000 and 5,000 years ago. People built towns with populations in the tens of thousands. They used standardized weights and measures and exchanged goods with places as far-flung as East Africa.

But who were they?

Before now, geneticists were unable to extract viable data from skeletons buried at Indus Valley Civilization archaeological sites because the heat and volatile climate of lowland South Asia have degraded most DNA beyond scientists' ability to analyze it.

The Cell paper changes this.

After screening more than 60 skeletal samples from the largest known town of the Indus Valley Civilization, called Rakhigarhi, the authors found one with a hint of ancient DNA. After more than 100 sequencing attempts, they generated enough data to reach meaningful conclusions.

The ancient woman's genome matched those of 11 other ancient people reported in the Science paper who lived in what is now Iran and Turkmenistan at sites known to have exchanged objects with the Indus Valley Civilization. All 12 had a distinctive mix of ancestry, including a lineage related to Southeast Asian hunter-gatherers and an Iranian-related lineage specific to South Asia. Because this mix was different from the majority of people living in Iran and Turkmenistan at that time, the authors propose that the 11 individuals reported in the Science paper were migrants, likely from the Indus Valley Civilization.

None of the 12 had evidence of ancestry from steppe pastoralists, consistent with the model that steppe pastoralists had not yet arrived in South Asia.

The Science paper further showed that after the decline of the Indus Valley Civilization between 4,000 and 3,500 years ago, a portion of the group to which these 12 individuals belonged mixed with people coming from the north who had steppe pastoralist ancestry, forming the Ancestral North Indians, one of the two primary ancestral populations of present-day people in India. A portion of the original group also mixed with people from peninsular India to form the other primary source population, the Ancestral South Indians.

"Mixtures of the Ancestral North Indians and Ancestral South Indians--both of whom owe their primary ancestry to people like that of the Indus Valley Civilization individual we sequenced--form the primary ancestry of South Asians today," said Patterson.

"The study directly ties present-day South Asians to the ancient peoples of South Asia's first civilization," added Narasimhan.

The authors caution that analyzing the genome of only one individual limits the conclusions that can be drawn about the entire population of the Indus Valley Civilization.

"My best guess is that the Indus Valley Civilization itself was genetically extremely diverse," said Patterson. "Additional genomes will surely enrich the picture."

Credit: 
Harvard Medical School

Exotic physics phenomenon is observed for first time

CAMBRIDGE, Mass. - An exotic physical phenomenon, involving optical waves, synthetic magnetic fields, and time reversal, has been directly observed for the first time, following decades of attempts. The new finding could lead to realizations of what are known as topological phases, and eventually to advances toward fault-tolerant quantum computers, the researchers say.

The new finding involves the non-Abelian Aharonov-Bohm effect and is reported today in the journal Science by MIT graduate student Yi Yang, MIT visiting scholar Chao Peng (a professor at Peking University), MIT graduate student Di Zhu, Professor Hrvoje Buljan at the University of Zagreb in Croatia, Francis Wright Davis Professor of Physics John Joannopoulos at MIT, Professor Bo Zhen at the University of Pennsylvania, and MIT professor of physics Marin Soljacic.

The finding relates to gauge fields, which describe transformations that particles undergo. Gauge fields fall into two classes, known as Abelian and non-Abelian. The Aharonov-Bohm effect, named after the theorists who predicted it in 1959, confirmed that gauge fields -- beyond being a pure mathematical aid -- have physical consequences.

But those observations only worked in Abelian systems, in which gauge fields are commutative -- applying them in either order, or equivalently running the process forward or backward in time, gives the same result. In 1975, Tai-Tsun Wu and Chen-Ning Yang generalized the effect to the non-Abelian regime as a thought experiment. Nevertheless, it remained unclear whether the effect could ever be observed in a non-Abelian system: physicists lacked ways of creating it in the lab, and also lacked ways of detecting it even if it could be produced. Now both of those puzzles have been solved, and the observations have been carried out successfully.
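To make the Abelian/non-Abelian distinction concrete, here is a minimal numerical sketch (my illustration, not taken from the paper): ordinary complex phase factors, the currency of Abelian gauge fields, always commute, whereas the matrix-valued transformations characteristic of non-Abelian gauge fields generally do not.

```python
import numpy as np

# Abelian case: scalar phase factors always commute.
phase1, phase2 = np.exp(1j * 0.3), np.exp(1j * 1.1)
print(np.isclose(phase1 * phase2, phase2 * phase1))   # True

# Non-Abelian case: matrix-valued transformations, here two
# rotations about different axes, generally do not commute.
def rot_x(theta):
    """Rotation generated by the Pauli-x matrix."""
    return np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                     [-1j * np.sin(theta / 2), np.cos(theta / 2)]])

def rot_z(theta):
    """Rotation generated by the Pauli-z matrix."""
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

A, B = rot_x(0.8), rot_z(1.4)
print(np.allclose(A @ B, B @ A))   # False: order matters
```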

The effect has to do with one of the strange and counterintuitive aspects of modern physics: the fact that virtually all fundamental physical phenomena are invariant under time reversal. The details of the way particles and forces interact can run either forward or backward in time, and a movie of how the events unfold can be played in either direction, so there's no way to tell which is the real version. But a few exotic phenomena violate this time symmetry.

Creating the Abelian version of the Aharonov-Bohm effect requires breaking time-reversal symmetry, a challenging task in itself, Soljacic says. Achieving the non-Abelian version of the effect requires breaking time-reversal symmetry multiple times, and in different ways, making it an even greater challenge.

To produce the effect, the researchers used photon polarization. They produced two different kinds of time-reversal breaking, using fiber optics to create two types of gauge fields that affected the geometric phases of the optical waves: first, by sending the waves through a crystal biased by powerful magnetic fields; and second, by modulating them with time-varying electrical signals. Both methods break time-reversal symmetry. The team was then able to produce interference patterns that revealed differences in how the light was affected when sent through the fiber-optic system in opposite directions, clockwise versus counterclockwise. Without the breaking of time-reversal invariance, the beams should have been identical; instead, their interference patterns revealed specific sets of differences as predicted, demonstrating the details of the elusive effect.
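The clockwise-versus-counterclockwise comparison can be sketched numerically. The toy model below is my own construction, not the team's actual apparatus: it treats the beam's polarization as a two-component Jones vector and the two time-reversal-breaking elements as non-commuting 2x2 matrices, so reversing the traversal order changes the output state -- the difference that shows up in the interference patterns.

```python
import numpy as np

# Toy model of the measurement logic (hypothetical parameters).
def faraday(theta):
    """Polarization rotation, standing in for a magnetically
    biased crystal."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def modulator(phi):
    """Relative phase between polarization components, standing in
    for time-varying electro-optic modulation."""
    return np.diag([np.exp(1j * phi / 2), np.exp(-1j * phi / 2)])

F, M = faraday(0.6), modulator(1.2)
psi_in = np.array([1.0, 0.0])      # horizontally polarized input

psi_cw = M @ F @ psi_in            # "clockwise" traversal order
psi_ccw = F @ M @ psi_in           # "counterclockwise" (reversed) order

# An overlap below 1 means the two directions produce
# distinguishable states, hence different interference patterns.
overlap = abs(np.vdot(psi_cw, psi_ccw))
print(f"State overlap between directions: {overlap:.3f}")
```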

The original, Abelian version of the Aharonov-Bohm effect "has been observed with a series of experimental efforts, but the non-Abelian effect has not been observed until now," Yang says. The finding "allows us to do many things," he says, opening the door to a wide variety of potential experiments, in both classical and quantum regimes, to explore variations of the effect.

The experimental approach devised by this team "might inspire the realization of exotic topological phases in quantum simulations using photons, polaritons, quantum gases, and superconducting qubits," Soljacic says. For photonics itself, this could be useful in a variety of optoelectronic applications, he says. In addition, the non-Abelian gauge fields that the group was able to synthesize produced a non-Abelian Berry phase, and "combined with interactions, it may potentially one day serve as a platform for fault-tolerant topological quantum computation," he says.

At this point, the experiment is primarily of interest for fundamental physics research, with the aim of gaining a better understanding of some basic underpinnings of modern physical theory. The many possible practical applications "will require additional breakthroughs going forward," Soljacic says.

For one thing, for quantum computation the experiment would need to be scaled up from a single device to what would likely be a whole lattice of them. And instead of the beams of laser light used in this experiment, it would require working with a source of individual photons. But even in its present form, the system could be used to explore questions in topological physics, a very active area of current research, Soljacic says.

Credit: 
Massachusetts Institute of Technology