Tech

Biologists capture fleeting interactions between regulatory proteins and their genome-wide targets

New York University biologists captured highly transient interactions between transcription factors--proteins that control gene expression--and target genes in the genome and showed that these typically missed interactions have important practical implications. In a new study published in Nature Communications, the researchers developed a method to capture transient interactions of NLP7, a master transcription factor involved in nitrogen use in plants, revealing that the majority of a plant's response to nitrogen is controlled by these short-lived regulatory interactions.

"Our approaches to capturing transient transcription factor-target interactions genome-wide can be applied to validate dynamic interactions of transcription factors for any pathway of interest in agriculture or medicine," said Gloria Coruzzi, Carroll & Milton Petrie Professor in NYU's Department of Biology and Center for Genomics and Systems Biology and the paper's senior author.

Dynamic interactions between regulatory proteins and DNA are important for triggering controlled expression of genes into RNA in response to a changing cellular or external environment. However, the underlying transient interactions between transcription factors and their genome-wide targets have been largely missed, as current biochemical methods require stable--not fleeting--interactions between a transcription factor and its DNA target.

In the Nature Communications study, the researchers witnessed these elusive transient interactions between NLP7, a master transcription factor in plants that regulates genes involved in nitrogen uptake for plant growth, and its target genes. Nitrogen is a key nutrient for plant development and is found in soil and fertilizer.

The researchers captured highly transient interactions of NLP7 with genome-wide targets that even defied capture by biochemical detection methods performed within minutes of NLP7 nuclear import. They did this by fusing NLP7 to a DNA methylation enzyme from bacteria, which they then induced to enter the nucleus of a plant cell. Whenever NLP7 touched a gene--even briefly--it left a permanent methylation mark on the DNA. They also showed that this highly transient interaction between NLP7 and its target genes in the genome led to new and continued transcription of the gene into RNA.

"We found that more than 50 percent of the genes regulated by NLP7 in whole plants involve highly transient transcription factor-DNA interactions that occur within five minutes of controlled NLP7 nuclear import captured in isolated plant cells. Moreover, the transient NLP7 binding activates a transcriptional cascade that regulates more than 50 percent of the nitrogen responsive genes in whole plant roots," explained Coruzzi.

Given that more than half of gene responses to nitrogen in plants are controlled by transient interactions with NLP7, the researchers note that the discovery of these elusive genome-wide targets of NLP7 has implications for improving nitrogen use efficiency, which can benefit agriculture and sustainability.

Credit: 
New York University

Re-thinking 'tipping points' in ecosystems and beyond

image: Two evolutionary spaces illustrate how a small change in environmental conditions with few immediate effects opens up a gradual path toward regime change.

Image: 
André de Roos

When a grassland becomes a desert, or a clearwater lake shifts to turbid, the consequences can be devastating for the species that inhabit them. These abrupt environmental changes, known as regime shifts, are the subject of new research in Nature Ecology & Evolution which shows how small environmental changes trigger slow evolutionary processes that eventually precipitate collapse.

Until now, research into regime shifts has focused on critical environmental thresholds, or "tipping points," in external conditions -- e.g., when crossing a certain temperature threshold triggers a sudden shift to desertification. But the new model by Catalina Chaparro-Pedraza and Santa Fe Institute External Professor André de Roos, both at the University of Amsterdam, reveals how a small change in the external environment, with little immediate impact, can induce slow evolutionary changes in the species that inhabit the system. After what the researchers call a "considerable delay," during which species slowly evolve a new trait or behavior over generations, the regime shift eventually manifests.

"Instead of looking for a straightforward relationship between environmental tipping points and ecosystemic collapse, our work brings evolution into the picture," Chaparro-Pedraza explains. "Even though the outcome is the same, we think it's critically important to map out different paths that lead to regime shifts so we can predict and eventually prevent them."

In their model, the researchers demonstrate how these evolution-induced, delayed regime shifts arise in communities of salmon. At different stages of their lives, salmon live in freshwater and marine ecosystems, which host entirely different biological communities. When a slight change in the marine environment reduces mortality in the saltwater salmon population, the immediate effects are minor. However, it initiates an evolutionary process that slowly drives individual character traits, like the optimal body size for migrating from the river to the open ocean, to a critical threshold where a regime shift occurs. Remarkably, this regime shift produces dramatic changes in community composition in both the freshwater and marine communities simultaneously, even though nothing changed in the environmental conditions of the freshwater community.
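
To see how such a delayed shift can play out in principle, consider a deliberately simplified toy simulation (a sketch in Python, not the authors' salmon model; every equation and parameter below is invented for illustration): a trait slowly adapts to a new optimum after a small environmental change, and the ecosystem state, governed by bistable dynamics, collapses only once the drifting trait pushes the system past a fold.

```python
# Hypothetical toy sketch (not the authors' model; all parameters invented):
# a trait x slowly adapts toward a new optimum after a small environmental
# change, while an ecosystem state y follows bistable dynamics. y stays on its
# upper branch long after the environmental change and collapses only once the
# drifting trait pushes the system past a fold bifurcation -- the
# "considerable delay" described above.

def simulate(generations=4000):
    x = 0.0          # evolving trait (e.g., body size at migration), at the old optimum
    y = 1.0          # ecosystem state (e.g., density in one habitat), on the upper branch
    x_opt = 1.0      # new trait optimum created by the small environmental change
    k_evo = 0.002    # slow evolutionary rate: adaptation takes many generations
    history = []
    for t in range(generations):
        x += k_evo * (x_opt - x)            # slow adaptation toward the new optimum
        a = 0.3 - 0.8 * x                   # the trait shifts the bifurcation parameter
        y += 0.05 * (-(y ** 3) + y + a)     # bistable ecosystem dynamics (fold near a = -0.385)
        history.append((t, x, y))
    return history

history = simulate()
# Find the generation at which the ecosystem state drops to the lower branch.
collapse = next((t for t, x, y in history if y < -0.5), None)
print("regime shift occurs around generation:", collapse)
```

In this sketch the environmental change happens at generation zero, yet the collapse arrives only many hundreds of generations later, once the slowly evolving trait has crossed the critical value.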

Understanding the role of evolutionary processes in regime shifts could also shed light on other complex, interdependent systems. De Roos and Chaparro-Pedraza also examined data from the 2008 financial crisis, which, according to de Roos, "seem pretty much in line with the adaptation-induced regime shift we report in this paper." In this example, the 2008 crash can be seen as the delayed regime shift. The deregulation of the financial system in the 1970s and 1980s would be the environmental change with a negligible immediate effect, and the documented trend of banks changing their debt-to-asset ratio would be analogous to the evolutionary process triggered by the environmental change.

"Regime shifts don't just happen in ecosystems," says de Roos. "They also appear in systems like stock markets. Our model shows the evolutionary mechanism by which a sudden change -- like an ecosystem or financial collapse -- may be the result of a small environmental change in the distant past."

Credit: 
Santa Fe Institute

Exposure to 'fake news' during the 2016 US election has been overstated

Since the 2016 U.S. presidential election, debates have raged about the reach of so-called "fake news" websites and the role they played during the campaign. A study published in Nature Human Behaviour finds that the reach of these untrustworthy websites has been overstated.

To assess the audience for "fake news," researchers at Dartmouth, Princeton and the University of Exeter measured visits to these dubious and unreliable websites during the period before and immediately after the election using an online survey of 2,525 Americans and web traffic data collected by YouGov Pulse (Oct. 7 - Nov. 16, 2016) from respondents' laptops or desktop computers. This method avoids the problems with asking people to recall which websites they visited, an approach that is plagued with measurement error.

According to the findings, less than half of all Americans visited an untrustworthy website. Moreover, untrustworthy websites accounted for only six percent of all Americans' news diets on average.

Visits to dubious news sites differed sharply along ideological and partisan lines. Content from untrustworthy conservative sites accounted for nearly 5 percent of people's news diets compared to less than 1 percent for untrustworthy liberal sites. Respondents who identified themselves as Trump supporters were also more likely to visit an untrustworthy site (57 percent) than those who indicated that they were Clinton supporters (28 percent).

The data also revealed that Facebook was the most prominent gateway to untrustworthy websites; respondents were more likely to have visited Facebook than Google, Twitter or a webmail platform such as Gmail in the period immediately before visiting an untrustworthy website.

Finally, the study demonstrates that fact-checking websites appeared to be relatively ineffective in reaching the audiences of untrustworthy websites. Only 44 percent of respondents who visited such a website also visited a fact-checking site during the study, and almost none of them had read a fact-check debunking specific claims made in a potentially questionable article.

"These findings show why we need to measure exposure to 'fake news' rather than just assuming it is ubiquitous online," said Brendan Nyhan, a professor of government at Dartmouth. "Online misinformation is a serious problem, but one that we can only address appropriately if we know the magnitude of the problem."

Credit: 
Dartmouth College

Quantifying objects: bees recognize that six is more than four

Writing in iScience, zoologists have shown that insects have the cognitive abilities to perform so-called numerosity estimation, allowing them to solve simple mathematical problems. Zoologist Professor Dr Martin Paul Nawrot and doctoral student Hannes Rapp from the 'Computational Systems Neuroscience' research group at the University of Cologne demonstrated these abilities in a computational model inspired by the honeybee.

'Experiments showed that insects such as honeybees can actually "count" up to a certain number of objects. For example, bees were able to compare sets of objects and evaluate whether they were the same size or whether one set was larger than the other', said Hannes Rapp, explaining the underlying question of what is known as numerical cognition. For example, the bee recognized that six diamonds are more than four circles.

So far, it has been unclear how the neuronal network for this cognitive ability is constructed. Earlier theoretical models had assumed a fixed, hard-wired circuit with four neurons involved, one for each of the four arithmetical operations 'equal to', 'zero', 'more than' and 'less than', explained Professor Nawrot. 'However, our computer model showed that not four, but only one neuron is sufficient. The action potential of a single neuron varies depending on the math problem - and this response can be trained.' As a result, the researchers identified a comparatively simple model with which a neural network can learn to solve numerical cognition tasks.
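
The idea that a single trainable neuron can compare set sizes can be illustrated with a minimal sketch in Python. This is not the authors' spiking-neuron model: it uses a hypothetical leaky-integrator unit and a perceptron-style threshold update purely to show how one neuron's graded response to a sequence of items can support a 'more than' judgement.

```python
# Minimal, hypothetical sketch (not the authors' spiking model): a single
# leaky-integrator "neuron" attends to the items in a display one at a time,
# so its final activation grows with the number of items. Comparing the
# activations for two displays answers "which set is larger", and a single
# trained threshold implements "more than" against a reference numerosity.
import numpy as np

def neuron_response(n_items, leak=0.9, input_weight=1.0):
    """Activation of one neuron after sequentially attending to n_items objects."""
    v = 0.0
    for _ in range(n_items):
        v = leak * v + input_weight   # each attended item adds a fixed input
    return v

# "Six is more than four": the same neuron responds more strongly to six items.
print(neuron_response(6) > neuron_response(4))    # True

# Train a threshold so the neuron's response alone decides "more than four?".
rng = np.random.default_rng(0)
threshold = 0.0
for _ in range(200):
    n = rng.integers(1, 9)                        # random display of 1..8 items
    target = 1.0 if n > 4 else 0.0                # task: "more than four?"
    out = 1.0 if neuron_response(n) > threshold else 0.0
    threshold += 0.1 * (out - target)             # simple perceptron-style update
print("learned threshold:", round(threshold, 2))
print("6 > 4 ?", neuron_response(6) > threshold, " 3 > 4 ?", neuron_response(3) > threshold)
```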

According to Nawrot, this model can also help artificial neural networks learn: 'A lot of money has already been invested into training artificial neural networks to visually recognize the number of objects. Deep learning methods in particular enable counting by the explicit or implicit recognition of several relevant objects within a static scene', Nawrot added. 'However, these model classes are expensive because they usually have to be trained on a very large number of patterns, in the millions, and often require cloud computing clusters. Our honeybee-inspired approach with a simple model and learning algorithm reduces this effort many times over.'

Credit: 
University of Cologne

Artificial intelligence could enhance diagnosis and treatment of sleep disorders

DARIEN, IL - Artificial intelligence has the potential to improve efficiencies and precision in sleep medicine, resulting in more patient-centered care and better outcomes, according to a new position statement from the American Academy of Sleep Medicine.

Published online as an accepted paper in the Journal of Clinical Sleep Medicine, the position statement was developed by the AASM's Artificial Intelligence in Sleep Medicine Committee. According to the statement, the electrophysiological data collected during polysomnography -- the most comprehensive type of sleep study -- is well-positioned for enhanced analysis through AI and machine-assisted learning.

"When we typically think of AI in sleep medicine, the obvious use case is for the scoring of sleep and associated events," said lead author and committee Chair Dr. Cathy Goldstein, associate professor of sleep medicine and neurology at the University of Michigan. "This would streamline the processes of sleep laboratories and free up sleep technologist time for direct patient care."

Because of the vast amounts of data collected by sleep centers, AI and machine learning could advance sleep care, resulting in more accurate diagnoses, prediction of disease and treatment prognosis, characterization of disease subtypes, precision in sleep scoring, and optimization and personalization of sleep treatments. Goldstein noted that AI could be used to automate sleep scoring while identifying additional insights from sleep data.

"AI could allow us to derive more meaningful information from sleep studies, given that our current summary metrics, for example, the apnea-hypopnea index, aren't predictive of the health and quality of life outcomes that are important to patients," she said. "Additionally, AI might help us understand mechanisms underlying obstructive sleep apnea, so we can select the right treatment for the right patient at the right time, as opposed to one-size-fits-all or trial and error approaches."

Important considerations for the integration of AI into the sleep medicine practice include transparency and disclosure, testing on novel data, and laboratory integration. The statement recommends that manufacturers disclose the intended population and goal of any program used in the evaluation of patients; test programs intended for clinical use on independent data; and aid sleep centers in evaluation of AI-based software performance.

"AI tools hold great promise for medicine in general, but there has also been a great deal of hype, exaggerated claims and misinformation," explained Goldstein. "We want to interface with industry in a way that will foster safe and efficacious use of AI software to benefit our patients. These tools can only benefit patients if used with careful oversight."

The position statement, and a detailed companion paper on the implications of AI in sleep medicine, are available on the Journal of Clinical Sleep Medicine website.

Credit: 
American Academy of Sleep Medicine

The 'Monday effect' is real -- and it's impacting your Amazon package delivery

image: Oliver Yao is a professor of decision and technology analytics at Lehigh University, where he holds the George N. Beckwith '32 Professorship.

Image: 
Lehigh University

The "Monday Effect" is real - and it's impacting your Amazon package delivery.

So says researcher Oliver Yao, a professor of decision and technology analytics in Lehigh University's College of Business.

He's found that the "Monday Effect" - that letdown of returning to work after a weekend, which is documented to impact finance, productivity and psychology - also negatively affects supply chains.

Working with researchers at the University of Maryland and University of California, San Diego, Yao found that process interruptions that occur when operations are shut down over the weekend, along with human factors like the "Monday blues," hurt supply chain performance on Mondays. That means a longer time between when a purchase order is received and when it is shipped, as well as more errors in order fulfillment.

It's the first study to look at the impact of the "Monday Effect" on supply chains, the sequence of processes that move a product or service from creation to customer. The findings are published in a new article, "'Monday Effect' Performance Variations in Supply Chain Fulfillment: How IT-Enabled Procurement May Help," appearing in Information Systems Research. Co-authors are Martin Dresner of the Robert H. Smith School of Business at University of Maryland and Kevin Xiaoguo Zhu of the Rady School of Management at University of California, San Diego.

Significant shipping delays

Yao and colleagues used a dataset of more than 800,000 transaction records gathered during a 12-month period from the U.S. General Services Administration to look at variations in operations performance by day of the week. They also analyzed order and fulfillment data from one of the largest supermarket chains in China.

They found the "Monday Effect" was prevalent and significant. For example, time between receipt of a purchase order and shipping is 9.68 percent longer on Mondays than other weekdays, on average, said Yao, who holds the George N. Beckwith '32 Professorship. Mondays, it turns out, are subject to both process- and human-related impacts.

Weekends create bottlenecks at distribution centers that are tackled on Mondays as orders are processed, picked, staged and shipped to customers. Humans completing these processing activities are still adjusting to being back at work, making them more prone to errors and less efficient.

Most supply chain managers are unaware of this impact, Yao said. But they can take steps to counteract the "Monday Effect."

Combating the "Monday Effect"

Strategies include increased staffing on Mondays (or any day returning from a break, including holidays), fewer Monday meetings and non-fulfillment activities, better training, additional pay or mood-lifters such as free coffee or motivational talks, and double-checking Monday work.

The most effective way to reduce the Monday performance gap is integrating technology solutions, such as automated order processing systems, said Yao, who found using electronic markets can improve Monday performance by as much as 90 percent.

For example, technology reduces the Monday performance gap by 94 percent in order-to-shipping time, 71 percent in complete orders fulfilled, and 80 percent in the portion of shipments that have incorrect numbers of products.

Technology was most useful in orders of specialized, less frequently purchased or high-value products, about which employees might be less knowledgeable.

"Technology is more helpful in substituting for labor when humans are more prone to making mistakes," the researchers said. "Computer-to-computer links avoid potential human effects resulting from the weekend break."

After all, for computers and machines, Mondays are just another day.

Credit: 
Lehigh University

Speak math, not code

image: Eminent computer scientist Leslie Lamport, winner of the 2013 Turing Award, speaking at the dialogue held in conjunction with the SMU-Global Young Scientists Summit 2020.

Image: 
Rebecca Tan

SMU Office of Research & Tech Transfer - Have you ever followed a recipe to bake some bread? If you have, congratulations; you have executed an algorithm. The algorithms that follow us around the internet to suggest items we might like, and those that control what shows up in our Facebook feeds may seem mysterious and uncanny at times. Yet, an algorithm is simply a set of instructions to be completed in a specified sequence, whether by human bakers or computer programs.

The difference, however, lies in how the algorithm is expressed. Recipes are written in English or other spoken languages, while computer programs are written in programming languages, or code. According to Leslie Lamport, winner of the 2013 Turing Award, thinking mathematically can be a useful first step in specifying the algorithm behind a computer program, as it can help programmers clarify their thinking and make programs more efficient.

"Most programmers just start writing code; they don't even know what the algorithm is. It's like starting to build without a blueprint," said Dr. Lamport, speaking at an exclusive dialogue at the Singapore Management University (SMU) on 14 January 2020, held in conjunction with the SMU-Global Young Scientists Summit 2020.

"And the result? The program is hard to debug and inefficient because you would be trying to optimise at the code level rather than at the algorithm level. We should do what almost every other field of science and engineering does: initially describe the problem with math instead."

Why math is better than code

Using Euclid's algorithm as an example, Dr. Lamport walked the audience through how an algorithm can be expressed precisely yet simply with mathematics. Described by ancient Greek mathematician Euclid in 300 BC, Euclid's algorithm is a method for identifying the greatest common divisor (GCD) of two numbers, that is, the largest number that can divide the two numbers without leaving a remainder. For example, the GCD of the numbers 15 and 12 is 3.

The method is simple: subtract the smaller number from the larger number, then repeat this until both numbers are the same; the resulting number is the GCD. The entire procedure can be described in a single mathematical formula, said Dr. Lamport, who is recognised for developing the widely used LaTeX document preparation system, in addition to his pioneering work on distributed computing systems.
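
As an illustration, the subtraction form of Euclid's algorithm described above fits in a few lines. This is a plain Python sketch of the procedure, not Dr. Lamport's mathematical specification or a TLA+ model:

```python
def gcd(m: int, n: int) -> int:
    """Euclid's algorithm by repeated subtraction: subtract the smaller number
    from the larger until both are equal; that common value is the GCD.
    (As noted in the text, handling zero or negative inputs is the program's
    job, not the algorithm's, so we simply require positive integers here.)"""
    assert m > 0 and n > 0, "inputs must be positive integers"
    while m != n:
        if m > n:
            m = m - n
        else:
            n = n - m
    return m

print(gcd(15, 12))   # 3, matching the example in the text
```

Writing the idea down this plainly, before worrying about edge cases and performance, is the kind of blueprint-first thinking Dr. Lamport advocates.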

In contrast, writing Euclid's algorithm in code is more time consuming and cumbersome, and therefore harder to debug if it is not working correctly. "Euclid's program would have to contain a lot of lower level details, like what you should do if either number is less than or equal to zero," Dr. Lamport said. "You would have to decide that if you are writing a computer program but it's not the algorithm's problem."

How much more efficient would using math instead of code be? When engineers used TLA+, a high-level formal specification language based on mathematics that Dr. Lamport developed to model, document and verify concurrent computing systems, they were able to dramatically reduce the size of an operating system originally used to control some experiments on the Rosetta spacecraft. "One of the results of specifying the software logic with TLA+ was that the code size was reduced to about ten times less than the original," Dr. Lamport said. "You don't reduce the code size by ten times by better coding; you do it by cleaner architecture, which is just another word for a better algorithm."

On top of being more efficient, taking a mathematical approach has the additional benefit of making debugging easier. Amazon Web Services and Microsoft Azure engineers use TLA+ for their cloud services, Dr. Lamport said, and through it have found bugs in their system designs that could not be found via any other technique.

Get comfortable with math

Although math is both powerful and elegant when it comes to describing algorithms, many people - including computer programmers and engineers - are intimidated by it and shy away from using it. "Some students have asked us when can they stop doing and reviewing the math and start the software programming," said Professor Steven Miller, Vice Provost (Research) at SMU and formerly the Founding Dean of the School of Information Systems.

Dr. Lamport believes that getting used to 'speaking' in mathematics is a matter of exposure. "Why is 'two plus two equals four' considered simple, but a logical operation like 'an element of' hard to understand for most people? Logical operations such as 'element of' simply mean that something is part of a bunch of other things. That concept doesn't require you to learn any complicated thing like counting, as counting is actually quite complicated," he said.

"Why should 'element of' seem frightening when 'plus' seems so easy? It's just a matter of not being familiar with it, and this is not all your own fault - mathematicians are terrible at teaching it."

For Dr. Lamport, becoming fluent in mathematics is the first step, but for mathematical thinking to truly impact the way algorithms are written, it has to change the way we think. "I want to emphasise that mathematics doesn't solve the problem for you; you have to solve the problem," he said. "Thinking mathematically will help you solve the problem; and mathematics helps to ensure that the solution was right."

Credit: 
Singapore Management University

Quarantine on cruise ship resulted in more coronavirus patients

image: Joacim Rocklöv, Professor at the Department of Public Health and Clinical Medicine, Umeå University, Sweden

Image: 
Mattias Pettersson

The cruise ship Diamond Princess was quarantined for over two weeks, resulting in more coronavirus-infected passengers than if they had disembarked immediately -- the opposite of what was intended. This is according to a study conducted at Umeå University in Sweden.

"The infection rate onboard the vessel was about four times higher than what can be seen on land in the worst infected areas of China. A probable cause is how close people stay to one another onboard a vessel," says Joacim Rocklöv, Professor of epidemiology at Umeå University and principal author of the article.

After a person travelling on the cruise ship Diamond Princess disembarked in Hong Kong and tested positive for the coronavirus, Japanese authorities decided not to allow the 3,700 passengers onboard to leave the ship when it reached Yokohama. The ship was hence put in quarantine until 19 February. Passengers who showed signs of illness were, as far as possible, separated from other passengers onboard. When the quarantine in Yokohama was eventually lifted and passengers could finally disembark, a total of 619 passengers had been infected by the coronavirus.

"If the ship had been immediately evacuated upon arrival in Yokohama, and the passengers who tested positive for the coronavirus and potential others in the risk zone had been taken care of, the scenario would have looked quite different. Our calculations show that only around 70 passengers would have been infected. A number that greatly falls short of the over 600 passengers the quarantine resulted in. The precautionary measure of putting the entire ship under quarantine was understandable, but due to the high risk of transmission on the ship, the decision is now questionable," says Joacim Rocklöv.

At the same time, the study also shows that if the precautionary measures of isolating potential carriers had not been carried out onboard, another 2,300 people would have been infected.

Credit: 
Umeå University

Unique material could unlock new functionality in semiconductors

TROY, N.Y. -- If new and promising semiconductor materials are to make it into our phones, computers, and other increasingly capable electronics, researchers must obtain greater control over how those materials function.

In an article published today in Science Advances, Rensselaer Polytechnic Institute researchers detailed how they designed and synthesized a unique material with controllable capabilities that make it very promising for future electronics.

The researchers synthesized the material -- an organic-inorganic hybrid crystal made up of carbon, iodine, and lead -- and then demonstrated that it was capable of two material properties previously unseen in a single material. It exhibited spontaneous electric polarization that can be reversed when exposed to an electric field, a property known as ferroelectricity. It simultaneously displayed a type of asymmetry known as chirality -- a property that makes two distinct objects, like right and left hands, mirror images of one another but not able to be superimposed.

According to Jian Shi, an associate professor of materials science and engineering at Rensselaer, this unique combination of ferroelectricity and chirality is advantageous. When combined with the material's conductivity, both of these characteristics can enable other electrical, magnetic, or optical properties.

"What we have done here is equip a ferroelectric material with extra functionality, allowing it to be manipulated in previously impossible ways," Shi said.

The experimental discovery of this material was inspired by theoretical predictions by Ravishankar Sundararaman, an assistant professor of materials science and engineering at Rensselaer. A ferroelectric material with chirality, Sundararaman said, can be manipulated to respond differently to left- and right-handed light so that it produces specific electric and magnetic properties. This type of light-matter interaction is particularly promising for future communication and computing technologies.

Credit: 
Rensselaer Polytechnic Institute

Hunter-gatherers facilitated a cultural revolution through small social networks

image: The study, published in Science Advances, mapped close-range social interactions of Agta hunter-gatherers in the Philippines using radio sensor technology to record close range interactions between individuals every hour for one month. The researchers observed inter-camp migrations and visits almost on a daily basis.

Image: 
University of Zurich and UCL

Hunter-gatherer ancestors, from around 300,000 years ago, facilitated a cultural revolution by developing ideas in small social networks, and regularly drawing on knowledge from neighbouring camps, suggests a new study by UCL and University of Zurich.

The study, published in Science Advances, mapped close-range social interactions of Agta hunter-gatherers in the Philippines using radio sensor technology to record close range interactions between individuals every hour for one month. The researchers observed inter-camp migrations and visits almost on a daily basis.

The anthropologists found that the social structure of Agta hunter-gatherers, built around small family units linked by strong friendships and high between-camp mobility, was key to the development of new cultural ideas. This is because the social structure allowed for the co-existence of multiple traditions or solutions to a similar problem in different parts of the network. The researchers highlight that this is distinct from the closely bound society of our closest cousins, chimpanzees.

Professor Andrea Migliano (UCL Anthropology & University of Zurich), the first author of the paper, said: "It is fair to say that 'visits between camps' is the social media of current hunter-gatherers, and probably of our extinct hunter-gatherer ancestors.

"When we need a new solution for a problem, we go online and use multiple sources to obtain information from a variety of people. Hunter-gatherers use their social network exactly in the same way. The constant visits between camps are essential for information to be recombined and continuously generate cultural innovations."

The anthropologists tested the effect of the Agta social structure -- sparsely populated, close-knit groups -- on the evolution of cultural complexity using an agent-based model that simulated the creation of a new medicinal drug, starting from an original set of six medicinal plants.

This process was first simulated across the real social network of Agta hunter-gatherers. In this case, pairs of individuals were selected, based on the strength of their social ties, to combine different medicinal plants and share the discovery of any new super medicine with their close family ties. Second, the process was simulated over an artificial and fully connected network of a similar size, where all individuals were connected to each other and immediately transmitted any discoveries to all network members.

Contrary to some predictions, rates of cultural evolution were much higher across the real hunter-gatherer social networks. While fully connected networks spread innovations more quickly, the real hunter-gatherer networks promote the independent evolution of multiple medicines in different clusters of the network (different camps, households, family clusters) that can later be recombined, producing a more complex culture.
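
The recombination mechanism can be illustrated with a minimal sketch in Python. It is not the authors' agent-based model and does not attempt to reproduce their comparison of network types; it only shows how independently evolved camp traditions can be recombined during a visit.

```python
# A minimal, hypothetical sketch of the mechanism described above, not the
# authors' agent-based model (all details are invented): two camps experiment
# independently on the same six starting plants, each building up its own
# lineage of intermediate medicines, and a between-camp visit then recombines
# the two traditions into a compound neither camp produced on its own.
import random

PLANTS = ["p1", "p2", "p3", "p4", "p5", "p6"]   # the original six medicinal plants
rng = random.Random(42)

def innovate(tradition):
    """Combine two items already known in a camp into a new, more complex medicine."""
    a, b = rng.sample(tradition, 2)
    return f"({a}+{b})"

camp_a = list(PLANTS)
camp_b = list(PLANTS)

# Independent experimentation: with no contact, the camps' traditions drift apart.
for _ in range(4):
    camp_a.append(innovate(camp_a))
    camp_b.append(innovate(camp_b))

print("camp A's most complex medicine:", camp_a[-1])
print("camp B's most complex medicine:", camp_b[-1])

# A visit between camps recombines the two independently evolved traditions,
# producing a more complex medicine that draws on both clusters of the network.
print("recombined after a visit:", f"({camp_a[-1]}+{camp_b[-1]})")
```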

Dr Lucio Vinicius (UCL Anthropology & University of Zurich), the last author of the paper, said: "Previous studies have shown that fluid social structures already characterised expanding Upper Palaeolithic human populations and that long-range cultural exchange in the Homo sapiens lineage dates back to at least 320,000 years ago.

"However, the link we found between cultural evolution and the fluid sociality of hunter-gatherers indicates that as hunter-gatherers expanded within and then out of Africa, this social structure of small and interconnected bands may have facilitated the sequence of cultural and technological revolutions that characterises our species."

Dr Mark Dyble (UCL Anthropology), co-author of the paper, said: "Humans have a unique capacity to create and accumulate culture. From a simple pencil to the International Space Station, human culture is a product of multiple minds over many generations, and cannot be recreated from scratch by one single individual.

"This capacity for cumulative culture is central to humanity's success, and evolved in our past hunter-gatherer ancestors. Our work shows that the kind of social organisation that is typical of contemporary hunter-gatherers serves to promote cultural evolution. If this kind of social structure was typical of hunter-gatherers in the past, it could go a long way to explaining why the human capacity for culture evolved".

Credit: 
University College London

Even damaged livers can handle life-saving medication

When you ingest a drug--whether over-the-counter Tylenol or medication prescribed by a doctor--your liver is your body's first responder. And just like other first responders, sometimes the liver gets hurt. Doctors used to make patients with drug-induced liver injury stop taking all their medications until the liver healed, but this could be dangerous. Now, researchers report in two recent papers that people with diabetes, hypertension and depression might be able to continue taking life-saving medications even while they heal from drug-induced liver injuries.

Drug-induced liver injury, when a person accidentally harms their liver by taking medications prescribed by a doctor (or occasionally over-the-counter drugs), affects about 40,000 people in the US every year, and almost 1 million people globally.

"Doctors give patients drugs to treat diseases. No one wants their liver damaged, but it happens all the time," says UConn pharmacologist and toxicologist Xiaobo Zhong. When a person takes a medication by mouth, it goes into their stomach and then to the intestines, where it is absorbed into the blood. This blood, in turn, passes first through the liver before reaching the rest of the body. The liver has enzymes that break down medicines. But different people naturally have more or less of these enzymes. Sometimes, what could be a safe and effective dose in one person is too much for someone else who has different enzyme levels. This is why some individuals are more vulnerable to liver damage, even when taking drugs just as a doctor prescribed.

There is no standard guidance for doctors when a patient gets drug-induced liver damage. Oftentimes they tell the person to stop taking all medications immediately and wait for their liver to recover. But that can take weeks or months.

"But if patients have chronic conditions such diabetes, hypertension, or depression, their conditions can run out of control," if they stop taking the medications, Zhong says. And that can be life threatening.

Zhong, together with UConn toxicologist José Manautou, graduate student Yifan Bao, and colleagues at the University of Michigan, the University of Pittsburgh, and Zhengzhou University in Henan, China, tested whether mice whose livers had been damaged by acetaminophen (the active ingredient in Tylenol) had lower levels of drug-metabolizing enzymes, called cytochrome P450 enzymes. They published their results on February 24 in Drug Metabolism and Disposition.

"Accidental drug-induced liver damage from acetominophen misuse is more common than people think, despite the efforts by the Food and Drug Administration to inform the public of this potential danger," says Manautou. Acetominophen toxicity involves certain P450 enzymes that the liver uses to process many other medicines, including those for diabetes, hypertension and depression.

Levels of P450 enzymes vary a lot from person to person. The team recently published another paper looking at P450 enzymes, this one in Molecular Pharmacology with graduate student Liming Chen as lead author. That paper found that the way a cell regulates specific P450 enzymes made mice more or less susceptible to liver damage from acetaminophen.

In the more recent paper in Drug Metabolism and Disposition, the team shows that levels of some P450 enzymes drop when the liver is damaged. That leaves people more susceptible to harms from drugs broken down by these enzymes. Now the researchers are investigating whether mice with drug-induced liver damage can safely take medications for diabetes, hypertension and depression. It looks like they can, as long as the doses are much smaller than normal. Because the damaged liver does not break down the medications as efficiently, they are just as effective at these lower doses.

The team still has to test whether these results hold in humans. They are currently looking to collaborate with local emergency room doctors who see many patients with drug-induced liver damage to better understand how their studies in rodents translate to humans.

Credit: 
University of Connecticut

Cartilage cells, chromosomes and DNA preserved in 75 million-year-old baby duck-billed dinosaur

image: Reconstruction of the nesting ground of Hypacrosaurus stebingeri from the Two Medicine formation of Montana.

In the center can be seen a deceased Hypacrosaurus nestling with the back of its skull embedded in shallow waters. A mourning adult is portrayed on the right. Art by Michael Rothman.

Image: 
©Science China Press

This study is led by Dr. Alida Bailleul (Institute of Vertebrate Paleontology and Paleoanthropology, the Chinese Academy of Sciences) and Dr. Mary Schweitzer (North Carolina State University, NC Museum of Natural Sciences, Lund University and Museum of the Rockies). Microscopic analyses of skull fragments from these nestling dinosaurs were conducted by Alida Bailleul at the Museum of the Rockies. In one fragment she noticed some exquisitely preserved cells within preserved calcified cartilage tissues on the edges of a bone. Two cartilage cells were still linked together by an intercellular bridge, morphologically consistent with the end of cell division. Internally, dark material resembling a cell nucleus was also visible. One cartilage cell preserved dark elongated structures morphologically consistent with chromosomes. "I couldn't believe it, my heart almost stopped beating," Bailleul says.

Bailleul and Schweitzer, together with lab director Wenxia Zheng, sought to determine whether original molecules were also preserved in this dinosaur cartilage. The team performed immunological and histochemical analyses on the skull of another nestling Hypacrosaurus from that same nesting ground in Schweitzer's North Carolina laboratory.

The team found that the organic matrix surrounding the fossilized cartilage cells reacted to antibodies of Collagen II, the dominant protein in cartilage in all vertebrates. "This immunological test supports the presence of remnants of original cartilaginous proteins in this dinosaur," Schweitzer says.

The researchers also isolated individual Hypacrosaurus cartilage cells and applied two DNA stains, DAPI (4′,6-diamidino-2-phenylindole) and PI (propidium iodide). These bind specifically to DNA fragments in extant material, and some of the isolated dinosaur cells showed internal, positive binding in the same pattern as seen in modern cells, suggesting some original dinosaur DNA is preserved.

"These new exciting results add to growing evidence that cells and some of their biomolecules can persist in deep-time. They suggest DNA can preserve for tens of millions of years, and we hope that this study will encourage scientists working on ancient DNA to push current limits and to use new methodology in order to reveal all the unknown molecular secrets that ancient tissues have" Bailleul says.

The possibility that DNA can survive for tens of millions of years is not currently recognized by the scientific community. Rather, based upon kinetic experiments and modelling, it is generally accepted that DNA persists less than 1 million years. These new data support other results that suggest DNA in some form can persist in Mesozoic tissues, and lay the foundation for future efforts to recover and sequence DNA from other very ancient fossils in laboratories worldwide.

Credit: 
Science China Press

Sugar gets the red light from consumers in new study

Researchers have found that sugar content is the most important factor for people when making healthy food choices - overriding fat and salt.

A team from the University of Nottingham's Division of Food, Nutrition and Dietetics carried out a choice-based survey with 858 participants using the traffic light labelling (TLL) system to select healthy foods. The results showed that, when deciding on the healthiness of items, sugar was by far the most important nutrient for participants.

Dietician and PhD researcher Ola Anabtawi led the research, published in the Journal of Human Nutrition and Dietetics. She explains: "When using the TLL, consumers often have to make trade-offs between undesirable attributes and decide which to use to guide them in making a choice. We wanted to find out whether it was fat, saturated fat, sugar or salt they most wanted to avoid, and see whether the traffic light labelling was influencing this decision."

Traffic light labelling was introduced to aid the selection of healthier choices with a simple red, amber and green colour-coding system. Supermarkets and food manufacturers use this on packaging to highlight nutritional information.

Participants in the study were shown three options of the same food item with different nutrition traffic light label combinations; this was repeated for three products -- prepacked sandwiches, breakfast cereals and biscuits. They were asked to select which they thought was the healthiest product.

Foods with a high sugar content were perceived to be by far the worst for health, and participants avoided these products, with excess fat, saturated fat and salt being less off-putting. Products flagged with a red label were also avoided much more, and red labels had a more significant impact on making a healthy choice than green labels.

Ola continues: "Despite the lack of knowledge about the recommendations underpinning the TLL criteria, participants' decisions about the healthiness of food products were significantly influenced by TLL information on the items' sugar content. TLL does, therefore, appear to guide consumers' beliefs in the absence of deep knowledge.

"The dominance of sugar in decision-making shows the labelling system is having an impact in the current public health climate. However, it is important to consider the effect of disregarding other nutrients (i.e. fat and salt) for people with different nutritional needs. We suggest raising awareness of all nutrients to help the public achieve a well-balanced diet."

Credit: 
University of Nottingham

Tracking communication networks and the diffusion of information

image: The validated retweeting network during Italian presidential elections from the session Statistical Physics and Twitter Analysis.

Image: 
Fabio Saracco

Please Note: The 2020 American Physical Society (APS) March Meeting that was to be held in Denver, Colorado from March 2 through March 6 has been canceled. The decision was made late Saturday (February 29), out of an abundance of caution and based on the latest scientific data available regarding the transmission of the coronavirus disease (COVID-19). See our official release on the cancelation for more details.

DENVER, COLO., FEBRUARY 28, 2020--Tracking communication and movement in this hyperconnected world can seem overwhelming. People (and things) share information through countless platforms. Networks online and off both impact how people live their lives and interact with their surroundings. Scientists will present their findings on the dynamics and structure of communication at the 2020 American Physical Society March Meeting.

Babies "Forage" for Words

Language is an example of a network, or a structure designed to spread information. Most people learn to communicate during their infancy and this process has been extensively documented. But how infants vocally experiment during the day-to-day is less studied.

"Our hypothesis was that vocal responses from adults would serve as rewards for infants," said Ritwika Vallomparambath Panikkassery Sugasree, a physicist at the University of California, Merced.

In a new study, she and her colleagues examined how vocal interactions between infants and adult caregivers influence infants' exploration of language development. The results suggest babies "vocally forage," or search for vocal sounds that have value. These daily interactions could contribute to their overall linguistic development.

Determining the Best Communication Structures

People use language to convey phrases and concepts. Each language comprises a network that facilitates the transfer of information. Examining the structure of these networks helps scientists identify the features that support the efficient communication of information.

"If you were going to design a language, you would want it to convey a lot of information when you're speaking, but also to do so efficiently so that people's internal ideas in their head of what the structure looks like is similar to what the structure actually is," said Lynn, a statistical physicist at the University of Pennsylvania.

He and his colleagues measured how much information networks could convey and how efficiently they communicate that information. The team found that most successful networks combined community structure with heterogeneous structure. In community structure, a specific word invokes a "community" or cluster of other words commonly associated with it, enabling people to anticipate what information comes next. In heterogeneous structure, there are hubs of hyperconnectivity that bridge the entire network, allowing networks to communicate much more information.
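
One common way to put a number on "how much information a network can convey" is the entropy rate of a random walk on it. The sketch below (Python with networkx) applies that standard measure to two toy networks, one clustered and one hub-dominated; it illustrates the general idea, not the specific measures used in this study, and both example networks are arbitrary choices.

```python
# Hedged illustration (not the authors' exact analysis): the entropy rate of a
# random walk -- the average surprise, in bits, of each step -- is a standard
# proxy for how much information a network structure can convey. We compare a
# clustered ("community") network with a hub-dominated ("heterogeneous") one.
import numpy as np
import networkx as nx

def random_walk_entropy_rate(graph):
    """Entropy rate (bits per step) of an unbiased random walk on an undirected graph."""
    degrees = np.array([d for _, d in graph.degree()], dtype=float)
    stationary = degrees / degrees.sum()     # stationary distribution is proportional to degree
    step_entropy = np.log2(degrees)          # uniform choice among a node's d neighbors
    return float((stationary * step_entropy).sum())

community_net = nx.connected_caveman_graph(5, 8)      # 5 tight clusters of 8 nodes
hub_net = nx.barabasi_albert_graph(40, 3, seed=1)     # heavy-tailed, hub-dominated network

print("community network entropy rate:", round(random_walk_entropy_rate(community_net), 2), "bits/step")
print("hub network entropy rate      :", round(random_walk_entropy_rate(hub_net), 2), "bits/step")
```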

The Importance of Accounting for Our Neighbors' Biases in Our Own Opinions

Effective use of information from one's social network is important, especially accounting for neighbors' biases, according to a new mathematical model by Zachary Kilpatrick, an applied mathematician at the University of Colorado, Boulder. He and his colleagues gave rational agents pieces of evidence in favor of choice A or choice B. Each agent could have an innate bias for choice A or choice B. Their preference determined how easily persuaded they were by pieces of evidence provided in favor of one choice or the other.

The results suggest that when an individual doesn't make a decision after receiving evidence, their neighbors see that lack of a decision as a reflection on the direction of this evidence. The researchers also found that, in large social networks, a single agent's decision can generate a chain reaction in favor of that decision.
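
The counter-intuitive point that a neighbor's silence is itself informative can be illustrated with a small Monte Carlo sketch in Python. The drift rates, thresholds and time window below are invented, and the evidence-accumulation setup is only loosely inspired by, not taken from, the study.

```python
# Hypothetical illustration (not the authors' exact model) of why a neighbor's
# *lack* of a decision carries information. Each agent accumulates noisy
# evidence for choice A (+) or B (-) until it hits a decision threshold; a
# biased agent has asymmetric thresholds. Simulating a biased neighbor under
# both hypotheses shows that "still undecided after t steps" is itself more
# likely under one hypothesis than the other.
import numpy as np

rng = np.random.default_rng(3)

def still_undecided(true_drift, upper, lower, t_steps, n_sims=20000):
    """Fraction of simulated agents that have hit neither threshold after t_steps."""
    x = np.zeros(n_sims)
    undecided = np.ones(n_sims, dtype=bool)
    for _ in range(t_steps):
        x[undecided] += true_drift + rng.normal(0.0, 1.0, undecided.sum())
        undecided &= (x < upper) & (x > lower)
    return undecided.mean()

# A neighbor biased toward A: little evidence is needed to declare A (+2),
# but a lot to declare B (-6). Invented drifts: +0.3 if A is true, -0.3 if B.
p_silent_given_A = still_undecided(+0.3, upper=2.0, lower=-6.0, t_steps=10)
p_silent_given_B = still_undecided(-0.3, upper=2.0, lower=-6.0, t_steps=10)

print("P(neighbor silent | A true) =", round(p_silent_given_A, 3))
print("P(neighbor silent | B true) =", round(p_silent_given_B, 3))
# A negative log-likelihood ratio means the silence favors B: an agent biased
# toward A would most likely have declared A by now if A were actually true.
print("evidence from silence (log-likelihood ratio) =",
      round(float(np.log(p_silent_given_A / p_silent_given_B)), 3))
```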

Social Media's Impact on Decision-Making

The spread of information, or misinformation, across social media networks can sway people's political decisions. By examining how information from the 2018 Italian election diffused through Twitter, Guido Caldarelli and his colleagues developed a method to anticipate how political groups develop and share their opinions on Twitter.

"There's been a lot of concern about how these social networks influence in some way the results of the elections and we wanted to study how the vast majority of people get a political idea from social networks about what's going on" said Caldarelli, a statistical physicist at IMT Alti Studi Lucca.

The team found that the right-wing party has a more compact Twitter presence, while the left is more fragmented in the Italian online political debate. Their results also suggest that the more unified a political group is online, the better able it is to maintain a coherent and efficient set of bots to promote its message.

Tracking Movement in Offline Networks

People also live in tangible, geographic networks. Residential populations have racial and spatial biases that influence how they choose the location where they live. Using a migration model, Yuchao Chen and his colleagues simulated how residential segregation appears in urban centers.

"By combining these two factors, we can predict the spatial distribution of all the [people] in the city for each block," said Chen, a physicist and demographer at Cornell University, "It's a completely new type of prediction that no one else has done before."

The team found that how people choose where to live is defined by two factors: social preference and spatial preference. Social preference is the desire to live next to people of the same race or ethnicity. Spatial preference is the desire to live in a housing situation with a specific price range, size, and other influences. Both preferences dictate how people migrate to areas in residential networks over time.

Connected Cars on Single-Lane Roads

Connected cars can communicate with outside systems, like navigation technology or smartphones. However, this connectivity also makes these cars easier to hack. Moreover, a large-scale hack of connected vehicles could have severe impacts on transportation. Skanda Vivek, a statistical physicist at Georgia Gwinnett College, and his colleagues developed a new model that suggests even a small hack can have big consequences for urban traffic.

The researchers studied scenarios where hacked vehicles act as obstacles that block roads. They found that when only 5% of roads were blocked in the model grid, the entire street network experienced a significant traffic slowdown. This is because the initial hack caused road blockages, which disrupted surrounding traffic patterns in a domino effect.
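
A crude way to see why blocking a small fraction of roads matters is to remove random edges from a grid street network and watch routes degrade. The sketch below (Python with networkx) does only that static calculation; it is not the authors' traffic model, which simulates actual flow and therefore shows much larger, cascading slowdowns.

```python
# Hypothetical sketch in the spirit of the scenario described above (not the
# authors' traffic simulation): randomly "block" a fraction of road segments
# in a grid street network and track two crude proxies for disruption -- how
# much longer the average route becomes, and whether parts of the city become
# unreachable from the main network.
import random
import networkx as nx

random.seed(0)
city = nx.grid_2d_graph(20, 20)                 # a 20 x 20 grid of intersections
roads = list(city.edges())

def route_stats(graph):
    """(average route length within the largest connected piece, its share of all intersections)."""
    giant_nodes = max(nx.connected_components(graph), key=len)
    giant = graph.subgraph(giant_nodes)
    return nx.average_shortest_path_length(giant), len(giant_nodes) / graph.number_of_nodes()

base_len, _ = route_stats(city)
for blocked_fraction in [0.05, 0.10, 0.20, 0.30]:
    g = city.copy()
    g.remove_edges_from(random.sample(roads, int(blocked_fraction * len(roads))))
    avg_len, reachable = route_stats(g)
    print(f"{blocked_fraction:.0%} of roads blocked: average route "
          f"{(avg_len / base_len - 1) * 100:+.1f}% longer, "
          f"{reachable:.0%} of intersections still reachable from the main network")
```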

"To block a major portion of a city's roads, a hacker doesn't need to hack vehicles on all of those roads," said Vivek, "Even a lower fraction of hacked vehicles that lead to a small fraction of roads being blocked eventually cascades to catastrophic disruption of city wide traffic."

Credit: 
American Physical Society

New study explains why superconductivity takes place in graphene

Graphene, a single sheet of carbon atoms, has many extreme electrical and mechanical properties. Two years ago, researchers showed how two sheets laid on top of each other and twisted at just the right angle can become superconducting, so that the material loses its electrical resistivity. New work explains why this superconductivity happens at a surprisingly high temperature.

Researchers at Aalto University and the University of Jyväskylä showed that graphene can be a superconductor at a much higher temperature than expected, due to a subtle quantum mechanical effect of graphene's electrons. The results were published in Physical Review B. The findings were highlighted in a Physics Viewpoint by the American Physical Society and look set to spark lively discussion in the physics community.

The discovery of the superconducting state in twisted bilayer graphene was selected as the Physics Breakthrough of the Year 2018 by Physics World magazine, and it spurred an intense debate among physicists about the origin of superconductivity in graphene. Although superconductivity was found only at a few degrees above absolute zero, uncovering its origin could help in understanding high-temperature superconductors and allow us to produce superconductors that operate near room temperature. Such a discovery has been considered one of the "holy grails" of physics, as it would allow operating computers with radically smaller energy consumption than today.

The new work came from a collaboration between Päivi Törmä's group at Aalto University and Tero Heikkilä's group at the University of Jyväskylä. Both have studied the types of unusual superconductivity most likely found in graphene for several years.

"The geometric effect of the wave functions on superconductivity was discovered and studied in my group in several model systems. In this project it was exciting to see how these studies link to real materials", says the main author of the work, Aleksi Julku from Aalto University. "Besides showing the relevance of the geometric effect of the wave functions, our theory also predicts a number of observations that the experimentalists can check", explains Teemu Peltonen from the University of Jyväskylä.

Credit: 
Aalto University