Tech

Ancient lake contributed to past San Andreas fault ruptures

image: San Andreas fault area

Image: 
Rebecca Dzombak

Boulder, Colorado, USA: The San Andreas fault, which runs along the western coast of North America and crosses dense population centers like Los Angeles, California, is one of the continent's most-studied faults because of the hazard it poses. Based on its roughly 150-year recurrence interval for magnitude 7.5 earthquakes and the fact that more than 300 years have passed since the last one, the southern San Andreas fault has long been called "overdue" for such an earthquake. For decades, geologists have wondered why it has been so long since a major rupture occurred. Now, some geophysicists think the "earthquake drought" could be partially explained by lakes -- or a lack thereof.

Today, at the Geological Society of America's 2020 Annual Meeting, Ph.D. student Ryley Hill will present new work using geophysical modeling to quantify how the presence of a large lake overlying the fault could have affected rupture timing on the southern San Andreas in the past. Hundreds of years ago, a giant lake -- Lake Cahuilla -- in southern California and northern Mexico covered swathes of the Mexicali, Imperial, and Coachella Valleys, through which the southern San Andreas cuts. The lake was an important site for multiple Native American populations in the area, as evidenced by archaeological remains of fish traps and campsites. It has been slowly drying out since its most recent high water mark (between 1000 and 1500 CE). With the lake over the San Andreas now dried up and the weight of its water removed, could that help explain why the fault is in an earthquake drought?

Some researchers have already found a correlation between high water levels on Lake Cahuilla and fault ruptures by studying a 1,000-year record of earthquakes, written in disrupted layers of soils that are exposed in deeply dug trenches in the Coachella Valley. Hill's research builds on an existing body of modeling but expands to incorporate this unique 1,000-year record and focuses on improving one key factor: the complexity of water pressures in rocks under the lake.

Hill is exploring the effects of a lake on a fault's rupture timing, known as lake loading. Lake loading on a fault is the cumulative effect of two forces: the weight of the lake's water and the way in which that water creeps, or diffuses, into the ground under the lake. The weight of the lake's water pressing down on the ground increases the stress put on the rocks underneath it, weakening them -- including any faults that are present. The deeper the lake, the more stress those rocks are under, and the more likely the fault is to slip.

What's more complicated is how the pressure of water in empty spaces in soils and bedrock (porewater) changes over both time and space. "It's not that [water] lubricates the fault," Hill explains. It's more about one force balancing another, making it easier or harder for the fault to give way. "Imagine your hands stuck together, pressing in. If you try to slip them side by side, they don't want to slip very easily. But if you imagine water between them, there's a pressure that pushes [your hands] out -- that's basically reducing the stress [on your hands], and they slip really easily." Together, these two forces create an overall amount of stress on the fault. Once that stress builds up to a critical threshold, the fault ruptures, and Los Angeles experiences "the Big One."
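
The interplay Hill describes maps onto the standard Coulomb failure framework, in which a fault fails when shear stress overcomes friction acting on the effective normal stress (normal stress minus pore pressure). The short Python sketch below only illustrates that bookkeeping, with invented numbers rather than values from Hill's model of the southern San Andreas.

```python
# Illustrative only: a toy Coulomb-failure calculation showing how a lake's weight
# and the pore pressure it drives can trade off on a fault. All numbers are invented
# for demonstration and are NOT taken from Hill's model of the southern San Andreas.

def coulomb_failure_stress(shear_stress, normal_stress, pore_pressure, friction=0.6):
    """Coulomb failure stress: tau - mu * (sigma_n - p), in MPa.
    Higher (less negative) values mean the fault is closer to failure."""
    return shear_stress - friction * (normal_stress - pore_pressure)

# Tectonic loading alone (stresses in MPa).
no_lake = coulomb_failure_stress(shear_stress=10.0, normal_stress=20.0, pore_pressure=0.0)

# Hypothetical lake: only a fraction of its water load is felt as extra fault-normal
# stress, while water slowly diffusing downward raises the pore pressure.
added_normal = 0.3   # invented coupling of the surface load onto the fault plane
added_pore = 0.8     # invented pore-pressure rise after diffusion

with_lake = coulomb_failure_stress(10.0, 20.0 + added_normal, added_pore)

print(f"Coulomb failure stress, no lake:   {no_lake:.2f} MPa")
print(f"Coulomb failure stress, with lake: {with_lake:.2f} MPa")
# Here the pore-pressure rise outweighs the extra clamping, nudging the fault
# slightly closer to its failure threshold.
```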

Where previous modeling work focused on a fully drained state, with all of the lake water having diffused straight down (and at a single time), Hill's model is more complex, incorporating different levels of porewater pressure in the sediments and rocks underneath the lake and allowing pore pressures to be directly affected by the stresses from the water mass. That, in turn, affects the overall fault behavior.

While the work is ongoing, Hill says they've found two key responses. When lake water is at its highest, it increases the stresses enough that the fault reaches its critical stress point just over 25% sooner. "The lake could modulate this [fault slip] rate just a little bit," Hill says. "That's what we think maybe tipped the scales to cause the [fault] failure."

The overall effect of Lake Cahuilla drying up makes it harder for a fault to rupture in his model, pointing to its potential relevance for the recent quiet on the fault. But, Hill stresses, this influence pales in comparison to continent-scale tectonic forces. "As pore pressures decrease, technically, the bedrock gets stronger," he says. "But how strong it's getting is all relevant to tectonically driven slip rates. They're much, much stronger."

Credit: 
Geological Society of America

FSU researchers investigate material properties for longer-lasting, more efficient solar cells

image: Former Florida State University postdoctoral researcher Sarah Wieghold, left, and FSU Assistant Professor of Chemistry and Biochemistry Lea Nienhaus. Their research is helping to understand the fundamental processes in a material known as perovskites, work that could lead to more efficient solar cells that also do a better job of resisting degradation.

Image: 
FSU Photography Services

The designers of solar cells know their creations must contend with a wide range of temperatures and all sorts of weather conditions -- conditions that can impact their efficiency and useful lifetime.

Florida State University Assistant Professor of Chemistry and Biochemistry Lea Nienhaus and former FSU postdoctoral researcher Sarah Wieghold are helping to understand the fundamental processes in a material known as perovskites, work that could lead to more efficient solar cells that also do a better job of resisting degradation. They found that small tweaks to the chemical makeup of the materials, as well as the magnitude of the electrical field they are exposed to, can greatly affect overall material stability.

Their latest work is published in a pair of studies in the Journal of Materials Chemistry C and the Journal of Applied Physics.

Their research is focused on improving the potential of perovskites, a material with a crystal structure based on positively charged lead ions known as cations and negatively charged halide anions. In a cubic perovskite crystal structure, the octahedra formed by the lead and halide ions are surrounded by additional positively charged cations.

The first perovskite solar cells, which were developed in 2006, had a solar energy power conversion efficiency of about 3 percent, but cells developed in 2020 have a power conversion efficiency of more than 25 percent. That rapid increase in efficiency makes them a promising material for further research, but they have drawbacks for commercial viability, such as a tendency to degrade quickly.

"How can we make perovskites more stable under real-world conditions in which they'll be used?" Nienhaus said. "What is causing the degradation? That's what we're trying to understand. Perovskites that don't degrade quickly could be a valuable tool for obtaining more energy from solar cells."

Perovskites are a so-called "soft material," despite the ionic bonds of the crystal lattice that make up their structure. The halides or cations in the material can move through that lattice, which may increase their rate of degradation, resulting in a lack of long-term stability.

In the Journal of Materials Chemistry C paper, the researchers investigated the combined influence of light and elevated temperature on the performance of mixed-cation mixed-halide perovskites.

They found that adding a small amount of the element cesium to the perovskite film increases the stability of the material under light and elevated temperatures. Adding rubidium, on the other hand, led to worse performance.

"We found that depending on the choice of the cation, two pathways of degradation can be observed in these materials, which we then correlated to a decrease in performance," said Wieghold, now an assistant scientist at the Center for Nanoscale Materials and the Advanced Photon Source at Argonne National Laboratory. "We also showed that the addition of cesium increased the film stability under our testing conditions, which are very promising results."

They also found that a decrease in film performance for the less stable perovskite mixtures was correlated with the formation of the compound lead bromide/iodide and an increase in electron-phonon interactions. The formation of lead bromide/iodide is due to the unwanted degradation mechanism, which needs to be avoided to achieve long-term stability and performance of these perovskite solar cells.

In the Journal of Applied Physics paper, they explored the link between voltage and the performance of perovskite materials. This showed that the ion movement in the material changes the underlying electrical response, which will be a critical factor in the photovoltaic performance.

"Perovskites present a great opportunity for the future of solar cells, and it's exciting to help move this science forward," Nienhaus said.

Credit: 
Florida State University

Risk score predicts prognosis of outpatients with COVID-19

BOSTON - A new artificial intelligence-based score considers multiple factors to predict the prognosis of individual patients with COVID-19 seen at urgent care clinics or emergency departments. The tool, which was created by investigators at Massachusetts General Hospital (MGH), can be used to rapidly and automatically determine which patients are most likely to develop complications and need to be hospitalized.

The impetus for the study came early in the U.S. epidemic, when Massachusetts was seeing a high volume of urgent care visits and hospital admissions. While working as an infectious diseases physician and as part of the MGH Biothreats team, Gregory Robbins, MD, recognized the need for a more sophisticated method to identify outpatients at greatest risk of negative outcomes.

As described in The Journal of Infectious Diseases, a team of experts in neurology, infectious disease, critical care, radiology, pathology, emergency medicine and machine learning designed the COVID-19 Acuity Score (CoVA) based on information from 9,381 adult outpatients seen in MGH's respiratory illness clinics and emergency department between March 7 and May 2, 2020. "The large sample size helped ensure that the machine learning model was able to learn which of the many different pieces of data available allow reliable predictions about the course of COVID-19 infection," said M. Brandon Westover, MD, PhD, an investigator in the Department of Neurology and director of Data Science at the MGH McCance Center for Brain Health. Westover is one of three co-senior authors of the study, along with Robbins and Shibani Mukerji, MD, PhD, associate director of MGH's Neuro-Infectious Diseases Unit.

CoVA was then tested in another 2,205 patients seen between May 3 and May 14. "Testing the model prospectively helped us to verify that the CoVA score actually works when it sees 'new' patients, in the real world," said first author Haoqi Sun, PhD, an investigator in the Department of Neurology and a research faculty member in the MGH Clinical Data Animation Center (CDAC). In this prospective validation group, 26.1 percent, 6.3 percent and 0.5 percent of patients experienced hospitalization, critical illness or death, respectively, within seven days. CoVA demonstrated excellent performance in predicting which patients would fall into these categories.

Among 30 predictors--which included demographics like age and gender, COVID-19 testing status, vital signs, medical history and chest X-ray results (when available)--the top five were age, diastolic blood pressure, blood oxygen saturation, COVID-19 testing status and respiratory rate.
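
The study itself does not share code, but the general workflow it describes (training a model on tabular outpatient data, then checking it on a later, prospectively collected cohort) can be sketched in a few lines. In the hypothetical example below, the file names, column names and the choice of a gradient-boosting classifier are illustrative stand-ins, not the actual CoVA implementation.

```python
# Hypothetical sketch in the spirit of the CoVA workflow: fit a risk model on one
# outpatient cohort, then validate it prospectively on a later cohort. File names,
# column names and the model choice are illustrative stand-ins, not the real CoVA.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

FEATURES = ["age", "diastolic_bp", "oxygen_saturation", "covid_test_positive", "respiratory_rate"]
OUTCOME = "hospitalized_within_7d"

train = pd.read_csv("outpatients_mar7_to_may2.csv")   # development cohort (hypothetical file)
test = pd.read_csv("outpatients_may3_to_may14.csv")   # prospective validation cohort

model = GradientBoostingClassifier(random_state=0)
model.fit(train[FEATURES], train[OUTCOME])

# Score the later cohort and measure discrimination on patients the model never saw.
risk = model.predict_proba(test[FEATURES])[:, 1]
print("Prospective validation AUC:", round(roc_auc_score(test[OUTCOME], risk), 3))
```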

"While several other groups have developed risk scores for complications of COVID-19, ours is unique in being based on such a large patient sample, being prospectively validated, and in being specifically designed for use in the outpatient setting, rather than for patients who are already hospitalized," Mukerji said. "CoVA is designed so that automated scoring could be incorporated into electronic medical record systems. We hope that it will be useful in case of future COVID-19 surges, when rapid clinical assessments may be critical."

Credit: 
Massachusetts General Hospital

'What wound did ever heal but by degrees?' Delayed wound healing due to gene mutations

"Wound healing is one of the most complex biological processes," write Professor Kazumitsu Sugiura and Dr Kenta Saito from Fujita Health University, Japan, in their article recently published in Nature's Scientific Reports. As countless researchers in the field have noted, several automated coordinated biological activities take place during wound healing, from stopping the bleeding to cleaning out pathogens and debris to growing back and strengthening tissue.

One critical step in wound healing is the infiltration of inflammatory cells into the vicinity of a wound in the "cleaning out" phase. But this is something of a double-edged sword: either excessive or inadequate infiltration can delay wound healing.

This reality in part led Prof Sugiura and Dr Saito to hypothesize that the anti-inflammatory mediator IL-36Ra could be playing an important role in wound healing. IL-36Ra is encoded by the IL36RN gene. Mutations in this gene have been linked to various inflammatory skin disorders such as psoriasis. In Japan, approximately 2% of the population have two mutations of the IL36RN gene and experts have conjectured that this could be behind several skin disorders.

Previous studies involving mice with these mutations (Il36rn−/− mice) have revealed impaired wound healing. However, the exact role that IL-36Ra plays in the wound healing process remains unknown. To find out, the team led by Prof Sugiura and Dr Saito studied the healing of excisional wounds in 8-14-week-old Il36rn−/− mice and their wild-type littermates.

When the researchers examined the animals at 3- and 7-day postinjury timepoints, they found that open wound areas were larger in the Il36rn−/− mice than in the wild-type controls. The Il36rn−/− mice also exhibited diminished recovery of epithelial tissue--or the outer layer of the skin--and excessive formation of granulation tissue, the connective tissue and blood vessels that grow to fill wounds. Interestingly, examinations of the Il36rn−/− mice at the 3-day postinjury timepoint also revealed greater infiltration of proinflammatory neutrophils and macrophages (another type of immune cell involved in identifying and engulfing pathogens and dead cells) into the wound areas and greater gene expression for proinflammatory cytokines--proteins that regulate inflammation, among other things.

These results provide evidence for the deleterious effects of IL-36Ra deficiencies on wound healing, but they leave open the question of how clinicians can counter those effects.

The team led by Prof Sugiura and Dr Saito answers this question as well in their paper. Based on the findings of a recent study showing that toll-like receptor-4 (TLR4), a protein responsible for signaling cytokine production, plays an essential role in early wound repair, the researchers hypothesized that treatment with the TLR4 inhibitor TAK-242 would normalize wound healing in Il36rn−/− mice. As expected, intraperitoneal TAK-242 injections administered shortly after injury eliminated the delays in wound healing observed at the 3- and 7-day postinjury timepoints.

This is preliminary evidence for the utility of TLR4 inhibitors as a way to promote wound healing in people with IL-36Ra deficiencies. Of course, these findings should be approached with some caution due to the many unanswered questions concerning the physiology of inflammation in wound healing. Further, differences between murine and human wound healing mechanisms may limit the interspecies translatability of these findings. Nonetheless, these findings point to potential directions for future clinical research. As Prof Sugiura and Dr Saito note: "Our observations concerning TAK-242 highlight TLR4 as a novel therapeutic target for clinical research related to neutrophilic skin diseases such as pyoderma gangrenosum." Although they did not directly experiment with the compensatory administration of IL-36Ra itself, they also speculate that "IL-36Ra can be used as a treatment for various inflammatory skin diseases such as psoriasis and atopic dermatitis. This could be the beginning of a new direction of research on wound healing!"

Credit: 
Fujita Health University

How to prevent the spread of tumor cells via the lymph vessels

What role do the lymphatic vessels play in the metastasis of cancer cells? Scientists from the German Cancer Research Center and the Mannheim Medical Faculty of the University of Heidelberg developed a method to investigate this question in mice. The aim of the work was to identify new ways to block the dangerous colonization and spread of tumor cells. The researchers discovered that an antibody against a signaling molecule of the vascular system causes the lymphatic vessels in the tumor to die, suppresses metastasis and thus prolongs the survival of the mice. Based on these findings, approaches may be developed to prevent the dangerous spread of tumor cells. The results have now been published in the journal Cancer Discovery.

Just like healthy tissue, tumors are supplied by two different vascular systems. In addition to blood vessels that supply oxygen and nutrients, the lymph vessels are responsible for transporting cells of the immune system and tissue fluid. The ability of cancer cells to spread through both pathways in the body and form daughter tumors, so-called metastases, has been known for a long time. In this work, the importance of the route via lymphatic vessels and the biological mechanisms involved were investigated for the first time in mice.

Until now, it has been difficult to study the complicated architecture of a tumor and its spread in a living organism. The Heidelberg and Mannheim research team led by Hellmut Augustin has now succeeded in developing a suitable model system, as Nicolas Gengenbacher, first author of the current publication, reports: "The key to this was a direct transplantation of tumor tissue from one mouse to another without prior cell culture. In this model, the natural tissue structure was preserved and the cancerous tumors were able to form functional lymph vessels that were connected to the lymphatic system - a prerequisite for lymphogenic metastasis".

Using these animals, the researchers were able to confirm that cancer cells often migrate via the lymph vessels first into nearby lymph nodes and from there continue to metastasize into vital organs. The surgical removal of the primary tumor enabled the researchers to simulate a disease situation that corresponded to that of a cancer patient after surgery: Daughter tumors and not the primary tumor became crucial for survival.

In their search for ways to prevent the development of metastases, the research team focused on the cells that line the lymph vessels from the inside, the so-called lymph endothelial cells. Endothelial cells control many important properties of the blood and lymph vessels and produce numerous signaling molecules and growth factors. The researchers found that the messenger substance angiopoietin-2 ensures the survival of lymph endothelial cells in tumors. An antibody that blocks angiopoietin-2 caused the lymph vessels in the tumor to selectively die. This cut off the transport routes for detaching cancer cells and prevented them from spreading to nearby lymph nodes. As a result, fewer daughter tumors formed in distant organs and the mice survived significantly longer.

Malignant cells often remain in the body after cancer surgery and can be the starting point for a relapse of the disease. "Surprisingly, we were able to effectively prevent the spread of tumors in the mice even when we blocked angiopoietin-2 only shortly before tumor surgery," says Hellmut Augustin, head of the study. "However, we have only been able to show that angiopoietin-2 blockade has a therapeutic effect within this treatment window in experimental animals. Whether this approach also helps in humans against the spread of tumors must be clarified in further investigations."

Credit: 
German Cancer Research Center (Deutsches Krebsforschungszentrum, DKFZ)

Robots help to answer age-old question of why fish school

image: Robot-like fish provide insight into how fish can save energy by swimming in schools.

Image: 
Dr Liang Li, Max Planck Institute of Animal Behavior (MPI-AB)

A fish school is a striking demonstration of synchronicity. Yet centuries of study have left a basic question unanswered: do fish save energy by swimming in schools? Now, scientists from the Max Planck Institute of Animal Behavior (MPI-AB), the University of Konstanz, and Peking University have provided an answer that has long been suspected but never conclusively supported by experiments: yes.

Using biomimetic fish-like robots, the researchers show that fish could take advantage of the swirls of water generated by those in front by applying a simple behavioural rule. By adjusting their tail beat relative to near neighbours - a strategy called vortex phase matching - robots were shown to benefit hydrodynamically from a near neighbour no matter where they were positioned with respect to that neighbour. The previously unknown rule, revealed by the robots, was subsequently shown to be the strategy used by free-swimming fish. The study is reported on 26 October 2020 in Nature Communications.

"Fish schools are highly dynamic, social systems," says senior author Iain Couzin, Director of the MPI-AB who also co-directs the Cluster of Excellence 'Centre for the Advanced Study of Collective Behaviour' at the University of Konstanz. "Our results provide an explanation for how fish can profit from the vortices generated by near neighbours without having to keep fixed distances from each other."

Robotic solution

Answering the question of whether or not fish can save energy by swimming with others requires measuring their energy expenditure. Accurately doing so in free swimming fish has so far not been possible, and so past studies have sought to answer this question instead through theoretical models and predictions.

The new study, however, has overcome this barrier to experimental testing. The researchers developed a 3D robotic fish that has a soft tail fin and swims with an undulating motion that accurately mimics the movement of a real fish. But unlike their live counterparts, the robots allow for direct measurement of the power consumption associated with swimming together versus alone.

"We developed a biomimetic robot to solve the fundamental problem of finding out how much energy is used in swimming," says Liang Li, a postdoctoral fellow at the MPI-AB and first author on the study. "If we then have multiple robots interacting, we gain an efficient way to ask how different strategies of swimming together impact the costs of locomotion."

A simple rule for swimming in a school

The researchers studied robotic fish swimming in pairs versus alone. Running over 10,000 trials, they tested follower fish in every possible position relative to leaders - and then compared energy use with solo swimming.

The results showed a clear difference in energy consumption for robots that swam alone versus those that swam in pairs. The cause of this, they discovered, is the way that fish in front influence the hydrodynamics of fish behind. The energy consumed by a follower fish is determined by two factors: its distance behind the leader and the relative timing of the tail beats of the follower with respect to that of the leader. In other words, it matters whether the follower fish is positioned close to the front or far behind the leader and how the follower adjusts its tail beats to exploit the vortices created by the leader.

To save energy, it turns out that the secret is in synchronisation. That is, follower fish must match their tail beat to that of the leader with a specific time lag based on their spatial position - a strategy the researchers called "vortex phase matching." When followers are beside leader fish, the most energetically effective thing to do is to synchronise tail beats with the leader. But as followers fall behind, they should go out of synch, lagging more and more behind the leader's tail beat.
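
That rule (beat in phase when alongside the leader, and add tail-beat lag in proportion to how far behind you swim) is simple enough to write down directly. The sketch below is a simplified illustration of the relationship, not the researchers' actual robot controller, and the scaling constant is invented.

```python
# Simplified illustration of "vortex phase matching": the follower adds tail-beat
# phase lag in proportion to how far behind the leader it swims. The scaling
# constant is invented; this is not the authors' actual robot controller.
import math

def follower_phase_lag(front_back_offset, body_length=1.0, lag_per_body_length=0.5):
    """Phase lag (radians) the follower adds to its tail beat.

    Zero offset (swimming alongside the leader) means beating in phase; the lag
    grows linearly as the follower falls behind, keeping its body timed to the
    vortices arriving from the leader's wake.
    """
    return 2 * math.pi * lag_per_body_length * (front_back_offset / body_length)

for offset in [0.0, 0.25, 0.5, 1.0]:  # front-back offsets in body lengths
    print(f"offset {offset:.2f} BL -> phase lag {follower_phase_lag(offset):.2f} rad")
```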

Visualising vortices

In order to visualise the hydrodynamics, researchers emitted tiny hydrogen bubbles into the water and imaged them with a laser - a technique that made the vortices created by the swimming motion of the robots visible. This showed that vortices are shed by the leader fish and move downstream. It also showed that robots could utilise these vortices in various ways. "It's not just about saving energy. By changing the way they synchronise, followers can also use the vortices shed by other fish to generate thrust and help them accelerate," says co-author Mate Nagy, head of the Collective Behaviour 'Lendület' Research Group in the Hungarian Academy of Sciences and Eötvös University, who conducted the work when he was a postdoctoral fellow at the MPI-AB.

The result in real fish

But do real fish use the strategy of vortex phase matching to save energy? To answer that, the researchers created a simple hydrodynamic model that predicts what real fish should do if they are using vortex phase matching. They used AI-assisted analysis of body posture of goldfish swimming together and found, indeed, that the strategy is being used in nature.

Says Couzin: "We discovered a simple rule for synchronising with neighbours that allows followers to continuously exploit socially-generated vortices. But before our robotic experiments, we simply didn't know what to look for, and so this rule has been hidden in plain sight."

Credit: 
University of Konstanz

Research provides a new understanding of how a model insect species sees color

image: Drosophila melanogaster under green and red fluorescence used as a marker to indicate the presence of inserted genes.

Image: 
Camilla Sharkey

Through an effort to characterize the color receptors in the eyes of the fruit fly Drosophila melanogaster, University of Minnesota researchers discovered that the spectrum of light it can see deviates significantly from what was previously recorded.

"The fruit fly has been, and continues to be, critical in helping scientists understand genetics, neuroscience, cancer and other areas of study across the sciences," said Camilla Sharkey, a post-doctoral researcher in the College of Biological Sciences' Wardill Lab. "Furthering our understanding of how the eye of the fruit fly detects different wavelengths of light will aid scientists in their research around color reception and neural processing."

The research, led by U of M Assistant Professor Trevor Wardill, is published in Scientific Reports and is among the first of its kind in two decades to examine Drosophila photoreceptor sensitivity. Through their genetic work, and with the aid of technological advancements, researchers were able to target specific photoreceptors and examine their sensitivity to different wavelengths of light (or hue).

The study found:

all receptors -- those processing UV, blue and green -- had significant shifts in light sensitivities compared to what was previously known;

the most significant shift occurred in the green photoreceptor, with its light sensitivity shifting by 92 nanometers (nm), from 508 nm to 600 nm, equivalent to being most sensitive to orange light rather than green;

a yellow carotenoid filter in the eye (derived from Vitamin A) contributes to this shift; and

the red pigmented eyes of fruit flies have long-wavelength light leakage between photoreceptors, which could negatively impact a fly's vision.

Researchers discovered this by reducing carotenoids in the diets of the flies with red eyes and by testing flies with reduced eye pigmentation. While fly species with black eyes, such as house flies, are able to better isolate the long-wavelength light for each pixel of their vision, flies with red eyes, such as fruit flies, likely suffer from a degraded visual image.

"The carotenoid filter, which absorbs light on the blue and violet light spectrum, also has a secondary effect," said Sharkey. "It sharpens ultraviolet light photoreceptors, providing the flies better light wavelength discrimination, and -- as a result -- better color vision."

Credit: 
University of Minnesota

Uncertainties key to balancing flood risk and cost in elevating houses

image: Sultan, WA, November 11, 2006 -- Mitch McKron pumps water from the basement of his mitigated home, which he had just raised in time to prevent it from flooding. Record rains swelled many western Washington rivers, which breached their levees and flooded roads, property and towns.

Image: 
MARVIN NAUMAN/FEMA

What do you have on your 2020 Bingo Card? Wildfire, heat wave, global pandemic, or flooding? If it's flooding, then it's a good bet it will happen in many places in the U.S. sometime during the year.

People who live in areas designated as river flood zones often seek to raise their homes. Now a team of Penn State researchers suggests that considering uncertainties can improve decisions.

"Many houses located along rivers in Pennsylvania are in danger of being flooded," said Klaus Keller, professor of geosciences. "Some houses are elevated high, some to intermediate levels, and some not at all. Why is this?"

People in river flood zones are looking for good strategies on how high to elevate their houses. The Federal Emergency Management Agency -- FEMA -- recommends elevating houses to the height of a flood that has a 1% chance of occurring in a given year, also known as the 100-year flood, plus at least one foot. This is the minimum elevation for which federal funding may be available. The researchers investigated whether they might improve on this suggested elevation given uncertainties surrounding, for example, future flooding, the future value of money and the vulnerability of a house to flooding. They reported their results today (Oct. 26) in Nature Communications.

"Looking at the range of possible outcomes can help to improve decisions on how high to elevate a house," said Mahkameh Zarekarizi, former Penn State postdoctoral fellow, now a hydroclimate scientist at Jupiter Intelligence. "It is arguably better to fail in a computer model than in real life. In the computer, we can look at many possible future outcomes of flooding, costs and other uncertainties."

Decision makers may want to reduce the probability of being flooded and reduce the net costs.

"The decision makers may benefit from a map that shows the trade-offs between these goals," said Vivek Srikrishnan, assistant research professor, Earth and Environmental Systems Institute. "Home owners may want to see, for example, the total net price of reducing the risk of being flooded. A single recommendation such as the 100-year flood height plus at least one foot is silent on this question."

Credit: 
Penn State

Kid influencers are promoting junk food brands on YouTube -- garnering more than a billion views

Kids with wildly popular YouTube channels are frequently promoting unhealthy food and drinks in their videos, warn researchers at NYU School of Global Public Health and NYU Grossman School of Medicine in a new study published in the journal Pediatrics.

Food and beverage companies spend $1.8 billion a year marketing their products to young people. Although television advertising is a major source of food marketing, companies have dramatically increased online advertising in response to consumers' growing social media use.

"Kids already see several thousand food commercials on television every year, and adding these YouTube videos on top of it may make it even more difficult for parents and children to maintain a healthy diet," said Marie Bragg, assistant professor of public health nutrition at NYU School of Global Public Health and assistant professor in the Department of Population Health at NYU Langone. "We need a digital media environment that supports healthy eating instead of discouraging it."

YouTube is the second most visited website in the world and is a popular destination for kids seeking entertainment. More than 80 percent of parents with a child younger than 12 years old allow their child to watch YouTube, and 35 percent of parents report that their kid watches YouTube regularly.

"The allure of YouTube may be especially strong in 2020 as many parents are working remotely and have to juggle the challenging task of having young kids at home because of COVID-19," said Bragg, the study's senior author.

When finding videos for young children to watch, millions of parents turn to videos of "kid influencers," or children whose parents film them doing activities such as science experiments, playing with toys, or celebrating their birthdays. The growing popularity of these YouTube videos has caught the attention of companies, which advertise or sponsor posts to promote their products before or during videos. In fact, the highest-paid YouTube influencer of the past two years was an 8-year-old who earned $26 million last year.

"Parents may not realize that kid influencers are often paid by food companies to promote unhealthy food and beverages in their videos. Our study is the first to quantify the extent to which junk food product placements appear in YouTube videos from kid influencers," said Bragg.

Bragg and her colleagues identified the five most popular kid influencers on YouTube in 2019--whose ages ranged from 3 to 14 years old--and analyzed their most-watched videos. Focusing on a sample of 418 YouTube videos, they recorded whether food or drinks were shown in the videos, noted which items and brands appeared, and assessed their nutritional quality.

The researchers found that nearly half of the most-popular videos from kid influencers (42.8 percent) promoted food and drinks. More than 90 percent of the products shown were unhealthy branded food, drinks, or fast food toys, with fast food as the most frequently featured junk food, followed by candy and soda. Only a few videos featured unhealthy unbranded items like hot dogs (4 percent), healthy unbranded items like fruit (3 percent), and healthy branded items like yogurt brands (2 percent).

The videos featuring junk food product placements were viewed more than 1 billion times--a staggering level of exposure for food and beverage companies.

"It was concerning to see that kid influencers are promoting a high volume of junk food in their YouTube videos, and that those videos are generating enormous amounts of screen time for these unhealthy products," said Bragg.

While the researchers do not know which food and drink product placements were paid endorsements, they find these videos problematic for public health because they enable food companies to directly--but subtly--promote unhealthy foods to young children and their parents.

"It's a perfect storm for encouraging poor nutrition--research shows that people trust influencers because they appear to be 'everyday people,' and when you see these kid influencers eating certain foods, it doesn't necessarily look like advertising. But it is advertising, and numerous studies have shown that children who see food ads consume more calories than children who see non-food ads, which is why the National Academy of Medicine and World Health Organization identify food marketing as a major driver of childhood obesity," said Bragg.

The researchers encourage federal and state regulators to strengthen and enforce regulations of junk food advertising by kid influencers.

"We hope that the results of this study encourage the Federal Trade Commission and state attorneys general to focus on this issue and identify strategies to protect children and public health," said study co-author Jennifer Pomeranz, assistant professor of public health policy and management at NYU School of Global Public Health.

Credit: 
New York University

People with type 2 diabetes need not avoid eating potatoes based on glycemic index

People with type 2 diabetes (T2D) are frequently told to avoid eating potatoes, and other high glycemic index (GI) foods, because of the longstanding perception that these foods make it difficult to control blood sugar levels. This is especially problematic during the night, when blood sugar tends to spike -- a phenomenon that has been associated with cardiovascular disease and endothelial dysfunction. However, for the first time, a rigorously controlled clinical trial, including 24 adults with T2D, demonstrates that GI is not an accurate surrogate for an individual's glycemic response (GR) to a food consumed as part of an evening meal. Specifically, the findings published in Clinical Nutrition show that participants had a better 'nocturnal' GR when they ate a mixed meal with skinless white potatoes compared to an isoenergetic and macronutrient-matched mixed meal that included a low GI carbohydrate food -- basmati rice.

"Despite its frequent use among nutrition researchers, GI is not an appropriate tool for understanding how a meal impacts glycemic control; it is a very specific measurement for foods consumed in isolation, typically conducted under controlled laboratory conditions," says Dr. Brooke Devlin, PhD, the primary investigator, at Australian Catholic University in Melbourne. "It's rare that people eat foods in isolation, and findings from this study demonstrate how other factors, such as the time of day or food pairings, need to be considered when investigating the GR of mixed meals in individuals with T2D."

Participants were provided the same breakfast and lunch, but they were randomly assigned to one of four dinners, each including either skinless white potatoes (test meal) prepared in three different ways (boiled, roasted, boiled then cooled then reheated) or basmati rice (control meal). Participants repeated the experiment, with a 9-day break in between each trial, to cycle through all test meals and the control. In addition to having blood samples collected regularly (both immediately after the meal and again every 30 minutes, for 2 hours), participants also wore a continuous glucose monitor overnight to track changes in blood sugar levels while sleeping.
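
Glycemic response from serial glucose readings like these is commonly summarised as an incremental area under the curve (iAUC) above the pre-meal baseline. The short example below shows that calculation on made-up readings; it is only an illustration of the general approach, not the study's analysis code.

```python
# Illustrative calculation of the incremental area under the curve (iAUC), a common
# summary of glycemic response from serial glucose readings. The readings below are
# made up; this is not the study's analysis code.

times = [0, 30, 60, 90, 120]          # minutes after the evening meal
glucose = [5.5, 7.8, 8.4, 7.1, 6.2]   # mmol/L, hypothetical values

baseline = glucose[0]
above = [max(g - baseline, 0.0) for g in glucose]   # ignore dips below baseline

# Trapezoidal integration of the glucose excursion above baseline.
iauc = sum(
    0.5 * (above[i] + above[i - 1]) * (times[i] - times[i - 1])
    for i in range(1, len(times))
)
print(f"iAUC: {iauc:.1f} mmol/L x min")
```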

There were no differences in post-dinner glucose response between meals containing any of the potato dishes or basmati rice. Moreover, participants' overnight GR was more favorable after eating the evening meal that included any of the high GI potato side dishes than after the meal with low GI basmati rice.

"These findings are contrary to that of observational research and traditional dietary guidance that has led some to believe potatoes are not an appropriate food choice for people with T2D," added Devlin. "Our study shows high GI foods, like potatoes, can be consumed as part of a healthy evening meal without negatively affecting GR -- and while delivering key nutrients in relatively few calories, which is essential for people with T2D."

This study followed a rigorous methodology by using a randomized crossover design and measuring glucose levels both immediately post-meal and overnight to obtain a better picture of the potatoes' impact on GR. However, the researchers noted a few limitations: study participants' baseline GR was assessed for only one evening meal, the dinner provided was larger than what is typically recommended for people with T2D (but in line with Australian eating patterns, at 40 percent of an individual's total energy intake), and the potatoes' impact on long-term glycemic control was not assessed.

Despite such limitations, the researchers concluded that "potatoes are a vegetable that is sustainable, affordable and nutrient-dense, and thus, they can play an important role in modern diets irrespective of metabolic health status."

Credit: 
FoodMinds LLC

Study reveals details behind transplant disparities experienced by Black patients

Highlights

In an analysis of information on patients with kidney failure, Black patients are less likely than white patients to be placed on transplant waiting lists.

Among patients on such lists, Black patients are less likely to receive transplants than white patients.

Results from the study will be presented online during ASN Kidney Week 2020 Reimagined October 19-October 25.

Washington, DC (October 25, 2020) -- Studies have observed that Black patients are less likely to receive kidney transplants than white patients, but it's not clear when during the transplant evaluation process this disparity occurs. Research that will be presented online during ASN Kidney Week 2020 Reimagined October 19-October 25 indicates that the disparity arises after physicians refer patients for transplantation.

The analysis included 60,229 patients (23,499 Black and 36,730 white) who started dialysis between 2015 and 2018 at a large dialysis organization.

Compared with whites, Black patients were 23% more likely to be referred for transplantation. Among referred patients, Black patients were 19% less likely to be placed on a waitlist than whites. Among wait-listed patients, Black patients were 52% less likely to receive a transplant than whites. Overall, Black patients were 54% less likely to receive transplants than white patients.

"We found that Black patients were actually more likely to be referred to a transplant center after starting dialysis, compared with white patients; however, they were less likely to be waitlisted for a transplant after referral, and less likely to receive a transplant once waitlisted," said lead author Steven M. Brunelli, MD, MSCE (DaVita Clinical Research). "Racial disparities seem to emerge beginning at the listing stage and carry through the organ allocation stage."

Credit: 
American Society of Nephrology

Oncotarget: Survival after resection of brain metastases: A matched cohort analysis

image: This figure depicts overall survival and local in-brain recurrence-free survival in the study's subgroups.

Image: 
Correspondence to - Bawarjan Schatlo - bawarjan.schatlo@med.uni-goettingen.de

The cover for issue 32 of Oncotarget features Figure 2, "This figure depicts overall survival and local in-brain recurrence-free survival in the study's subgroups," by Hussein, et al. which reported that the aim of the present study is to assess whether the use of 5-ALA has an impact on local recurrence or survival compared to conventional white light microscopic tumor resection.

Two groups were compared:

In the "white light" group, resection was performed with conventional microscopy.

In the 5-ALA group, fluorescence-guided peritumoral resection was additionally performed after standard microscopic resection.

Local in-brain recurrence occurred in 21 of 175 patients overall, with rates of 15/119 in the white light group and 6/56 in the 5-ALA group.

The use of 5-ALA did not result in lower in-brain recurrence or mortality compared to the use of white light microscopy.

Dr. Bawarjan Schatlo from the Department of Neurosurgery at The University of Medicine Goettingen said, "Metastatic brain disease is more common than primary brain tumors."

"Metastatic brain disease is more common than primary brain tumors."

Another group made the case for extending tumor resection 5 millimeters into peritumoral tissue to perform a so-called supramarginal resection.

Its aim is to prolong progression-free survival through radical resection and improved local tumor control.

In a series of 52 patients, Kamp and colleagues detected positive fluorescence in 62% of resected cerebral metastases.

Thus, the utility and importance of using methods to improve local control of brain metastases remains an unresolved issue.

The aim of the current study was to compare survival and local recurrence in a cohort of patients who underwent surgery for brain metastases with 5-ALA fluorescence microscopy to one that was operated using microscopic white light only.

The Schatlo Research Team concluded in their Oncotarget Research Paper, "the present study confirmed that regardless of surgical adjunct, radiotherapy is strongly associated with improved survival."

Credit: 
Impact Journals LLC

New model predicts which patients with kidney disease may develop heartbeat irregularities

Highlights

* A new model that incorporates a type of artificial intelligence can accurately predict which individuals with chronic kidney disease face a high risk of developing atrial fibrillation.

* Results from the study will be presented online during ASN Kidney Week 2020 Reimagined October 19-October 25.

Washington, DC (October 24, 2020) -- A new model that uses machine learning, which is a type of artificial intelligence, may help predict which patients with kidney disease are at especially high risk of developing heartbeat irregularities. The findings come from a study that will be presented online during ASN Kidney Week 2020 Reimagined October 19-October 25.

Atrial fibrillation (AF)--an irregular, often rapid heart rate--is common in patients with chronic kidney disease (CKD) and is associated with poor kidney and cardiovascular outcomes. Researchers conducted a study to see if a new prediction model could be used to identify patients with CKD at highest risk of experiencing AF. The team compared a previously published AF prediction model with a model developed using machine learning based on clinical variables and cardiac markers.

In an analysis of information on 2,766 participants in the Chronic Renal Insufficiency Cohort (CRIC), the model based on machine learning was superior to the previously published model for predicting AF.
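
Comparing a previously published score with a machine-learning model ultimately comes down to measuring how well each discriminates on the same patients, for example by ROC AUC. The snippet below is a minimal, hypothetical sketch of such a comparison (the file and column names are invented), not the study's actual code or model.

```python
# Minimal, hypothetical sketch of comparing two risk models on the same patients by
# their discrimination (ROC AUC). File and column names are invented; this is not
# the study's actual code or model.
import pandas as pd
from sklearn.metrics import roc_auc_score

df = pd.read_csv("ckd_cohort_af_scores.csv")   # one row per patient, with both scores

auc_published = roc_auc_score(df["incident_af"], df["published_score"])
auc_ml = roc_auc_score(df["incident_af"], df["machine_learning_score"])

print(f"Previously published model AUC: {auc_published:.3f}")
print(f"Machine-learning model AUC:     {auc_ml:.3f}")
```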

"The application of such a model could be used to identify patients with CKD who may benefit from enhanced cardiovascular care and also to identify selection of patients for clinical trials of AF therapies," said lead author Leila Zelnick, PhD (University of Washington, in Seattle)

Credit: 
American Society of Nephrology

Discovery of pH-dependent 'switch' in interaction between pair of protein molecules

image: Upper panel: the different interaction sites observed by NMR under different pH conditions. Lower panel: the crystal structures of the complex under two pH conditions have different binding sites.

Image: 
FENG Yingang

All biological processes are in some way pH-dependent. Our human bodies, and those of other organisms, need to maintain a specific and constant pH in order to function. Changes in pH can have serious biological consequences or, as researchers at the Qingdao Institute of Bioenergy and Bioprocess Technology (QIBEBT), Chinese Academy of Sciences (CAS) found, serious benefits.

The findings are published on Oct. 23 in the journal Science Advances.

Cellulosomes are extracellular complexes consisting of multiple enzymes, which are associated with the cell's surface. Within the cellulosome structure, the protein molecules dockerin and cohesin were the focus of this study.

"Cellulosomes are complex nanomachines in nature and have great values in biofuel production and biotechnology. This study is an example of the complexity and diversity of cellulosomes," said study author FENG Yingang, Professor, Metabolomics Group.

Changes in pH have previously been shown to result in 'on-off' switches within protein functions, many of which occur naturally and are essential for life processes. Biotechnical innovations can utilize this relevant phenomenon to develop sensors or switches using biomolecules that are pH-dependent.

The latest discovery, on the cellulosome assembly of the bacterium Clostridium acetobutylicum, takes this prospect further by switching between two functional sites, rather than simply 'on' or 'off'. This opens additional possibilities.

"Our study not only revealed an elegant example of biological regulation but also provides a new approach for developing pH-dependent protein devices and biomaterials for biotechnological application," said FENG.

Researchers found that changing the pH from 4.8 to 7.5 resulted in the cohesin-binding sites on the dockerin molecule switching from one site to the other. This type of switching between two functional sites has not been noted for any interaction between proteins previously.

Nuclear magnetic resonance (NMR) and isothermal titration calorimetry (ITC) were used to describe the distinct features of this interaction. Researchers additionally noted that the affinity, or the attraction between the molecules, was found to change along with the pH. This property is considered unusual when compared to other cohesin-dockerin interactions and is unique, thus far, to C. acetobutylicum bacteria.

These discoveries, and future ones like them, can potentially be used to create more complex biological switches in synthetic biology and to drive further developments in biotechnology.

"Next, we will continue to elucidate the structure and regulation of cellulosomes, which could provide interesting novel discoveries and new strategies to increase the efficiency of lignocellulose-based biofuel production," FENG said. "Our ultimate goal is to promote sustainable and economical lignocellulose bioconversion and bioenergy production."

Credit: 
Chinese Academy of Sciences Headquarters

New test method to standardize immunological evaluation of nucleic acid nanoparticles

image: Schematic summary of the experimental flow.

Image: 
Melina Richardson (Afonin Lab, UNC Charlotte)

Therapeutic nucleic acids - lab-created segments of DNA or RNA designed to block or modify genes, control gene expression or regulate other cellular processes - are a promising but still emerging area of biomedical treatment, with several drugs already in use and many more in trials. Nucleic acid nanoparticles (NANPs) are programmable assemblies made exclusively of nucleic acids, with a number of therapeutic nucleic acid sequences embedded in their structure in a specific configuration. They are designed to package and deliver a number of intercellular or extracellular treatments simultaneously, causing multiple therapeutic actions in human cells.

Perhaps predictably for a new class of drugs, this promising new form of treatment has often run into difficulties in clinical testing. Recurring problems have kept many products under development from being approved for use, and have had a discouraging effect on continuing research. The foremost of these difficulties have been adverse immune reactions in response to the delivery of NANP-based formulations.

In a paper in Nature Protocols, nanotechnology researchers Marina Dobrovolskaia from the Frederick National Laboratory for Cancer Research and Kirill Afonin from the University of North Carolina at Charlotte describe the development of a reproducible protocol that accurately assesses the qualitative and quantitative immune properties of different NANPs when used to deliver therapeutic nucleic acids.

"Ten to twenty percent of all drugs are withdrawn during clinical trials due to immunotoxicity - nucleic acid therapies are not an exception," said Afonin, whose research, among other things, focuses on NANP development and understanding immune responses to NANP's. "This is especially true for NANPs because therapeutic use of nucleic acids is a relatively young area."

"There are lots of unknown immune characteristics of NANP's that can preclude them from entering clinical trials. This inhibits research in the field, because researchers know that after billions of dollars in testing expense you may still have a drug fail because of an adverse immune reaction in trials," he noted. "So, this is the key: how can we predict carefully the immune stimulation of a drug before we put it in a patient?"

The protocol proposed in the paper is a detailed step-by-step process for assessing inflammatory properties of any given NANP design when administered to humans, using human peripheral blood mononuclear cells ("white blood cells") as a test model. The in vitro experiments performed in the paper used cells freshly drawn and isolated from the blood of over 100 healthy human donors, though the paper notes that as few as three donors could be adequate to account for individual genetic diversity in immune cells.

"Aiming for a broad sample in our studies, we used more than 100 donors and the blood was drawn over different periods of the year, so it was a very heterogeneous pool of blood cells," Afonin noted.

"This protocol is reproduceable and it uses the most accurate model," he said. "It's more predictive of cytokine storms than animal models, which is, frankly, amazing. This also makes it affordable for more researchers, because they don't have to work with animals."

A reliable and accurate standardized protocol for assessing human immune response to different particle designs can be of great value in supporting research in NANPs, the paper argues: "In order to further advance the translation of NANPs from bench to clinic, the field is in great need of reliable experimental protocols for the assessment of both safety and efficacy of these novel nanomaterials."

"This is important because there are hundreds of researchers working on NANPs and everyone has their own preferred formulation," Afonin said. "The problem is that they all also use different protocols. When you read their publications, it is difficult to say which formulation is better because the conditions that they have tested them under are completely different - there is no harmony."

While a toxic immunological response might preclude a specific NANP design from entering clinical trials, the paper notes that in some therapies, some of the specific immune responses caused by some NANPs may, in fact, be useful and desired.

The protocol measures both the quantitative nature of the cell's immune reaction - the scale of the immune response - and the qualitative nature of it - what exact kind of chemical response(s) the immune reaction causes.

"The 'quality' being measured here is what kind of interferons or cytokines will be produced in reaction to the specific NANP," he said. "Both quality and quantity are crucial questions. And sometimes the immune response is not bad or undesired - by using this protocol, we can assess the quality and quantity of the immune response of a specific NANP so it can be used - as a vaccine adjuvant, for example."

Afonin is confident that the protocol produces highly accurate results because of the extensive experimentation that went into its design.

"The steps of this protocol have been thought through and validated for more than 60 different NANP designs, generated both by my lab and by other people in the field - a very representative sample," Afonin emphasized. "Our goal is to harmonize testing and make something that will be a milestone for future research."

Credit: 
University of North Carolina at Charlotte