Culture

A small twist leads to a big reaction

image: Molecular cages act as advanced catalysts to speed up the hydrolysis of amide bonds.

Image: 
© 2020 Fujita et al.

In proteins, amino acids are held together by amide bonds. These bonds are long-lived and are robust against changes in temperature, acidity or alkalinity. Certain medicines make use of reactions involving amide bonds, but the bonds are so strong they actually slow down reactions, impeding the effectiveness of the medicines. Researchers devised a way to modify amide bonds with a twist to their chemical structure that speeds up reactions by 14 times.

The amide bonds that hold amino acids in place to form proteins ordinarily have a flat shape, said to be planar. This planar arrangement is known to give amide bonds their incredible resilience to change. So if a reaction requires a breakdown of amide bonds, it's going to have a hard time unless it gets some help.

One such reaction is called hydrolysis, a chemical breakdown due to the presence of water. This is used by some medicines, for example antibiotics, within the body. Hydrolysis is also used in the lab as a tool for chemical analysis, amongst other things. So if hydrolysis can be made faster, it could benefit research in these areas. And this is exactly what Professor Makoto Fujita and his team at the Department of Applied Chemistry have done.

"Amide bonds can be hydrolyzed in time given sufficient heat or pH balance, but this can be inefficient or expensive at scale," said Fujita. "For many years we have fabricated self-assembling molecular cage structures which can confine and modify structures within them. In this case, we confined amide bonds in molecular cages which then applied a slight twist of 34 degrees to their otherwise planar structures. And the results are encouraging."

What Fujita and his team discovered was that, given this slight twist in their structures, amide bonds in certain configurations could be hydrolyzed at a much faster rate, with reaction rates up to 14 times higher. This could be of great benefit to researchers in drug development and other medical fields.

Credit: 
University of Tokyo

Balancing the economy while saving the planet

If you make your bio-product 100% sustainable, it may be far too expensive to produce. If you make it less environmentally friendly, you may, at some point, end up with a feasible product that can compete on market terms. But is it still sustainable? This balancing game is very real to many companies producing bio-chemicals, that is, chemicals produced from biomass instead of the petroleum from which chemicals are conventionally made.

Now, a group of scientists specializing in so-called techno-economic analysis (TEA) and life cycle assessment (LCA) has come up with a framework to ease this balancing act. The framework is important for making informed decisions, explains Ólafur Ögmundarson, adjunct professor and first author of a recent study in Trends in Biotechnology:

"By combining economic and environmental impacts in a monetary single score we can measure the trade-offs between these two indicators when assessing sustainability of biochemicals or biochemical processes. This is necessary, because biochemicals are not per default sustainable just because they are bio-based," he says.

Ólafur Ögmundarson, now an adjunct professor at the University of Iceland, is a former Senior Sustainable-Innovation Manager at The Novo Nordisk Foundation Center for Biosustainability at DTU, where he and his DTU co-workers performed most of this research.

Shifting the burden

With this framework, research and development units - both at universities and companies - can assess the true sustainability of biochemicals and optimize the performance both from an environmental and an economic viewpoint.

Basically, the framework can be used to find hotspots, or especially problematic steps, in the production process and help a company shift the burden to find the sweet spot where economic and environmental performance balance each other.

"There is a lot to gain by applying both methodologies for a truly sustainable future, because we all need chemicals and their different applications. We just need them to be sustainable!"

Useful in early phase of development

This framework makes it possible to develop sustainable bio-chemicals by choosing the right biomass input, fermentation process, downstream processes etc. for a given bio-compound.

Without assessing both the environmental and economic sustainability from an early stage of development, companies may end up with some high-damage steps that could, or should, be eliminated.

"The framework might show that at an early stage of development it is possible to optimise a certain production step to get the highest return on investment and simultaneously cause the lowest environmental impacts. This framework is a way to assess the full impact of a product or process - not just looking at either economy or sustainability, which is unfortunately rather common today."

Money talks

In order to make a TEA-LCA assessment that ends in a monetary single score for a given bio-product, the researchers had to translate environmental impacts into a monetary value so that economic and sustainability factors could be compared directly. This is imperative, Ólafur Ögmundarson explains:

"Let us take an example. A new molecule is being developed in a laboratory. It shows great market potential, and the economic indicator is, therefore, good. This is despite that a lot of energy, toxic chemicals and water are used in the production process. Because these inputs are cheap, they don't have an impact on the economic indicator. But they have huge negative environmental impact. So, the environmental indicators have to be converted into money to count equally in the framework."

The devil is in the detail

Unfortunately, this framework is not 'plug-and-play' for ordinary users and still requires a lot of knowledge about TEA and LCA.

"To create an online tool for everyone to use would, of course, be ideal, but it is also not easy. The devil is in the detail. We need to know the estimated materials needed and energy inputs and outputs for the different process steps. So, based on the current setup of the framework, individual cases must be assessed independently."

Going forward, the researchers hope to develop this framework further. This includes exploring other ways of monetizing environmental damages and expanding which costs are included in the calculations.

"We need to find a way to include the social pillar of sustainability in the framework; worker's rights, working conditions, salary etc. and we would welcome collaboration on developing the framework further. This applies to both researchers and companies," he concludes.

Credit: 
Technical University of Denmark

Gap between rich, poor neighborhoods growing in some cities

image: Housing prices rebounded in Bexley after the Great Recession, but didn't in Linden. The results show deepening polarization of neighborhood values in Columbus.

Image: 
Ohio State University, Center for Urban and Regional Analysis

New research provides insight into how housing prices and neighborhood values have become polarized in some urban areas, with the rich getting richer and the poor becoming poorer.

The results of the study, done in Columbus, Ohio, suggest that some of the factors long thought to impact neighborhood values - such as the distance to downtown, nearby highways, or attractions such as city parks - no longer matter much to changing housing prices in an area.

Instead, what drives neighborhood values are the unique, local amenities and characteristics of each area, such as local businesses, schools, crime rates and social networks.

And these features are self-reinforcing over time, said Jinhyung Lee, lead author of the study and graduate student in geography at The Ohio State University.

"Over 15 years, we see the divide between rich and poor neighborhoods getting deeper and wider in Columbus," Lee said.

The results suggest that government officials need to provide direct investments in low-valued neighborhoods to break the self-reinforcing negative effects, he said.

The study, led by researchers in Ohio State's Center for Urban and Regional Analysis (CURA), was published online recently in the journal Geographical Analysis.

Researchers used a high-resolution housing transactions database provided by the firm CoreLogic that allowed them to analyze nearly 480,000 home sales in the Columbus area between 2000 and 2015 to see how housing values have changed in specific neighborhoods.
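The CoreLogic database itself is proprietary, but the basic operation behind such an analysis, tracking how sale prices move per neighborhood over time, can be sketched in a few lines of pandas. The schema and prices below are hypothetical, not the study's actual data or method.

```python
import pandas as pd

# Hypothetical transactions table; real CoreLogic fields differ and the
# data is proprietary. Prices are invented for illustration.
sales = pd.DataFrame({
    "neighborhood": ["Bexley", "Bexley", "Linden", "Linden"],
    "year":         [2008, 2015, 2008, 2015],
    "price":        [310_000, 355_000, 95_000, 78_000],
})

# Median sale price per neighborhood per year
medians = (sales.groupby(["neighborhood", "year"])["price"]
                .median()
                .unstack("year"))

# Relative change over the period: positive = recovery, negative = decline
medians["pct_change"] = (medians[2015] - medians[2008]) / medians[2008] * 100
print(medians)
```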

Traditionally high-valued neighborhoods, which include suburbs (Upper Arlington, Grandview Heights, Bexley), became even more prosperous over the 15 years. Low-valued neighborhoods in the city (Linden, Franklinton) lost value.

The study captured the impact of the Great Recession on the value of houses in the Columbus area.

Housing prices dropped significantly in all parts of the city between 2008 and 2011, as they did throughout the United States. But the recovery did not happen equally throughout the Columbus area.

"Areas that traditionally had high housing prices regained much of the recession-induced loss, while other areas did not," Lee said.

"This unequal recovery made the polarized neighborhood values in Columbus even worse."

Overall, high housing prices were clustered near the center of the city and in suburban areas, Lee said.

"In contrast, the areas between the city center and suburban areas had low housing prices, resulting in a donut-shaped housing price landscape," he said.

The researchers calculated how far each neighborhood was from major Columbus amenities, including downtown, the nearest rivers, Ohio State's campus, the Columbus Zoo and the closest city-maintained park.

Analysis of the data showed that the distance from these features didn't shape patterns of neighborhood value over time, as some long-standing theories indicated they might, Lee said.

"This suggests that the reasons why neighborhoods are becoming more polarized has more to do with what is going on each individual neighborhood," he said.

Results showed that the location of major highways in Columbus, particularly U.S. Interstate 71, shaped polarization. Many of the richer neighborhoods are clustered west of I-71, with the poorer neighborhoods to the east.

"This is consistent with work by public policy and urban history scholars documenting that highways were constructed to purposefully cut through poorer, minority neighborhoods and avoid more affluent ones," Lee said.

"It has served to further reinforce patterns of economic segregation."

The findings suggest that low-value neighborhoods are unlikely to improve over time on their own, according to Lee.

"Our research underscores the need for direct investments in neighborhoods to spark a development process," he said.

For example, governments can invest in job market training programs in low-value neighborhoods to offset the self-reinforcing negative effects that can make these neighborhoods worse off over time, he said.

Lee said he expects similar polarizing trends exist in neighborhood values in cities across the nation. But the exact ways they get there may differ depending on factors like the size of the cities and their levels of decentralization.

Credit: 
Ohio State University

Chinese pterodactyl wings its way to the United Kingdom

image: The attached image shows Wightia declivirostris flying over an oxbow lake in the valley of the ancient Wessex River that flowed from Devon to the Isle of Wight.

Image: 
Megan Jacobs

The first ever UK specimen of a type of pterodactyl more commonly found in China and Brazil has been discovered.

A fossil hunter recently discovered a peculiar shaped fragment of fossil bone while out walking his dog in Sandown Bay on the Isle of Wight.

Not sure what it was, he passed it to University of Portsmouth palaeontology student Megan Jacobs, who thought it might be the jaw bone of a pterodactyl. Further research proved she was right.

However, this was no ordinary pterodactyl jaw. This one lacked teeth and was remarkably similar to a bizarre group of pterosaurs called 'tapejarids'. They are better known from China and Brazil and have never previously been found in the UK.

Just last year a team from the University of Portsmouth discovered a similar specimen in North Africa (Morocco), which they named Afrotapejara.

The new specimen from the Isle of Wight has been named Wightia declivirostris.

Megan Jacobs said: "Although only a fragment of jaw, it has all the characteristics of a tapejarid jaw, including numerous tiny holes that held minute sensory organs for detecting their food, and a downturned, finely pointed beak.

"Complete examples from Brazil and China show that they had large head crests, with the crest sometime being twice as big as the skull. The crests were probably used in sexual display and may have been brightly coloured."

The researchers determined that the Isle of Wight example seemed more closely related to the Chinese tapejarids than to the Brazilian examples.

Co-author of the study Professor David Martill, a palaeontologist from the University of Portsmouth, said: "This new species adds to the diversity of dinosaurs and other prehistoric reptiles found on the Island, which is now one of the most important places for Cretaceous dinosaurs in the world."

The finder has kindly donated the specimen to Dinosaur Isle Museum at Sandown, where it is hoped it will go on display in the future.

Credit: 
University of Portsmouth

Biophysicists reveal how optogenetic tool works

image: Left: KR2 rhodopsin pentamer in its active state in the cell membrane (two horizontal disks). Right: sodium binding site in the active center of the protein. The distances to the oxygen atoms coordinating sodium are given in angstroms, or ten-billionths of a meter. The black grid is the electron density map. The violet sphere denotes a sodium ion.

Image: 
Kirill Kovalev

An international research team has for the first time obtained the structure of the light-sensitive sodium-pumping KR2 protein in its active state. The discovery provides a description of the mechanism behind the light-driven sodium ion transfer across the cell membrane. The paper came out in Nature Communications.

KR2 is a member of a very large family of microbial rhodopsins -- light-sensitive proteins present in the cell membrane of archaea, bacteria, viruses, and eukaryotes. These proteins have a wide range of functions, including light-driven transport of ions across the membrane. Such ion channels and pumps are the primary tools of optogenetics, a booming field in biomedicine with a focus on controlling cells in the body by illuminating them with light.

Optogenetics came to prominence due to its contributions to minimally invasive techniques for brain research and neurodegenerative disorder treatments addressing Alzheimer's, Parkinson's, and other diseases. Beyond that, optogenetics enables reversing vision and hearing loss and restoring muscle activity.

Despite its many successes, further development of optogenetics is complicated by the limited number of available proteins suitable for cell activation and inhibition. For example, the most widely used optogenetic tool, channelrhodopsin 2, whose structure was originally reported in Science by MIPT researchers and graduates, can transport sodium, potassium, and calcium ions, as well as protons. The protein's low selectivity leads to undesirable side effects on cells. As a result, optimizing the protocols for using optogenetic tools is currently costly and time-intensive.

The search for new, more selective proteins is a priority for optogenetics. One of the candidates, the KR2 rhodopsin discovered in 2013, is a unique tool that selectively transports only the sodium ions across the membrane under physiological conditions. Understanding how KR2 works is crucial for optimizing the functional characteristics of that protein and using it as the basis for new optogenetic tools.

MIPT biophysicists published the first structures of KR2 in its various forms in 2015 and 2019. Among other things, they showed that the protein organizes into pentamers in the membrane, and that such behavior is vital to its functioning.

However, all the models described so far have looked at the protein in its inactive, or ground state. Yet it is only in the active state -- after illumination -- that the protein actually transports sodium. To understand how the KR2 pump works, the researchers have now obtained and described its high-resolution structure in the active state.

"We began by using the traditional approach, activating KR2 in pregrown protein crystals by illuminating them with a laser and getting a snapshot of the active state by rapidly freezing the crystals at 100 kelvins," said the study's first author, MIPT doctoral student Kirill Kovalev. "We got lucky, because such manipulations may well destroy the crystals. To avoid this, we had to fine-tune the laser wavelength and power and find the optimal exposure time."

Producing the large number of high-quality KR2 rhodopsin crystals necessary for the experiments has been made possible by the unique equipment of the MIPT Research Center for Molecular Mechanisms of Aging and Age-Related Diseases.

The most significant finding of the study is the identification of the amino acid residues that bind the sodium ion inside the KR2 molecule. They are the factor that determines the rhodopsin's selectivity toward a particular type of ion. In addition, a high-resolution structure of the protein's active state at 2.1 angstroms -- 21 hundred-billionths of a meter -- has revealed the precise configuration of the sodium ion binding site at the protein's active center. For the first time, the team showed that the binding site of KR2 became optimized for sodium ions in the course of rhodopsin evolution. This means that the active state structure obtained in the study is best suited for the rational design of next-generation KR2-based optogenetic tools.

"In the course of our work, we also obtained the active-state KR2 structure at room temperature," Kovalev added. "To achieve this, we had to update the well-known protocols for collecting crystallographic data. Besides, we employed a synchrotron radiation source to leverage the serial crystallography techniques, which are growing popular right now."

The room temperature KR2 structure confirmed that the protein model produced from a low-temperature snapshot is correct. This provided a direct demonstration that cryogenic freezing did not affect the rhodopsin's internal structure.

The structures reported in the paper have allowed the scientists to provide a first-ever description of active light-driven sodium ion transport across the cell membrane. Specifically, the study shows that sodium transport most likely involves a hybrid mechanism combining relay proton transport and passive ion diffusion through polar cavities in the protein. The mechanism proposed by the researchers has been confirmed via functional studies of mutated KR2 forms and molecular dynamics simulations of sodium ion release from the protein.

"Ion transport across the cell membrane is a fundamental biological process. That said, sodium ion transport should be enabled by a mechanism distinct from that involved in proton transport," explains Valentin Gordeliy, the director for research at the Grenoble institute for Structural Biology and the scientific coordinator of the MIPT Research Center for Molecular Mechanisms of Aging and Age-Related Diseases. "For the first time, we see how a sodium ion is bound inside the rhodopsin molecule and understand the mechanism for ion release into the intercellular space."

The biophysicists are convinced that their findings not only reveal the fundamental principles underlying ion transport across the membrane but will be of use to optogenetics. MIPT is continuing the development of optimized KR2 protein forms to expand the toolkit for brain research and neurodegenerative disease therapies.

Credit: 
Moscow Institute of Physics and Technology

Methodology for credibility assessment of historical global LUCC datasets

A study of the methodology for credibility assessment of historical global LUCC datasets has been published in SCIENCE CHINA Earth Sciences. The corresponding author is Professor Fang Xiuqi of Beijing Normal University.

Accurate historical global land use/cover datasets are essential for a better understanding of the impacts of LUCC on global change. However, there are not only evident inconsistencies among current historical global land use/cover datasets, but also inaccuracies in their data, as revealed by regional data reconstructed from historical records throughout the world. Assessing the credibility of existing global land cover datasets is a precondition for improving data quality. However, it is difficult to assess the credibility of historical global land cover data using the methods currently applied to contemporary data. This is because the actual past land cover (the "true value") that serves as the baseline for a credibility assessment is not directly accessible and in most cases must be reconstructed. Moreover, the historical and natural records available for land cover reconstruction are very limited, and a widely accepted method for such an assessment remains to be developed.

This study therefore proposes a methodological framework for credibility assessment of historical global land use/cover datasets that addresses temporal as well as spatial changes in the amount and distribution of land cover. It outlines four approaches based on accuracy, rationality and likelihood assessments, illustrated through five case studies: accuracy assessments of HYDE cropland cover data in Germany over the last 1000 years and in the North China Plain over the last 300 years; rationality assessments of HYDE cropland cover data in Northeast China over the last 1000 years and in the coastal plain adjoining the Bohai Sea over the last 7000 years; and a likelihood assessment based on the consistency of cropland cover data for China derived from 10 modern global land cover datasets.

(1) Accuracy assessment based on regional quantitative reconstructed land cover data. The accuracy assessment is a quantitative credibility assessment that uses quantitatively reconstructed regional land cover data derived from historical or natural records as the baseline.

(2) Rationality assessment based on regional historical facts. This qualitative assessment judges the extent of rationality by comparing the temporal and spatial conformity of land cover data from the global dataset with historical facts relating to regional development.

(3) Rationality assessment based on expertise. This qualitative assessment draws on expert knowledge to judge rationality, examining how well the data, assumptions, and methods of global land cover datasets match the related spatial and temporal patterns of nature and society, such as features of the natural environment or agricultural characteristics.

(4) Likelihood assessment based on the consistency of multiple datasets. This method is employed when the credibility of multiple existing datasets cannot be determined individually. Considering each global land cover dataset as an expert determination regarding the actual land cover, the likelihood of the credibility of the land cover data for a given spatial or temporal unit is assessed by measuring the degree of consistency of the data on that unit derived from multiple datasets.
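As a toy illustration of approach (4), the sketch below scores per-cell consistency of cropland fractions across several gridded datasets. The agreement metric (normalized spread across datasets) and all values are assumptions chosen for illustration, not the formula used in the study.

```python
import numpy as np

# Hypothetical cropland-fraction grids (rows x cols) for one region and
# year, from three datasets; all values are invented for illustration.
datasets = np.stack([
    np.array([[0.20, 0.50], [0.70, 0.10]]),  # dataset A
    np.array([[0.25, 0.45], [0.65, 0.30]]),  # dataset B
    np.array([[0.15, 0.55], [0.70, 0.50]]),  # dataset C
])

# Per-cell spread across datasets: low spread = high consistency, which
# this approach reads as a higher likelihood that the data are credible.
spread = datasets.std(axis=0)
likelihood = 1.0 - spread / spread.max()  # crude normalization to [0, 1]

print("Per-cell spread:\n", spread)
print("Relative likelihood of credibility:\n", likelihood)
```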

Credit: 
Science China Press

Material and genetic resemblance in the Bronze Age Southern Levant

image: General view of Megiddo

Image: 
© Dr Israel Finkelstein, Tel Aviv University

A team led by Ron Pinhasi at the University of Vienna carried out a detailed analysis of ancient DNA from individuals of the Bronze Age Southern Levant known as 'Canaanites' to provide insights into the historical and demographic events that shaped the populations of that time and area. The scientists aimed to answer three basic questions: How genetically homogeneous were the people of the Bronze Age Southern Levant, what were their plausible origins with respect to earlier peoples, and how much has ancestry in the region changed since the Bronze Age?

The team extracted and studied the DNA of people from five archaeological sites in the Bronze Age Southern Levant. They all share the "Canaanite" material culture and appear to descend from two sources: people who lived in the region in earlier times and people who arrived from the area of the Caucasus-Zagros Mountains. These populations mixed in roughly equal proportions.

The data show a strong genetic resemblance among these individuals, including a component from populations related to the Chalcolithic Zagros and Early Bronze Age Caucasus that was introduced by a gene flow lasting at least until the late Bronze Age and that affects modern Levantine population architecture. Modern groups in the region also harbor ancestry from sources that cannot fully be modeled with available data, highlighting the critical role of post-Bronze-Age migrations into the region over the past 3,000 years. The study provides evidence that the movement of Caucasus/Zagros people was already under way 4,500 years ago and likely started even earlier, continuing (although not necessarily continuously) throughout the Bronze Age.

"Populations in the Southern Levant during the Bronze Age were not static. Rather, we observe people movements over long periods of time - not necessarily continuously - from the northeast of the Ancient Near East into the region. The Canaanites are culturally and genetically similar. In addition, this region has witnessed many later population movements, with people coming from the northeast, from the south and from the west", says Ron Pinhasi.

From the viewpoint of archaeology and history of the Ancient Near East, the team was surprised to see the strength of the Caucasus/Zagros component in the population of the Bronze Age, and that migration from this area continued as late as the second millennium BCE. According to archaeological findings, the Bronze Age Southern Levant was divided into city-states, which present similar material culture. Now it can be concluded that similarity between these populations extends also to genetics, showing that it is a case of cultural unity associated with shared ancestry. "Our results provide a comprehensive genetic picture of the primary inhabitants of the Southern Levant during the second millennium BCE", says Pinhasi.

Credit: 
University of Vienna

Previously claimed memory boosting font 'Sans Forgetica' does not actually boost memory

image: Text in Sans Forgetica. Sans Forgetica is licensed under the Creative Commons Attribution-Non Commercial License (CC BY-NC; https://creativecommons.org/licenses/by-nc/3.0/)

Image: 
Sans Forgetica is licensed under the Creative Commons Attribution-Non Commercial License (CC BY-NC; https://creativecommons.org/licenses/by-nc/3.0/)

A font called Sans Forgetica was designed to enhance people's memory for information displayed in that font--compared to reading information in an ordinary font, such as Arial.

But scientists from the University of Warwick and the University of Waikato in New Zealand have discovered that Sans Forgetica does not enhance memory.

These scientists carried out four experiments comparing Sans Forgetica's alleged powers to those of ordinary fonts and found that Sans Forgetica did not help.

The Sans Forgetica font received much press coverage after researchers in Australia claimed they had designed a new font that would boost memory by making information displayed in it feel more difficult to read - and therefore be remembered better.

The original team carried out a study on 400 students, and found that 57% remembered facts written in Sans Forgetica, whereas 50% remembered facts written in Arial.
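For a sense of scale, that seven-point gap can be run through a standard two-proportion z-test. The even 200/200 split of students across fonts assumed below is for illustration only; the original study's actual group sizes are not reported here.

```python
from math import sqrt
from statistics import NormalDist

# Reported recall rates from the original Sans Forgetica study; the even
# 200/200 split across fonts is an assumption made here for illustration.
n1, p1 = 200, 0.57  # Sans Forgetica
n2, p2 = 200, 0.50  # Arial

pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided

print(f"z = {z:.2f}, p = {p_value:.2f}")  # z = 1.40, p = 0.16
```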

But a team of scientists led by the University of Waikato, New Zealand, and involving the University of Warwick, has just published new findings in the paper 'Disfluent difficulties are not desirable difficulties: the (lack of) effect of Sans Forgetica on memory' in the journal Memory. After four experiments, they found no evidence of memory-boosting effects.

The four experiments included:

Establishing the extent to which material written in Sans Forgetica feels difficult to process

Comparing people's memory for information displayed in Sans Forgetica and Arial

Analysing the extent to which Sans Forgetica boosted people's memory for information in educational text

Testing people's understanding of concepts presented in either Sans Forgetica or Arial.

Across the four experiments with 882 people, the team found in Experiment One that Sans Forgetica feels harder to read than Arial.

In Experiment Two, they found that when they showed people pairs of words in Sans Forgetica or Arial, people recalled fewer Sans Forgetica pairs than Arial pairs.

In Experiment Three, they found that when people were shown some educational information in Sans Forgetica and Arial, and were then tested on what they could recall of the information, there was no evidence that Sans Forgetica improved their performance.

Finally, in Experiment Four, they found that when testing people's understanding of educational passages presented in Sans Forgetica or Arial, people had equal understanding of information presented in Sans Forgetica and Arial, and there was no proof that Sans Forgetica improved their understanding.

Dr Kimberley Wade, from the University of Warwick's Department of Psychology, comments:

"After conducting four peer-reviewed experiments into Sans Forgetica and comparing it to Arial, we can confidently say that Sans Forgetica promotes a feeling of disfluency, but does not boost memory like it is claimed to.

"In fact, it seems like although Sans Forgetica is novel and hard to read, its effects might well end there."

Andrea Taylor, from the University of Waikato, New Zealand, adds:

"Our findings suggest we should encourage students to rely on robust, theoretically-grounded techniques that really do enhance learning, rather than hard-to-read fonts."

Credit: 
University of Warwick

Featured research from NUTRITION 2020 LIVE ONLINE

Press materials are now available for NUTRITION 2020 LIVE ONLINE, a dynamic virtual event showcasing new research findings and timely discussions on food and nutrition. The online meeting will be held June 1-4, 2020.

NUTRITION 2020 LIVE ONLINE is hosted by the American Society for Nutrition (ASN), the preeminent professional organization for nutrition research scientists and clinicians around the world. ASN's flagship meeting, Nutrition 2020, was canceled due to the impacts of COVID-19.

NUTRITION 2020 LIVE ONLINE is free for all registrants. Simply create an account and get ready to enjoy streaming and on-demand content starting June 1.

Explore the full schedule, virtual abstract presentations, and on demand content to see all the exciting research topics that will be covered at NUTRITION 2020 LIVE ONLINE.

Materials are embargoed until June 1, 12 p.m. EDT, unless otherwise noted.

Leaders Call for 'Moonshot' on Nutrition Research (6/2, 2:30 p.m. EDT)
Science and policy experts urge coordinated research in face of national nutrition crisis

Experts Debate Saturated Fat Consumption Guidelines for Americans (6/3, 10:30 a.m. EDT)
Researchers weigh the evidence on saturated fat and heart disease

Dieting? Studies Weigh In on Opportunities and Risks
Progress and pitfalls in understanding the best way to lose excess weight

Study Pinpoints Top Sources of Empty Calories for Children and Teens
Children of all ages are consuming high amounts of added sugars and solid fats

Do Warning Labels Help People Choose Healthier Drinks?
Researchers examined more than 20 studies to find out if sugary drink warnings work

Playing Video Games Linked with Unhealthy Behaviors for College Men
Findings point to importance of educating gamers about healthy eating and exercise

Researchers Identify Seasonal Peaks for Foodborne Infections
New analysis approach could help identify when and where to conduct food safety inspections

Eating Whole Grains Could Help Lower Diabetes Risk
Large analysis looks at which types of carbohydrates affect risk of developing type 2 diabetes

Credit: 
American Society for Nutrition

As hospitals walk the tightrope of patient data-sharing, one system offers a new balance

image: Many medical centers are walking a tightrope between selling access to the patient data they hold, and respecting patients' rights and wishes. A new framework could help them.

Image: 
University of Michigan

Every major medical center in America sits on a gold mine. The data they hold about their patients and research participants could be worth millions of dollars to companies that would explore it for clues that could lead to new medicines, medical technologies, health apps and more.

Such efforts would take partnerships between industry and academic institutions -- which are already essential to medical innovation -- to a new level.

Before COVID-19 struck, major health systems had started selling the "mining rights" to troves of their health data and stored materials -- including details about patients' DNA found in samples of their blood or tissue. Current law allows this, as long as names and identifying details are stripped from patients' or research participants' individual records and samples before turning them over.

Now that the pandemic has squeezed hospitals' finances further, and increased the need for research on a grand scale, more medical centers may seek income from such "big data" agreements with industry partners. That's especially true for those whose patients also volunteer for in-house research studies.

But a new framework published in the New England Journal of Medicine could help them do so more responsibly, going beyond the minimum legal requirements and respecting patients by giving them more say in how their individual data may be used.

It was written by a team from Michigan Medicine, the University of Michigan's academic medical center -- one of the first to adopt such a framework. The authors lay out an approach already applied to thousands of U-M patients and research study volunteers, and dozens of projects.

"We believe our approach provides an ethical way to advance medical discovery and innovation while also respecting the trust patients and research participants put in the University of Michigan," says first author Kayte Spector-Bagdady, J.D., M.Bioethics, chief of the research ethics service of the Center for Bioethics & Social Sciences in Medicine and faculty at Michigan Medicine.

She wrote the piece with fellow members of a special U-M committee that oversees the university's process, including Sachin Kheterpal, M.D., M.B.A., associate dean for research information technology and a co-leader of U-M Precision Health, Ray Hutchinson, M.D., active emeritus professor of pediatrics and former associate dean for regulatory affairs, and Erin Kaleba, M.P.H., director of the office that oversees clinical research data.

Special consent

The crux of the system, launched in 2018, is an easy-to-understand informed consent document that research participants can choose to sign, in addition to the forms that they sign to take part in a U-M-run research project. The additional consent focuses on sharing their information, and any samples taken from them, outside the university.

They must first discuss the special outside-sharing consent form with research staff, who assess each participant's understanding of what giving the additional consent means.

The critical passage in the form reads: "You give permission to share your samples and information with researchers around the world including those working for companies. Researchers and their organizations may potentially benefit from the sale of the data or discoveries. You will not have rights to these discoveries or any proceeds from them."

More than half of research volunteers asked for such consent have given it. Once they do so, it opens up the possibility (with additional legal and ethical steps) for companies, foundations, medical specialty societies and nongovernmental agencies to access their samples and data to move innovation forward.

If their samples are being sought for a project with a specific company, they will be told about the project and company, though their consent applies to all approved industry use. They are told they can revoke their consent in the future, stopping their data from being shared further.

But, if they don't consent, the samples of tissue and blood taken during their care and research participation, and the contents of their health record, will be marked as off-limits for sharing with industry. U-M teams may still use it for academic research, under a broader consent document and ethics board approvals.

A dedicated gatekeeper

The authors are all members of the other crucial element of U-M's approach: a committee that must review, approve and track any projects that involve patient data or specimen sharing with companies.

The article in NEJM lays out the decision process followed by the Michigan Medicine Human Data & Biospecimen Release Committee. It must review all proposals involving transfer of data or human materials to non-academic entities, mostly through partnerships between a U-M researcher and an outside company.

There are a few exceptions. Aggregated summaries of data, which do not disclose individual participant information, do not need to be reviewed. Data and specimens collected under industry-sponsored clinical trials that already include sharing of information with the sponsoring company do not need committee review either.

The committee, which meets every other week, has on average three new proposals to review each time. Only a few have been rejected outright - mainly because the project proposed to use samples acquired before the new consent process, and there was no easy way to reach back to the people those samples had come from to ask for their consent.

The U-M framework does allow for the committee to grant exceptions, in rare cases, to the usual process.

For instance, if researchers and an outside partner are studying an "orphan" disease that affects few people, the committee weighs the importance of finding new treatment and prevention options against the individual participant's right to consent to industry use.

Past research has found that people who enroll in research are willing to accept some level of risk to themselves to help others with the same condition.

Though industry data-sharing doesn't carry risk of physical harm like a clinical trial might, it does carry a small risk that health data could be "re-identified" if matched with other types of available data sources, for instance in a databank of DNA from people who have taken ancestry DNA tests.

The committee even requires this level of consent when academic organizations are partnering with a commercial platform, such as an industry-supported disease registry.

Even in the face of COVID-19, and the pressing need to seek answers to a global pandemic, the framework is crucial, says Marschall Runge, M.D., Ph.D., U-M executive vice president for medical affairs and dean of the U-M Medical School.

"The temptation has never been greater to take shortcuts around health data protections to vie for huge federal grants or to develop and monetize intellectual property," says Runge. "That is why we have adopted our approach, and we hope it will serve as an example for others."

Credit: 
Michigan Medicine - University of Michigan

Those with IDD more likely to die from COVID-19, study shows

Syracuse, N.Y. - A new study published recently by researchers from Syracuse University and SUNY Upstate Medical University shows that people with intellectual and developmental disabilities (IDD) are more likely to die from COVID-19 than those without IDD.

According to the researchers, the disparity is likely related to a higher prevalence of comorbid diseases among those with IDD and/or the higher percentage of people with IDD living in congregate residential settings.

Their study, "Intellectual and Developmental Disability and COVID-19 Case-Fatality Trends: TriNetX Analysis," was published by ScienceDirect's Disability and Health Journal. The study included 30,282 people who were identified as COVID-19 positive in the TriNetX COVID-19 Research Network Platform.

"More attention is needed to this vulnerable health population in order to ensure their safety and well-being during this pandemic, including careful attention to the impact of public policies such as PPE prioritization and funding streams on the ability of residential service providers to guarantee quality care during this time," said researcher Scott Landes, an associate professor of sociology at Syracuse University's Maxwell School of Citizenship and Public Affairs and a research affiliate for the Lerner Center for Public Health Promotion.

The study was conducted by Landes and three researchers from SUNY Upstate Medical University in Syracuse, N.Y.: Dr. Margaret Turk, professor of physical medicine and rehabilitation; Dr. Margaret Formica, associate professor of public health and preventative medicine and associate professor of urology; and Katherine Goss from the Disability & Health Research Team. Here is a more detailed look at their findings:

Every individual in this study had COVID-19, so rates are case-fatality rates that gave the researchers an idea of the severity of the disease among both groups. Among ages 0-17, for every 100 individuals with COVID-19, 1.6 with IDD died and less than one without IDD died. Among ages 18-74, for every 100 individuals with COVID-19, 4.5 with IDD died compared to 2.7 without IDD. Rates were similar for those 75 and over - for every 100 individuals with COVID-19, 21.1 with IDD died and 20.7 without IDD died.

"Based upon the case fatality rates we report among those ages 18-74, if 100,000 individuals with IDD contract COVID-19 - which is entirely possible in light of the estimates of the size of this population and the cumulative incidence rates we are seeing in our research - we would expect 4,500 to die," Landes said. "Comparatively, among 100,000 individuals without IDD, we would expect 2,700 to die. That would be an excess of 1,800 IDD deaths and in my mind that is unacceptable."

The researchers also found that individuals with IDD had a higher prevalence of comorbid circulatory, respiratory, and endocrine diseases across all age groups. While they could not test causality in this data, it is possible this partly explains the differences they found in case-fatality rates. Some of this difference may also be due to the higher percentage of individuals with IDD who reside in congregate settings - a characteristic the researchers could not account for in the study but are continuing to investigate.

Credit: 
Syracuse University

Mapping immune cells in brain tumors

image: Previously, the composition of the tumor tissue in brain tumors, specifically with regard to immune cells, had not been explored in sufficient detail.

Image: 
UZH

The removal of a malignant brain tumor is something of a balancing act between removing as much tumor tissue as possible and protecting the healthy tissue. Since cancer cells infiltrate healthy brain tissue, it is often not possible to remove brain tumors completely during surgery. After an operation that removes as much of the tumor as possible, the prognosis can be improved by subsequent radiotherapy and chemotherapy, but a cure with conventional treatments is difficult to achieve.

Hope through immunotherapy

A team of researchers at the University of Zurich (UZH) and the University Hospital Zurich (USZ) has now found what types of immune cells are present in what numbers in different types of brain tumors. These very precise "tumor maps" are essential to gain a better understanding of the individual immune components in the tumor and to develop targeted immunotherapies that activate an immune defense reaction.

"Our immune system is very precise and efficient. The immune defense can eliminate individual tumor cells, while protecting healthy cells," explains Burkhard Becher of the Institute of Experimental Immunology at the University of Zurich. Immunotherapy can have astonishing success in treating some types of cancers - but with malignant brain tumors, immunotherapy has thus far yielded disappointing results. One of the reasons for this is previously, that the composition of the tumor tissue in brain tumors, specifically regarding to immune cells, was not explored with sufficient detail.

High-dimensional mass cytometry and complex computer algorithms

To characterize the immune cells in malignant brain tumors, the researchers analyzed tissue from the neurosurgery operating theaters of USZ using a method established at UZH called high-dimensional mass cytometry. This technology makes it possible to analyze millions of individual cells at the same time, at the single-cell level. The cells are characterized using numerous proteins on their surface and in the cell interior, which vary according to the cell type. The huge amount of data is then processed with complex, self-learning computer algorithms. "For every brain tumor, our technology gives an individual signature of the immune cells present. Similarities and differences between patients and tumor types can thus be compared," says Burkhard Becher.
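The article does not name the algorithms, but a common pattern for this kind of single-cell analysis is unsupervised clustering of a cells-by-markers matrix. The sketch below is a minimal illustration under that assumption, with simulated data; the team's actual pipeline is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical mass-cytometry matrix: one row per cell, one column per
# measured protein marker; values are simulated here for illustration.
n_cells, n_markers = 10_000, 30
expression = rng.lognormal(mean=0.0, sigma=1.0, size=(n_cells, n_markers))

# Arcsinh transform, a standard preprocessing step for cytometry data
transformed = np.arcsinh(expression / 5.0)

# Group cells into candidate immune-cell populations
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(transformed)

# Per-cluster cell counts: raw material for a tumor's "immune signature"
print(np.bincount(labels))
```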

Immune cell composition depends on the type of tumor

The study shows that it is primarily the tumor type which determines the kind, frequency and distribution of immune cells present in individual brain tumors. "Gliomas, which develop directly in the brain, look different from metastases of other tumors in the body which have spread into the brain. In gliomas too we can clearly differentiate between various sub-groups through the specific composition of the immune cells," add Ekaterina Friebel and Konstantina Kapolou, both PhD candidates in the collaborating research groups.

According to Marian Christoph Neidert, neurosurgeon at USZ, the results are not only helpful for better understanding the immunological mechanisms in brain tumors: "They also offer a basis for the development of immunotherapies that are tailored to the various types of brain tumors." The tumor tissue investigated in the study came from patients treated at the USZ Brain Tumor Center. However, further research work is still required before brain tumor patients will be able to benefit from the immunological findings.

Credit: 
University of Zurich

New study finds cannibalism in predatory dinosaurs

image: Big theropod dinosaurs such as Allosaurus and Ceratosaurus ate pretty much everything -- including each other, according to a new study.

Image: 
PLOS ONE

Big theropod dinosaurs such as Allosaurus and Ceratosaurus ate pretty much everything--including each other, according to a new study, "High Frequencies of Theropod Bite Marks Provide Evidence for Feeding, Scavenging, and Possible Cannibalism in a Stressed Late Jurassic Ecosystem," published last month in the journal PLOS ONE.

"Scavenging, and even cannibalism, is pretty common among modern predators," said lead author Stephanie Drumheller, a paleontologist in the University of Tennessee, Knoxville's Department of Earth and Planetary Sciences. "Big theropods, like Allosaurus, probably weren't particularly picky eaters if it meant they got a free meal."

Researchers surveyed more than 2,000 bones from the Jurassic Mygatt-Moore Quarry, a 152-million-year-old fossil deposit in western Colorado, looking for bite marks. They found more than they were expecting.

There were theropod bites on the large-bodied sauropods whose gigantic bones dominate the assemblage, bites on the heavily armored Mymoorapelta, and lots of bites on theropods, too, especially the common remains of Allosaurus. There were hundreds of them, in frequencies far above the norm for dinosaur-dominated fossil sites.

Some were on meaty bones like ribs, but researchers discovered others on tiny toe bones, far from the choicest cuts. Pulled together, the data paints a picture of an ecosystem where dinosaur remains lay out on the landscape for months at a time--a stinky prospect, but one that gave a whole succession of predators and scavengers a turn at eating.

But why were there so many bites on the Mygatt-Moore bones? That question is a little harder to answer, at least without similar surveys from other dinosaur sites for comparison.

The Mygatt-Moore Quarry itself is a little unusual.

Volunteer members of the public have excavated most of the fossils found at the quarry. Julia McHugh, curator of paleontology with the Museums of Western Colorado and a co-author of the study, decided to continue this tradition of outreach by bringing students into the lab to help with the project. Now two of them, Miriam Kane and Anja Riedel, are co-authors on the new study as well.

"Mygatt-Moore is such a unique place," McHugh said. "Science happens here alongside hands-on STEM education with our dig program and volunteers."

Having so many marks on hand let the researchers really dig into details that are sometimes harder to study in smaller collections. For example, theropod teeth are serrated, and once in a while the tooth shape is reflected in the bite marks they make. Another co-author, Domenic D'Amore of Daemen College, had earlier figured out a way to translate those striated tooth marks into body size estimates.

"We can't always tell exactly what species were marking up the Mygatt-Moore bones, but we can say many of these marks were made by something big," D'Amore said. "A few may have been made by theropods larger than any found at the site before."

For more than 30 years, researchers and others have worked the Mygatt-Moore Quarry intensively, but even after all that time, each season brings new discoveries in the field and in the lab. This snapshot of dinosaur behavior is proof that old bones can still hold scientific surprises.

Credit: 
University of Tennessee at Knoxville

Smart sponge could clean up oil spills

video: Watch the sponge in action.

Image: 
Northwestern University

EVANSTON, Ill. -- A Northwestern University-led team has developed a highly porous smart sponge that selectively soaks up oil in water.

With an ability to absorb more than 30 times its weight in oil, the sponge could be used to inexpensively and efficiently clean up oil spills without harming marine life. After the oil is squeezed out, the sponge can be reused many dozens of times without losing its effectiveness.

"Oil spills have devastating and immediate effects on the environment, human health and economy," said Northwestern's Vinayak Dravid, who led the research. "Although many spills are small and may not make the evening news, they are still profoundly invasive to the ecosystem and surrounding community. Our sponge can remediate these spills in a more economic, efficient and eco-friendly manner than any of the current state-of-the-art solutions."

The research was published yesterday (May 27) in the journal Industrial & Engineering Chemistry Research.

Dravid is the Abraham Harris Professor of Materials Science and Engineering at Northwestern's McCormick School of Engineering. Vikas Nandwana, a senior research associate in Dravid's laboratory, is the paper's first author.

Oil spill clean-up is an expensive and complicated process that often harms marine life and further damages the environment. Currently used solutions include burning the oil, using chemical dispersants to break down oil into very small droplets, skimming oil floating on top of water and/or absorbing it with expensive, unrecyclable sorbents.

"Each approach has its own drawbacks and none are sustainable solutions," Nandwana said. "Burning increases carbon emissions and dispersants are terribly harmful for marine wildlife. Skimmers don't work in rough waters or with thin layers of oil. And sorbents are not only expensive, but they generate a huge amount of physical waste -- similar to the diaper landfill issue."

The Northwestern solution bypasses these challenges by selectively absorbing oil and leaving clean water and unaffected marine life behind. The secret lies in a nanocomposite coating of magnetic nanostructures and a carbon-based substrate that is oleophilic (attracts oil), hydrophobic (resists water) and magnetic. The nanocomposite's nanoporous 3D structure selectively interacts with and binds to the oil molecules, capturing and storing the oil until it is squeezed out. The magnetic nanostructures give the smart sponge two additional functionalities: controlled movement in the presence of an external magnetic field, and remotely triggered desorption of adsorbed components, such as oil.

The OHM (oleophilic hydrophobic magnetic) nanocomposite slurry can be used to coat any cheap, commercially available sponge. The researchers applied a thin coating of the slurry to the sponge, squeezed out the excess and let it dry. The sponge is quickly and easily converted into a smart sponge (or "OHM sponge") with a selective affinity for oil.

Dravid and his team tested the OHM sponge with many different types of crude oils of varying density and viscosity. The OHM sponge consistently absorbed up to 30 times its weight in oil, leaving the water behind. To mimic natural waves, the researchers put the OHM sponge on a shaker submerged in water. Even after vigorous shaking, the sponge released less than 1% of its absorbed oil back into the water.

"Our sponge works effectively in diverse and extreme aquatic conditions that have different pH and salinity levels," Dravid said. "We believe we can address a giga-ton problem with a nanoscale solution."

"We are excited to introduce such smart sponges as an environmental remediation platform for selectively removing and recovering pollutants present in water, soil and air, such as excess nutrients, heavy metal contaminants, VOC/toxins and others," Nandwana said. "The nanostructure coating can be tailored to selectively adsorb (and later desorb) these pollutants."

The team is also working on another grade of OHM sponge that can selectively absorb (and later recover) excess dissolved nutrients, such as phosphates, from fertilizer runoff and agricultural pollution. Stephanie Ribet, a Ph.D. candidate in Dravid's lab and a coauthor of the paper, is pursuing this topic. The team plans to develop and commercialize OHM technology for environmental clean-up.

Credit: 
Northwestern University

Survey identifies learning opportunities related to health impacts of climate change

An international survey of the Global Consortium on Climate and Health Education (GCCHE) membership found that the majority of members--health professions schools and programs, including medical, nursing, and public health--offer learning opportunities related to the health impacts of climate change, yet many also encountered challenges in instituting or developing curricula. The results of the survey provide a baseline assessment of the state of climate-health education internationally among health professions institutions. Results of the survey by Columbia University Mailman School of Public Health researchers appear in the journal JAMA Network Open.

The survey suggests there exists a range of educational offerings on climate-health, including sessions, courses, programs, and post-doctoral positions. Some schools have offered climate-health education for several years, some are just now adding content, and others do not include any content on the subject. While many schools are discussing adding climate-health educational offerings, there are still considerable gaps in offerings at many institutions, as well as challenges that extend beyond the institutional level, such as political and funding priorities that might lead to a lack of staff time and materials to support the training.

Conducted in 2017 and 2018, the survey was completed by 84 health professions institutions internationally. Among respondents, 63% offer climate-health education, most commonly as part of a required core course (76%). Sixty-one of 82 respondents (74%) reported that additional climate-health offerings are under discussion; 42 of 59 (71%) encountered some challenges trying to institute the curriculum; and most have received a positive response to adding content, mainly from students (39 of 58; 67%), faculty (35 of 58; 60%), and administration (23 of 58; 40%).

The article's authors write that opportunities exist to facilitate the integration of climate-health curricula, such as working with students, faculty, and members of administration who are interested in this topic. To facilitate this integration, institutions can look to online resources, groups, and networks for guidance and information to develop curricula.

"We suggest that health professions schools include this content in their curricula and that awareness as well as financial support, resources, and expertise increase to help in its uptake," write study authors Brittany Shea, MA, Kim Knowlton, DrPH, and Jeffrey Shaman, PhD. "Climate change may be affecting health in a variety of ways with increasing consequences. Health professionals, including those in public health, nursing, and medical services, should be educated on how to prevent, mitigate, and respond to factors associated with climate change that may be associated with health in a negative way."

Brittany Shea is project director for GCCHE. Kim Knowlton is assistant professor of environmental health sciences and senior scientist with the Natural Resources Defense Council. Jeffrey Shaman directs GCCHE and the Columbia Mailman School Climate and Health Program; he is a professor of environmental health sciences.

Credit: 
Columbia University's Mailman School of Public Health