
Cell growth: Intricate network of potential new regulatory mechanisms has been decoded

image: Structure and interactions of the EGF receptors. The work decodes a new network of factors that regulate the receptor via interactions with the juxtamembrane segment.

Image: 
HHU / Manuel Etzkorn

In the cover article of the Cell Press journal Structure, the authors - among them Dr. Manuel Etzkorn (HHU/FZJ) and Prof. Dr. Michael Famulok (Bonn) - now describe how the interface functions and what substances can interact with it.

Cell growth is controlled in part by proteins in the cell membrane. Among these, EGF receptors ('EGF' stands for Epidermal Growth Factor) form a central interface between the cell and its environment. Disruption to this system is therefore a frequent cause of cancers, which arise from incorrectly controlled cell growth.

Many drugs act directly on EGF receptors. These drugs target two key areas: the first is the sensory domain, which reaches out of the cell and interacts directly with messenger substances that bind to the cell from outside. The second is the kinase domain, which sits inside the cell and transmits the signal. In certain cancers, the cells have already developed resistance to active ingredients aimed at these two domains.

The EGF receptor comprises one further domain: the juxtamembrane (JM) segment, located between the external sensory domain and the kinase domain. Molecules are known to interact with this segment as well and can thereby influence signal transmission. But so far only a few interaction partners are known, and it is also unclear exactly how these interactions take place.

Researchers from HHU and FZJ as well as the University of Bonn have now identified a network of interaction partners for the JM segment. They also obtained high-resolution insights into the molecular architecture underlying these interactions. The receptor's third domain can therefore become more significant for the development of new active ingredients. In particular, it offers a new therapeutic approach for cancers that have become resistant to current active ingredients.

"Our research findings are initially of fundamental nature; they show new possibilities for influencing the EGF receptor system under defined laboratory conditions. This means that we are opening the door to developing new drugs, but there is still a very long way to go to a new therapy", says Dr. Manuel Etzkorn from the Biomolecular NMR Center, which is jointly run by the Institute of Physical Biology at HHU and the Institute of Complex Systems at FZJ. In this newly published study, his team focused on shedding light on the aspects of biological structure. The colleagues in Bonn under the leadership of Prof. Dr. Michael Famulok from the LIMES Institute and the Center of Advanced European Study and Research initiated the project and carried out the biochemical and molecular biological characterisation of the systems studied.

Credit: 
Heinrich-Heine University Duesseldorf

Directed evolution of endogenous genes opens door to rapid agronomic trait improvement

image: (a) STEME-mediated C>T and A>G base-editing strategy. (b) Distribution of edited DNA sequencing reads in rice protoplasts for STEME-1. (c) Procedure for mutating the OsACC CT domain via STEME using groups of individual sgRNAs. (d) Effects of STEME-induced mutations on herbicide resistance.

Image: 
IGDB

A research team led by Profs. GAO Caixia and LI Jiayang from the Institute of Genetics and Developmental Biology of the Chinese Academy of Sciences has engineered five saturated targeted endogenous mutagenesis editors (STEMEs) and generated de novo mutations to facilitate the directed evolution of plant genes. The study was published in Nature Biotechnology on Jan. 13.

Heredity and variation are the basis of organismic evolution. Random mutagenesis by physical or chemical methods has long been applied to improve traits in plants, but it is labor-intensive and time-consuming.

In higher organisms, especially plants, a target gene is usually transferred into a bacterial or yeast cell to generate the diversity required for selection. But once a target gene is no longer in situ, the functional consequences of a change may not be the same as in the native context. Moreover, most important agronomic traits cannot be selected for in bacteria or yeast.

"To establish powerful tools for directly inducing saturated targeted mutations and selection in plants will accelerate the development of agronomic traits and important functional genes," said Prof. GAO Caixia.

The researchers fused cytidine deaminase with adenosine deaminase to obtain four STEMEs. All four efficiently produced simultaneous C>T and A>G conversions using only a single sgRNA.

They also produced a fifth dual cytosine and adenine base editor, STEME-NG, to expand the targeting scope. With only 20 sgRNAs in rice protoplasts, STEME-NG produced near-saturated mutagenesis across a 56-amino-acid portion of the rice acetyl-coenzyme A carboxylase gene (OsACC).
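
To see why a modest pool of tiled sgRNAs can approach saturation of a short protein region, the following minimal Python sketch counts the positions reachable by C>T and A>G edits. It is purely illustrative, not the authors' pipeline: the example sequence, the guide start positions and the 10-nucleotide editing window are assumptions.

```python
# Illustrative sketch: how much of a coding region is reachable by C>T and
# A>G base edits from a set of tiled sgRNAs? (Sequence, guide positions and
# the 10-nt editing window below are assumptions, not the published design.)

def editable_positions(target_seq, guide_starts, window_len=10):
    """Return 0-based target positions where a C>T or A>G edit could occur."""
    positions = set()
    for start in guide_starts:
        for offset in range(window_len):
            pos = start + offset
            if pos < len(target_seq) and target_seq[pos] in ("C", "A"):
                positions.add(pos)
    return positions

# Hypothetical 30-bp stretch of a coding sequence and three tiled guides.
target = "ATGGCATCAGTACCTGAAGCTTGCAGGTCA"
guides = [0, 8, 16]

covered = editable_positions(target, guides)
codons = {pos // 3 for pos in covered}
print(f"{len(covered)} editable bases touch {len(codons)} of {len(target) // 3} codons")
```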

In a proof-of-concept experiment, the researchers used STEMEs to direct the evolution of the OsACC gene in rice plants. They sprayed the regenerated rice seedlings with the herbicide haloxyfop as the selection pressure and identified three novel (P1927F, W2125C, and S1866F) and one known (W2125C) amino acid substitutions conferring herbicide resistance. Based on the homology model of the CT domain of yeast ACC, these mutations affect the haloxyfop-binding pocket directly or indirectly.

The development of STEME paves the way for directed evolution of endogenous plant genes in situ, which is important for breeding via molecular design.

Moreover, the STEME process might also be applicable beyond plants. For example, it may be useful for screening drug-resistance mutations, altering cis elements in noncoding regions and correcting pathogenic SNVs in cell lines, yeast or animals.

Credit: 
Chinese Academy of Sciences Headquarters

Risk of lead exposure linked to decreased brain volume in adolescents

image: The cortex, visible here as folds, forms the outer layer of the brain and is important for information processing. The study led by Dr. Elizabeth Sowell of Children's Hospital Los Angeles shows that the cortex is adversely affected by high risk of lead exposure in children from lower income families. Image courtesy of Eric Kan of CHLA.

Image: 
Eric Kan of Children's Hospital Los Angeles

Though leaded gasoline and lead-based paint were banned decades ago, the risk of lead exposure is far from gone. A new study led by Elizabeth Sowell, PhD, shows that living in neighborhoods with a high risk of lead exposure is associated with differences in brain structure and cognitive performance in some children. Her findings, published in Nature Medicine, also point to a deeper trend: children in lower-income families may be at increased risk.

Dr. Sowell and her team at The Saban Research Institute of Children's Hospital Los Angeles hypothesized that children in lower income families could be particularly vulnerable to the effects of living in high lead-risk environments. Their previous findings show that the socioeconomic status of families affects brain development. Here, they examined the association of lead exposure risk with cognitive scores and brain structure in more than 9,500 children.

Dr. Sowell's laboratory is part of the Adolescent Brain Cognitive Development (ABCD) Study, which has enrolled nearly 12,000 children from 21 sites across the United States. ABCD follows participants from the age of 9-10 into adulthood, collecting health and brain development information. It is the largest and most comprehensive study of its kind. The wealth of data collected through ABCD allows investigators like Dr. Sowell to ask questions about factors that affect adolescent brains.

Their results showed that an increased risk of lead exposure was associated with decreases in cognitive performance and in the surface area and volume of the cortex - the surface of the brain, responsible for initiating conscious thought and action. But this was not true for children from mid- or high-income families.

No amount of lead is safe. Cognitive deficits have been attributed to lead exposure even at very low levels. More than 72,000 neighborhoods in the United States have been assigned risk estimates for lead exposure, based on the age of homes and poverty rates. Though lead-based paint has been banned from new housing since 1978, many older homes still contain lead hazards.

"Professional lead remediation of a home can cost $10,000," says Dr. Sowell, who is also a Professor of Pediatrics at the Keck School of Medicine of USC. "So, family income becomes a factor in lead exposure." Indeed, as her study reveals, the associations between lead risk and decreases in cognitive performance and brain structure are more pronounced in lower income families.

"We were interested in how lead exposure influences brain anatomy and function," says Andrew Marshall, PhD, a postdoctoral research fellow in Dr. Sowell's lab and first author of the publication. "Cognition is affected by low-level lead exposure, but there weren't any published studies about brain structure in these children."

Decreased cognitive scores and structural brain differences were observed only in lower-income families. "What we're seeing here," says Dr. Marshall, "is that there are more pronounced relationships between brain structure and cognition when individuals are exposed to challenges like low income or risk of lead exposure." The ABCD study has not yet examined blood lead levels in these children, but the authors showed that risk of lead exposure is predictive of blood lead levels. Further studies are needed to determine the precise cause of these differences, such as whether lead exposure itself or other factors associated with living in a high lead-risk environment are contributing to this association, but the study reveals a clear correlation between family income and the effects of living in high lead-risk census tracts.
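
The moderation effect described here can be pictured as a regression with an interaction term between lead-exposure risk and family income. The sketch below is a hedged illustration only, with invented column names and simulated data rather than the authors' ABCD models.

```python
# Hedged illustration (not the authors' model): does the association between
# neighborhood lead-exposure risk and cognition differ by family income?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "lead_risk": rng.uniform(0, 10, n),       # invented census-tract risk score
    "low_income": rng.integers(0, 2, n),      # 1 = lower-income family (invented)
})
# Simulate scores where lead risk matters mainly for lower-income children.
df["cognition"] = 100 - 1.5 * df["lead_risk"] * df["low_income"] + rng.normal(0, 5, n)

model = smf.ols("cognition ~ lead_risk * low_income", data=df).fit()
print(model.params)  # the lead_risk:low_income coefficient captures the moderation
```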

However, Dr. Sowell emphasizes that income and risk of lead exposure do not define a child. "It's absolutely not a foregone conclusion that these risks make you less intellectually capable," she says. "Many children who live in low-income, high-risk areas will be successful." Her goal is to promote awareness of how environmental toxins affect children. Understanding what our children face is the first step in helping them.

"Even though lead levels are reduced from three decades ago in the environment, it's still a highly significant public health issue," says Dr. Sowell. Despite this, there are kids in high-risk environments that do not show these deficits, indicating that it is possible to mitigate lead effects.

"The take home point is that this can be fixed," she says. "Lead does not have to be in the environment. We can remove it and really help kids get healthier."

Credit: 
Children's Hospital Los Angeles

AI can detect low glucose levels via ECG without fingerprick test

image: Dr. Leandro Pecchia with the new technology.

Image: 
University of Warwick

Tracking blood sugar is crucial for both healthy individuals and diabetic patients. Current methods to measure glucose require needles and repeated fingerpricks throughout the day. Fingerpricks can often be painful, deterring patient compliance.

A new technique developed by researchers at the University of Warwick uses the latest findings of Artificial Intelligence to detect hypoglycaemic events from raw ECG signals acquired via wearable sensors.

The technology works with 82% reliability and could replace the need for invasive finger-prick testing with a needle, which could be particularly useful for paediatric patients.

Researchers from the University of Warwick have developed a new technology for detecting low glucose levels via ECG: a non-invasive wearable sensor that, combined with the latest Artificial Intelligence, can detect hypoglycaemic events from raw ECG signals.

Continuous Glucose Monitors (CGMs) are currently made available by the NHS for hypoglycaemia detection (low sugar levels in blood or interstitial fluid). They measure glucose in interstitial fluid using an invasive sensor with a small needle, which sends alarms and data to a display device. In many cases, they require calibration twice a day with invasive finger-prick blood glucose tests.

However, Dr Leandro Pecchia's team at the University of Warwick has now (13 January 2020) published results in a paper titled 'Precision Medicine and Artificial Intelligence: A Pilot Study on Deep Learning for Hypoglycemic Events Detection based on ECG' in the Springer Nature journal Scientific Reports, showing that the latest findings of Artificial Intelligence (i.e., deep learning) can be used to detect hypoglycaemic events from raw ECG signals acquired with off-the-shelf non-invasive wearable sensors.

Two pilot studies with healthy volunteers found average sensitivity and specificity of approximately 82% for hypoglycaemia detection, which is comparable with current CGM performance despite being non-invasive.
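
To make the approach concrete, here is a minimal sketch of the kind of model such a system could use: a small 1D convolutional network that labels individual ECG beats as hypoglycaemic or normal and is trained separately for each subject. The architecture, beat length and training details are assumptions for illustration, not the published model.

```python
# Minimal sketch (assumed architecture, not the Warwick model): a per-subject
# 1D CNN that classifies single ECG beats as hypoglycaemic vs normal.
import torch
import torch.nn as nn

class BeatClassifier(nn.Module):
    def __init__(self, beat_len=200):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(32 * (beat_len // 4), 2)  # hypo vs normal

    def forward(self, x):              # x: (batch, 1, beat_len)
        return self.classifier(self.features(x).flatten(1))

# Personalised training sketch: each subject's own labelled beats fit their model.
model = BeatClassifier()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
beats = torch.randn(64, 1, 200)         # placeholder ECG beats
labels = torch.randint(0, 2, (64,))     # placeholder hypo/normal labels
for _ in range(5):
    optimiser.zero_grad()
    loss = loss_fn(model(beats), labels)
    loss.backward()
    optimiser.step()
```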

Dr Leandro Pecchia from the School of Engineering at the University of Warwick comments:

"Fingerpicks are never pleasant and in some circumstances are particularly cumbersome. Taking fingerpick during the night certainly is unpleasant, especially for patients in paediatric age.

"Our innovation consisted in using artificial intelligence for automatic detecting hypoglycaemia via few ECG beats. This is relevant because ECG can be detected in any circumstance, including sleeping."

The figure shows the output of the algorithms over time: the green line represents normal glucose levels, while the red line represents low glucose levels. The horizontal line marks the 4 mmol/L glucose value, which is considered the significant threshold for hypoglycaemic events. The grey area surrounding the continuous line reflects the measurement error bar.

The Warwick model highlights how the ECG changes in each subject during a hypoglycaemic event. The figure below is an exemplar. The solid lines represent the average heartbeats for two different subjects when the glucose level is normal (green line) or low (red line). The red and green shadows represent the standard deviation of the heartbeats around the mean.

A comparison highlights that these two subjects show different ECG waveform changes during hypo events. In particular, Subject 1 presents a visibly longer QT interval during hypo, while Subject 2 does not.

The vertical bars represent the relative importance of each ECG wave in determining if a heartbeat is classified as hypo or normal.

From these bars, a trained clinician sees that for Subject 1, the T-wave displacement influences classification, reflecting that when the subject is in hypo, the repolarisation of the ventricles is slower.

In Subject 2, the most important components of the ECG are the P-wave and the rising of the T-wave, suggesting that when this subject is in hypo, the depolarisation of the atria and the threshold for ventricular activation are particularly affected. This could influence subsequent clinical interventions.

This result is possible because the Warwick AI model is trained on each subject's own data. Inter-subject differences are so significant that training the system on cohort data would not give the same results. Likewise, personalised therapy based on this system could be more effective than current approaches.

Dr Leandro Pecchia comments:

"The differences highlighted above could explain why previous studies using ECG to detect hypoglycaemic events failed. The performance of AI algorithms trained over cohort ECG-data would be hindered by these inter-subject differences."

"Our approach enable personalised tuning of detection algorithms and emphasise how hypoglycaemic events affect ECG in individuals. Basing on this information, clinicians can adapt the therapy to each individual. Clearly more clinical research is required to confirm these results in wider populations. This is why we are looking for partners."

Credit: 
University of Warwick

Study: 'Value instantiation' key to luxury brands' social responsibility

image: Although luxury brands and social responsibility seem fundamentally inconsistent with each other, the two entities can coexist in the mind of the consumer, provided the brand can find someone -- typically, a celebrity -- who successfully embodies the two conflicting value sets, says new research co-written by Carlos Torelli, a professor of business administration and James F. Towey Faculty Fellow at Illinois.

Image: 
Photo by Gies College of Business

CHAMPAIGN, Ill. -- Luxury brands and corporate social responsibility initiatives make for unlikely bedfellows - the former with their self-enhancement values, the latter with their ethos of self-transcendence.

Although the values of haute couture designer handbags and clean drinking water campaigns would seem to clash, "value instantiation," which promotes the integration of disparate values, can help luxury brands thread the needle and counteract the adverse effects of value incompatibility, said Carlos Torelli, a professor of business administration and the James F. Towey Faculty Fellow at Illinois.

One of the most notable trends in marketing has been a pivot toward values-based marketing, which channels consumers' desires to make communities and society a better place to live. Notably, 85% of the top 50 global luxury brands - Prada, Tiffany and Rolex, for example - are involved in socially responsible activities such as philanthropy, environmental sustainability and ethical business practices.

But when luxury brands promote their corporate social responsibility agendas, they are, paradoxically, blending opposing values into their marketing strategies, which can result in negative responses from consumers, Torelli said.

"The two are fundamentally inconsistent with each other. If you want to pursue power, status and self-enhancement, it's hard to do that concurrently with focusing on others," he said. "Promoting two values simultaneously can lead consumers to experience a sense of unease or disfluency, resulting in unfavorable responses to the brand's marketing and product or service offerings. It's a mishmash of values."

But when consumers are exposed to an example of engaging in philanthropic activities while also striving for self-enhancement - say, when luxury brand managers promote the charity work of successful Hollywood celebrities such as Matt Damon, Angelina Jolie and Brad Pitt - they are more likely to consider that the two seemingly incompatible values can be pursued simultaneously, and envision themselves engaging in philanthropic activities while also purchasing luxury brands, the researchers say.

"Values are somewhat abstract; we need to make them concrete sometimes by giving an example of someone who embodies them," said Torelli, also the executive director of Executive and Professional Education at the Gies College of Business.

Across two studies, Torelli and his co-authors employed different approaches for value instantiation. In the first study, they exposed participants to the philanthropic activities of self-enhancement-driven celebrities. In the second, the researchers encouraged participants to visualize themselves engaging in philanthropic activities while pursuing self-enhancement values.

The results of both studies point to value instantiation as an effective tool in offsetting the harmful effects of integrating social responsibility appeals while selling luxury goods.

"It's a way for luxury brands to cope with trying to be more pro-social but not backfiring because of the incompatibility that the two things bring about," Torelli said.

The effect was particularly evident among the core consumer segment of luxury brands, who strongly pursue self-enhancement values, and thus would normally respond most negatively to social responsibility appeals, according to the researchers.

"It may seem disingenuous for a luxury brand to tout its altruism through its corporate social responsibility activities, but in marketing and branding, the trend for social responsibility has been growing dramatically over the last 20 years," Torelli said. "It used to be that a corporation was a profit-maximizing entity that didn't worry about anything else. But eventually, that's not optimal because firms have a responsibility and an interest in not behaving badly - in not contributing to pollution or damaging the environment, and not exploiting labor."

The research ultimately provides strategic guidance for luxury brands that wish to incorporate corporate social responsibility initiatives into their brand platform, Torelli said.

"For just about any business, there's more pressure now to be a partner in their communities - whether it's on a local scale or a global scale," Torelli said. "Some companies are doing it, and some feel left behind. Some do it right and some do it wrong. Ultimately, we provide a new conceptual framework for understanding how brands can overcome value incompatibility. Value incompatibility is at the heart of many struggles for 'old' brands to reposition themselves and make them more relevant for new consumers and the younger generation."

Torelli's co-authors are Deborah Roedder John, of the University of Minnesota; Alokparna (Sonia) Basu Monga, of Rutgers University; and Ji Kyung Park, of the University of Delaware.

The paper was published in the journal Marketing Letters.

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

Leviathan polymer brush made with E. coli holds bacteria at bay

image: Microscopic monster: A fortuitous moment in the lab led to the creation of polymer brushes 100 times the customary size. The brushes are visible under the microscope, whereas polymer brushes are usually detected with atomic force microscopes or other non-optical means. The faint green line at the bottom of this photo is the surface the bristles have grown on. The bristles are seen as a green-black mass reaching up from the surface to the red dots.

Image: 
Georgia Tech / Allison Carter

A lab goof with an enzyme taken from bacteria has led to the creation of the Leviathan of polymer brushes, emerging biocompatible materials with the potential to repel infectious bacteria.

Polymer brushes are surfaces normally covered with nanoscale bristles made of polymers, spaghetti-like molecular chains that are synthesized chemically. But in a new study, a team led by researchers at the Georgia Institute of Technology stumbled onto a biological technique to improve on the brushes by growing the bristles into giants 100 times the usual length.

"We were putting the enzyme onto a surface to observe it for a totally different experiment, but we put too much on the surface too densely, and - boom - we ended up with the thickest, longest polymer brush we'd ever seen or heard of," said Jennifer Curtis, who led the study and is an associate professor in Georgia Tech's School of Physics. "They were so big you could actually see them under an optical microscope instead of having to feel them with an atomic force microscope or use other methods needed for more customary polymer brushes."

The researchers diverted attention from the original study to pursue the freakishly large new brush.

To bacteria encroaching on them, the brush's bristles are a virtually impenetrable, squishy thicket that keeps microbes out in lab observations. It hinders the spread of biofilms, bacterial colonies that join together to form a tough material that makes killing the bacteria difficult.

Biofilm bulwark

"The human immune system has a hard time with biofilms. Antibiotics don't work very well on them either. In water filtration, biofilms can stick tenaciously, too. If you have a hyaluronan brush on a surface, a biofilm can't stick to it," Curtis said.

Hyaluronan, the compound in the bristles, is a polysaccharide, a chain of sugar molecules, and is naturally widespread in and around our cells. It is also known to many from its use in cosmetic moisturizers.

The enzyme that makes the hyaluronan bristles on the brush is hyaluronan synthase, and it circumvents more tedious chemical synthesis by effortlessly extruding extremely long bristles. The enzymes also can replace bristles when they break off, something chemically synthesized brushes cannot do, which limits those brushes' durability. Still, use of the synthase is unorthodox.

"Brush people say, 'What are these enzymes doing here?' because they're looking for chemistry, and biologists wonder what the brush has to do with biology," Curtis said.

The team published the new study, "Self-regenerating giant hyaluronan polymer brushes," in the journal Nature Communications in December 2019. The research was funded by the National Science Foundation.

Engineered E. coli

The researchers engineered bacteria to overproduce the enzyme by inserting hyaluronan synthase genes from the bacterium Streptococcus equisimilis into E. coli, and then harvested the enzyme.

"We shattered the bacteria into a bunch of non-living gooey fragments then adhered their membrane to surfaces, and the synthase extruded the brushes," Curtis said.

The enzymes can be switched on and off, and adjusting salt concentration or pH in the solution around the brushes makes the bristles extend to a straight form or curl up into a retracted form. Functional additives like antibacterials could be embedded in brushes.

Something like a catheter could conceivably one day be coated with brushes to remain bacteria-free, and the thickness of the wiggly brushes would also act as a lubricant by preventing frictive contact with the surface beneath them. Some human cells key to the healing process are actually able to sink through the bristles, which could have potential for medicine.

"For a chronic wound that won't heal, you may be able to design a bandage that encourages new cell growth but keeps bacteria out," Curtis said.

Biophysics research

The researchers' fortuitous detour into the giant brush has expanded possibilities for their original intent of studying enzymatic hyaluronan in isolation.

"We constantly deal with the coupling of biochemistry, chemical signaling, and mechanics, so having something that isolates the mechanics from the signaling so we can focus on just the mechanics is really useful," Curtis said.

Credit: 
Georgia Institute of Technology

How retailers can make more money in online auctions

COLLEGE PARK, Md. - To get more participants in online auctions and drive up the winning bid prices, two things matter: how long an auction is active and the day of the week it closes, find researchers from the University of Maryland, Stanford University, the University of Pennsylvania and Indiana University.

Forthcoming as "Managing Market Thickness in Online B2B Markets" in Management Science, the study focuses on online business-to-business (B2B) auction platforms used by retailers to sell their unsold or returned inventory to discount stores and wholesale liquidators.

Co-authors Wedad Elmaghraby (Maryland), Kostas Bimpikis (Stanford), Ken Moon (Penn) and Wenchang Zhang (Indiana) looked at how shortening or lengthening auctions impacted the number of buyers bidding in online auctions. They found that adjusting the listing policy (supply) - specifically, the days on which auctions close/end and how long they remain open - influences whether potential buyers place bids or wait and can increase sellers' revenues by several percentage points.

"On one hand, increasing supply by having more auctions end on a given day can profitably incentivize more participation in auctions, says Elmaghraby, professor of operations management and management science in Maryland's Robert H. Smith School of Business. "But auctions ending the same day cannibalize each other and actually drive down prices that day."

The researchers used data from a leading online platform that manages auctions to liquidate excess or returned inventory for more than 30 big-box retailers, including Costco, Walmart, Sears and Home Depot. The auctions work like an eBay auction, lasting from one to four days, with a winner-take-all bundle of similar products - electronics, household appliances, furniture or apparel - up for grabs. For the study, the researchers homed in on iPhone auctions.

They combed through the auction data to design the optimal listing policy. They found that concentrating auctions' ending times on certain days of the week leads to a 7.3 percent increase in a platform's revenues.

The researchers also say the auction platform can boost revenues if it implements a recommendation system to selectively inform sellers and bidders of the number of auctions currently open on the platform. They say the right system would increase the level of competition among bidders on days that the incoming supply is higher than average, once again helping to profitably match supply with demand.

Credit: 
University of Maryland

Reducing the risk of blood clots in artificial heart valves

image: Time evolution (from left to right) of systolic turbulent blood flow past a bileaflet mechanical heart valve.

Image: 
H. Zolfaghari, ARTORG Center, University of Bern

Most people are familiar with turbulence in aviation: certain wind conditions cause a bumpy passenger flight. But even within human blood vessels, blood flow can be turbulent. Turbulence can appear when blood flows along vessel bends or edges, causing an abrupt change in flow velocity. Turbulent blood flow generates extra forces which increase the likelihood that blood clots will form. These clots grow slowly until they are carried along by the bloodstream and can cause a stroke by blocking an artery in the brain.

Mechanical heart valves produce turbulent blood flows

Patients with artificial heart valves are at a higher risk of clot formation. This elevated risk is known from the observation of patients after the implantation of an artificial valve. The risk is particularly severe for recipients of mechanical heart valves, who must take blood thinners every day to combat the risk of stroke. So far, it is unclear why mechanical heart valves promote clot formation far more than other valve types, e.g. biological heart valves.

A team of engineers from the Cardiovascular Engineering Group at the ARTORG Center for Biomedical Engineering Research at the University of Bern has now identified a mechanism that can significantly contribute to clot formation. They used complex mathematical methods of hydrodynamic stability theory, a subfield of fluid mechanics that has been applied successfully for many decades to develop fuel-efficient aircraft. This is the first translation of these methods, which combine physics and applied mathematics, into medicine.

Using complex computer simulations on flagship supercomputers at the Centro Svizzero di Calcolo Scientifico in Lugano, the research team was able to show that the current shape of the flow-regulating flaps of the heart valve leads to strong turbulence in the blood flow. "By navigating through the simulation data, we found how the blood impinges at the front edge of the valve flaps, and how the blood flow quickly becomes unstable and forms turbulent vortices," explains Hadi Zolfaghari, first author of the study. "The strong forces generated in this process could activate the blood coagulation and cause clots to form immediately behind the valve. Supercomputers helped us to capture one root cause of turbulence in these valves, and hydrodynamic stability theory helped us to find an engineering fix for it."

The mechanical heart valves used in the study consist of a metal ring and two flaps rotating on hinges; the flaps open and close with each heartbeat to allow blood to flow out of the heart but not back in. In the study, the team also investigated how the heart valve could be improved. They showed that even a slightly modified design of the valve flaps allowed the blood to flow without generating the instabilities that lead to turbulence - more like in a healthy heart. Such turbulence-free blood flow would significantly reduce the chance of clot formation and stroke.

Life without blood thinners?

More than 100,000 people per year receive a mechanical heart valve. Because of the high risk of clotting, all these people must take blood thinners, every day, and for the rest of their lives. If the design of the heart valves is improved from a fluid mechanics point of view, it is conceivable that recipients of these valves would no longer need blood thinners. This could lead to a normal life - without the lasting burden of receiving blood thinner medication. "The design of mechanical heart valves has hardly been adapted since their development in the 1970s," says Dominik Obrist, head of the research group at the ARTORG Center. "By contrast, a lot of research and development has been conducted in other engineering areas, such as aircraft design. Considering how many people have an artificial heart valve, it is time to talk about design optimizations also in this area in order to give these people a better life."

Research group Cardiovascular Engineering

The ARTORG's Cardiovascular Engineering (CVE) group studies cardiovascular flows and diseases, such as valvular heart disease and heart attack. Its research aims to improve the long-term durability and biocompatibility of therapeutic devices and implants and to develop novel diagnostic tools for clinical practice. CVE translational research projects address immediate clinical needs that were identified together with clinical partners in Angiology, Cardiology and Cardiovascular Surgery at Inselspital, who are closely integrated in the project teams from start to finish. The team operates an experimental flow lab with modern measurement technology and a computational lab to model flows in the heart and blood vessels. Its experimental facilities include high-speed cameras and laser-based methods for three-dimensional flow quantification. The group develops and uses custom-tailored computer models and supercomputers to study biomedical flow systems with fluid-structure interaction.

Credit: 
University of Bern

Yale-led team finds parents can curb teen drinking and driving

Binge drinking by teenagers in their senior year of high school is a strong predictor of dangerous behaviors later in life, including driving while impaired (DWI) and riding with an impaired driver (RWI), according to a new Yale-led study.

But researchers also found that what teens believe their parents know about their leisure activities and who their friends are -- and whether the parents approve or disapprove of alcohol use -- can have life-saving effects.

"There is great prevention power in intentional parenting, and a strong, reliable, mutual relationship here can make all the difference in the world, including helping to identify the development of youth alcohol/drug use disorder and the need for specialized treatment services for addiction," said lead author Federico Vaca, M.D., M.P.H., professor of emergency medicine and director of the Yale Developmental Neurocognitive Driving Simulation Research Center.

Motor vehicle crashes are the leading cause of death for teens and young adults, and nearly a quarter of these crashes are alcohol-related.

Conducted with researchers from the National Institutes of Health (NIH) and Colorado State University, the study appeared in the journal Pediatrics.

Researchers analyzed data from the NEXT Generation Health Study, a national longitudinal study of high schoolers run by the NIH and others that followed 2,785 young people over the course of seven years.

They found that the protective effect of parental monitoring and teen awareness of parents' attitudes about alcohol lasted as much as four years after leaving high school.

"As kids get older, we tend to step away from them," Vaca said. "We think: 'They've got this.' But if kids think we approve or disapprove of them drinking, that can have a powerful effect. This is a really valuable opportunity to bolster parenting practices like parent monitoring and parent support for not using alcohol, but with a focus on intentionally strengthening the teen-parent relationship."

By 12th grade, 42% of young people have had an alcoholic drink in the past month, and 25% have had at least one binge-drinking episode. Generally, for women, binge drinking involves consuming four or more alcoholic drinks in two hours; for men, it's five or more drinks.

Extreme bingeing, a growing concern, refers to drinking 15 or more alcoholic drinks on a single occasion.

The researchers found that young people who binge in 12th grade were, two years later, six times more likely to drive while impaired than someone who did not binge drink, and, four years later, more than twice as likely to drive while intoxicated.

These teens were also more likely to ride with an impaired driver and to experience alcohol-related blackouts and extreme binge drinking in subsequent years.

But the study showed parents can have a positive influence.

According to the findings, if teens in 12th grade knew that parents disapproved of drinking, it decreased the odds of their driving while impaired by 30% four years later, and of riding with an impaired driver by 20% one year later. Parental support for not using alcohol also reduced later odds of blackout by 20%.

"A key take-home message here is: Just because kids are getting older, it doesn't mean parents should stop inquiring about where they are going, who they will be with, and how they are spending their money," said Vaca. "Parents should continue to be intentional about their relationships with their teens, staying connected and mindful about how their teen spends his or her free time. This could make the all the difference."

Credit: 
Yale University

Prolonged ECG monitoring of ED patients with syncope is safe alternative to hospitalization

image: A prospective observational multicenter cohort study of 242 patients 18 years and older, presenting to the ED with syncope.

Image: 
KIRSTY CHALLEN, B.SC., MBCHB, MRES, PH.D., LANCASHIRE TEACHING HOSPITALS, UNITED KINGDOM

DES PLAINES, IL -- Prolonged cardiac rhythm monitoring will improve arrhythmia diagnostic yield among non-low-risk emergency department patients with syncope. That is the finding of a study published in the January 2020 issue of Academic Emergency Medicine (AEM), a journal of the Society for Academic Emergency Medicine (SAEM).

The lead author of the study is Monica Solbiati MD, PhD, Department of Clinical and Community Sciences, University of Milan, Milan, Italy.

The multicenter study by Dr. Solbiati et al. found that while the overall diagnostic accuracy of emergency department electrocardiographic monitoring of non-low-risk patients with syncope is imperfect, the sensitivity of prolonged telemetry (>12 hours) is high. Although the optimal duration of ECG monitoring has not been defined, the results support the use of a minimum of 12 hours of monitoring as a safe alternative to hospitalization for the management of non-low-risk patients with syncope.

The study is the first designed specifically to assess the diagnostic accuracy of ECG monitoring in non-low-risk patients with syncope; the findings confirm the crucial role of telemetry in the ED management of patients with syncope. Further studies are needed to verify the safety and effectiveness of this strategy in terms of reducing unnecessary hospitalization and costs.

Venkatesh Thiruganasambandamoorthy, CCFP-EM, MSc, associate professor in the Departments of Emergency Medicine and Epidemiology at the University of Ottawa as well as a scientist at the Ottawa Hospital Research Institute, commented:

"This study provides additional evidence that prolonged cardiac rhythm monitoring will improve arrhythmia diagnostic yield among non-low-risk emergency department patients with syncope."

Dr. Thiruganasambandamoorthy is internationally recognized for his research on emergency department syncope. He is the recipient of several peer-reviewed grants, has won several excellence awards, and is a co-author of several national and international guidelines and consensus/position statements.

Credit: 
Society for Academic Emergency Medicine

EU project RES URBIS shows the viability of bioplastic generation with urban biowaste

image: The image shows a sample of the waste used, the product obtained from bacterial action, and several of the bioplastic products obtained in the project.

Image: 
J. Mata/UB

In a circular economy, turning city waste into resources is of great importance, considering that more than 70% of Europe's inhabitants live in urban areas and produce a great amount of biowaste, including that coming from the treatment of their waste waters. The European project RES URBIS (Resources from Urban Bio-waste) showed that the different biowaste produced in an urban environment can be treated within the same valorisation chain to obtain bio-based products, such as bioplastics, with a higher economic value than classic compost and biogas. The project confirmed the technical and economic viability of this process.

The experimental part of the project was carried out in two pilot plants, located in Lisbon (Portugal) and Treviso (Italy), and in five laboratories, one of them in the Faculty of Chemistry of the UB. It produced a total of 30 kg of polyhydroxyalkanoates (PHA), the basic polymer for making bioplastics, from volatile fatty acids derived from waste decomposition. The PHA was obtained through three new extraction methods developed within the project and later processed by the industrial partners of the consortium into commercial-use bioplastics.

"The results of the project were very positive. We obtained film samples of bioplastic to use them as an interlayer with adjacent film, with a great commercial potential. These bioplastics can be used as long-lasting goods and biocomposites with fibres produced with waste from parks and gardens", says Joan Mata, professor from the Department of Chemical Engineering and Analytical Chemistry, who leads the participation of the University of Barcelona in the project. "Also -he adds-, the conducted analysis show that the legislation states".

Regarding the commercialization of these bioplastics, the team considered the European regulatory framework on the potential health and environmental risks of chemical products (REACH-CLP), and although there is still a lot to do on defining the end-of-waste status of the product, "the scenario for the commercialization of the product is highly favourable", notes Mata.

More efficient refineries with a lower environmental impact

The life-cycle analysis of these bioplastics showed that the materials and energy used in PHA production through the biorefinery presented in the RES URBIS project have a lower environmental impact than those of fossil-based plastic production.

The RES URBIS technological chain improves on existing plants for the anaerobic digestion of biowaste. The economic analysis of the scenarios considered, among which is the Metropolitan Area of Barcelona, shows that PHA production is viable at a price of €3/kg, and even one euro less under the most favourable process conditions. Compared with currently commercialized PHA, which is obtained from dedicated cereal crops at a cost of €4-5/kg, this price shows the economic viability of the process.

"The following step will be to get funding through the EU and the private sector to build a demonstration plant", says Mata.

Credit: 
University of Barcelona

WSU study aims to prevent adverse drug reactions in dogs

image: Researchers Stephanie Martinez and Michael Court pose with their dogs Otis (left), Seamus (center), and Matilda (right). Matilda is a carrier of a mutation found by Martinez and Court, which results in less of the enzyme used to break down many popular anesthetics.

Image: 
WSU

If not identified before surgery, a rare genetic mutation could result in your dog being exposed to dangerously high levels of anesthetic agents.

Scientists at Washington State University's College of Veterinary Medicine initially discovered the mutation in greyhounds and more recently in other common dog breeds.

The research group, a member of the Program in Individualized Medicine (PrIMe), published its findings last week in Scientific Reports.

For years, veterinarians have known that some greyhounds struggle to break down certain drugs, which results in potentially life-threatening and prolonged recovery periods following anesthesia.

The previously unknown genetic mutation that the WSU researchers uncovered in greyhounds causes less of CYP2B11, the enzyme that breaks down these drugs, to be made.

Not surprisingly, the mutation was also found in several other dog breeds that are closely related to the greyhound including borzoi, Italian greyhound, whippet, and Scottish deerhound.

However, when the research team extended their survey to more than 60 other breeds, using donated samples from the WSU Veterinary Teaching Hospital DNA Bank, they were surprised by what they found.

According to the study, funded by the American Kennel Club's Canine Health Foundation, some popular dog breeds, including golden retrievers and Labrador retrievers, may also struggle to break down the commonly used anesthetics, midazolam, ketamine, and propofol.

"We started with a condition we thought was specific to greyhounds and affected a relatively small number of dogs," said Stephanie Martinez, postdoctoral research associate and lead author on the study. "It now appears that there could be a lot more dogs affected by this mutation -- dogs from breeds that we wouldn't have expected."

The study found about one in 50 golden retrievers and one in 300 Labrador retrievers may have low amounts of CYP2B11. According to the American Kennel Club, Labrador retrievers are the most popular breed of dog in the U.S., closely followed by golden retrievers, ranked third.

Even mixed-breed dogs were not spared; although the prevalence was much lower at only one in 3,000 dogs.

"While the mutation is not that common in most breeds - outside of greyhounds and other related breeds - because some of these other breeds are so popular, a relatively large number of dogs in this country could be affected." Martinez said.

Michael Court, the study principal investigator and veterinary anesthesiologist who began studying slow anesthetic drug breakdown in greyhounds over 20 years ago, said, "Although we have developed special anesthesia protocols that work very safely in greyhounds - the nagging question was - should we be using these same protocols in other dog breeds?"

Court and Martinez are now moving forward to create a simple cheek swab test that could be used by dog owners and their veterinarians to detect the mutation and determine an individual dog's sensitivity to the problematic anesthetic drugs.

"We also suspect that dogs with the mutation may have trouble breaking down drugs - other than those used in anesthesia." Court said. "The challenge now is to provide accurate advice to veterinarians on what drugs and drug dosages should be used in affected patients."

The research team is currently seeking volunteer golden retrievers and greyhounds to participate in a one-day study at the WSU Veterinary Teaching Hospital to continue their study of drug breakdown in these dog breeds.

Credit: 
Washington State University

Study: Humanity's footprint is squashing world's wildlife

image: Deforested landscape, Madagascar

Image: 
Julie Larsen Maher/WCS

NEW YORK (January 13, 2020) - A new study says that the planet's wildlife is increasingly under the boot of humanity.

Using the most comprehensive dataset on the "human footprint," which maps the accumulated impact of human activities on the land's surface, researchers from WCS, University of Queensland, and other groups found intense human pressures across the range of a staggering 20,529 terrestrial vertebrate species.

Of that figure, some 85 percent or 17,517 species have half their ranges exposed to intense human pressure, with 16 percent or 3,328 species entirely exposed.

The analysis found that threatened terrestrial vertebrates and species with small ranges are disproportionately exposed to intense human pressure. It also suggests that an additional 2,478 species considered 'least concern' have considerable portions of their range overlapping with these pressures, which may indicate a risk of decline.

The Human Footprint looks at the impact of human population (population density, dwelling density), human access (roads, rail), human land uses (urban areas, agriculture, forestry, mining, large dams) and electrical power infrastructure (utility corridors). These human pressures are well known to drive the current species extinction crisis.
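
Conceptually, the exposure figures come from overlaying each species' range on the gridded human footprint and asking what share of the range falls in cells under intense pressure. The sketch below illustrates that overlay with a toy pressure grid, a toy range mask and an assumed intensity threshold; it is not the study's GIS pipeline.

```python
# Illustrative overlay (toy data; the intensity threshold of 4 is an assumption):
# what fraction of one species' range lies in cells under intense human pressure?
import numpy as np

rng = np.random.default_rng(1)
footprint = rng.uniform(0, 50, size=(100, 100))   # placeholder human footprint grid
range_mask = np.zeros((100, 100), dtype=bool)
range_mask[40:60, 30:70] = True                   # placeholder species range

intense = footprint >= 4                          # assumed "intense pressure" cut-off
exposed_fraction = (intense & range_mask).sum() / range_mask.sum()

print(f"{exposed_fraction:.0%} of the range is under intense human pressure")
if exposed_fraction >= 0.5:
    print("This species would count toward the 'half of range exposed' tally")
```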

Though their findings are sobering, the authors say that the results have the potential to improve how species' vulnerability is assessed with subsequent benefits for many other areas of conservation. For example, the data can aid current assessments of progress against the 2020 Aichi Targets - especially Target 12, which deals with preventing extinctions, and Target 5, which deals with preventing loss of natural habitats.

Said the paper's lead author, Christopher O'Bryan of the University of Queensland: "Our work shows that a large proportion of terrestrial vertebrates have nowhere to hide from human pressures ranging from pastureland and agriculture all the way to extreme urban conglomerates."

Said senior author James Watson of WCS and the University of Queensland: "Given the growing human influence on the planet, time and space are running out for biodiversity, and we need to prioritize actions against these intense human pressures. Using cumulative human pressure data, we can identify areas that are at higher risk and where conservation action is immediately needed to ensure wildlife has enough range to persist."

Credit: 
Wildlife Conservation Society

Historical housing disparities linked with dangerous climate impacts

Extreme heat kills more people in the United States than any other type of hazardous weather and will likely become even deadlier due to climate change. However, extreme heat does not affect all people equally. Surface temperatures in different neighborhoods within a single city can vary by a whopping 20 degrees (F), making some people more at risk of experiencing dangerous temperatures.

A new study by researchers at the Science Museum of Virginia and Portland State University, with assistance from a student at Virginia Commonwealth University, is one of the first to link historical housing policies across the United States to inequitable heat exposure.

"We found that those urban neighborhoods that were denied municipal services and support for home ownership during the mid-20th century now contain the hottest areas in almost every one of the 108 cities we studied," said Vivek Shandas, professor of urban studies and planning at Portland State University. "Our concern is that this systemic pattern suggests a woefully negligent planning system that hyper-privileged richer and whiter communities. As climate change brings hotter, more frequent and longer heat waves, the same historically underserved neighborhoods--often where lower-income households and communities of color still live--will, as a result, face the greatest impact."

Jeremy Hoffman of the Science Museum of Virginia, and Nicholas Pendleton, a former student at Virginia Commonwealth University, also contributed to the study, which was published in the journal Climate on Monday, January 13th. The researchers examined the relationship between summertime surface temperatures, which were derived from satellite imagery, and historical housing policies, specifically 'redlining,' in 108 cities in the United States.

Neighborhoods with less green space and more concrete and pavement are hotter on average, creating 'heat islands.' In an earlier study of Portland, Oregon, Shandas and colleagues found that lower-income households and communities of color tend to live in heat islands. They found similar effects in other cities, and they wanted to know why.

To explore this question, they looked at the relationship between 'redlining' and surface heat. Beginning in the 1930s, discriminatory housing policies categorized some neighborhoods--designated with red lines--as too hazardous for investment. Thus, residents in 'redlined' neighborhoods were denied home loans and insurance. These areas continue to be predominantly home to lower-income communities and communities of color. While the practice of redlining was banned in 1968, this study aimed to assess the legacy effects of such policies within the context of rising temperatures.

The study found formerly redlined neighborhoods are hotter than all other neighborhoods in 94% of the 108 cities studied. In particular, the researchers found that redlined neighborhoods across the country are about 5 degrees Fahrenheit warmer, on average, than non-redlined neighborhoods. However, in some cities the differences are much more stark. For example, the cities of Portland, OR, Denver, CO and Minneapolis, MN showed the largest heat differences between redlined and non-redlined areas--as much as 12.6 degrees Fahrenheit.
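
The core calculation behind these numbers is a within-city contrast of mean summertime land surface temperature between formerly redlined and non-redlined neighborhoods. A minimal sketch with invented column names and toy values illustrates it; the study's actual satellite-derived data are not reproduced here.

```python
# Minimal sketch (toy values, invented column names): mean land surface
# temperature by redlining status, and the within-city difference.
import pandas as pd

df = pd.DataFrame({
    "city": ["Portland", "Portland", "Denver", "Denver"],
    "redlined": [True, False, True, False],
    "lst_f": [95.1, 82.5, 93.0, 84.2],   # placeholder summer surface temps (F)
})

by_city = df.pivot_table(index="city", columns="redlined", values="lst_f", aggfunc="mean")
by_city["difference_f"] = by_city[True] - by_city[False]
print(by_city)
```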

"The patterns of the lowest temperatures in specific neighborhoods of a city do not occur because of circumstance or coincidence. They are a result of decades of intentional investment in parks, green spaces, trees, transportation and housing policies that provided 'cooling services,' which also coincide with being wealthier and whiter across the country," said Shandas. "We are now seeing how those policies are literally killing those most vulnerable to acute heat."

"I think anyone living in these neighborhoods today will tell you that it's hot during a heat wave," said Hoffman. "But that's not really the point. They are not only experiencing hotter heat waves with their associated health risks but also potentially suffering from higher energy bills, limited access to green spaces that alleviate stress and limited economic mobility at the same time. Our study is just the first step in identifying a roadmap toward equitable climate resilience by addressing these systemic patterns in our cities."

There are ways to mitigate the effects of extreme heat on potentially vulnerable populations through urban planning, and the researchers want this study to lead to changes in the way we design our cities and neighborhoods.

"Having worked with dozens of cities to support the creation of heat mitigation plans, we want to recognize that all neighborhoods are not made equal," Shandas said. "Nevertheless, by recognizing and centering the historical blunders of the planning profession over the past century, such as the exclusionary housing policies of 'redlining,' we stand a better chance for reducing the public health and infrastructure impacts from a warming planet."

Credit: 
Portland State University

Can solar geoengineering mitigate both climate change and income inequality?

image: Malian local greenhouse production of food crops, including tomatoes, cucumbers, papayas, melons, and peppers.

Image: 
Anastasia Sogodogo/USAID

New research from the University of California San Diego finds that solar geoengineering--the intentional reflection of sunlight away from the Earth's surface--may reduce income inequality between countries.

In a study recently published in Nature Communications, researchers examine the impacts of solar geoengineering on global and country-level economic outcomes. Using a state-of-the-art macroeconomic climate impacts assessment approach, the paper is the first to look at the economic impacts of climate projections associated with solar geoengineering.

While de-carbonizing the world's emissions sources continues to pose a large challenge, solar geoengineering, a process whereby incoming sunlight is intentionally reflected to cool rising temperatures, could help avoid the worst consequences of global warming. This analysis is the first to project the response of Gross Domestic Product (GDP) to the specific pattern of cooling that solar geoengineering produces.

The methodology estimates the historical relationship between climate, represented as mean annual temperature and precipitation, and country-level growth in economic production, measured as GDP per capita. This estimated climate-economy relationship is then applied to project and compare economic outcomes across four climate scenarios for the next century: if global temperatures stabilize naturally; if temperatures continue to rise; if temperatures are stabilized as a result of geoengineering; and if temperatures are over-cooled by geoengineering efforts.
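
In code, the projection step amounts to compounding temperature-dependent growth rates over the century under each scenario. The sketch below uses an assumed quadratic growth-temperature response and placeholder temperature paths purely to illustrate the mechanics; the coefficients are not the paper's estimates.

```python
# Illustrative projection (placeholder coefficients and temperatures, not the
# paper's estimates): compound temperature-dependent GDP-per-capita growth.
import numpy as np

def growth(temp_c, baseline=0.02, b1=0.0127, b2=-0.0005):
    """Assumed quadratic growth-temperature response."""
    return baseline + b1 * temp_c + b2 * temp_c ** 2

def project_gdp(start_gdp, temps):
    gdp = start_gdp
    for t in temps:
        gdp *= 1 + growth(t)
    return gdp

years = 80
warming = np.linspace(27, 31, years)      # hypothetical hot country, warming path
cooled = np.full(years, 27.0)             # temperature held steady by geoengineering

print(project_gdp(1000, warming))         # end-of-century GDP per capita, warming
print(project_gdp(1000, cooled))          # end-of-century GDP per capita, cooled
```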

"While precipitation has little to no effect on GDP growth in our results, there is a relationship for temperatures," said first author Anthony Harding, a visiting graduate student with UC San Diego's School of Global Policy and Strategy from the Georgia Institute of Technology. "Applying these historical relationships for different models, we find that if temperatures cooled there would be gains in GDP per capita. For some models, these gains are up to 1,000 percent over the course of the century and are largest for countries in the tropics, which historically tend to be poorer."

In an economic model projecting a solar-geoengineered decrease in the average global temperature of around 3.5 degrees Celsius, the cooler climate would increase average incomes in developing tropical countries, such as Niger, Chad and Mali by well over 100 percent over the course of the century, compared to a model where warming continues to occur. For the U.S. and countries in Southern Europe, the same model showed a more moderate increase of about 20 percent. While the effects for each individual country can vary across models, the changes in temperature associated with solar geoengineering consistently translate into a 50 percent reduction of global income inequality.

Similar to previous studies that have explored the relationship between hot weather and low productivity, the findings published in Nature Communications do not reveal the mechanisms behind this correlation.

"We find hotter, more populous countries are more sensitive to changes in temperature - whether it is an increase or a decrease," said Harding. "Those hotter countries are typically also poorer countries. With solar geoengineering, we find that poorer countries benefit more than richer countries from reductions in temperature, reducing inequalities. Together, the overall global economy grows."

A fundamental step in understanding the potential risks and rewards of solar geoengineering

Harding and corresponding author Kate Ricke, assistant professor with UC San Diego's School of Global Policy and Strategy and Scripps Institution of Oceanography, highlight that there are many unknowns about the impacts solar geoengineering intervention efforts would have on the Earth's atmosphere, a cause of concern for scientists and policymakers.

However, predicting the economic impacts of solar geoengineering is a fundamental step towards understanding the risk tradeoff associated with the new field of study, which is advancing rapidly. Many emerging technologies have recently been developed to manipulate the environment and partially offset some of the impacts of climate change.

"There is a problem with solar geoengineering science in that there has been a lot of work on the physical aspects of it, however there is a gap in research understanding policy-relevant impacts," Ricke said. "Our finding of consistent reduction in inter-country inequality can inform discussions of the global distribution of impacts of solar geoengineering, a topic of concern in geoengineering ethics and governance debates."

While the economic models used in the study do not reveal the impacts solar geoengineering has on income inequality within countries' borders, the research results on GDP growth provide incentive for additional work on the global governance of solar geoengineering.

The authors write, "Our findings underscore that a robust system of global governance will be necessary to ensure that any future decisions about solar geoengineering deployment are made for collective benefit."

Credit: 
University of California - San Diego