Interplay between MicroRNAs and targeted genes in cellular homeostasis of adult zebrafish

Image: New insights into microRNA function in liver and gut tissues. Credit: Dr. Gary Hardiman et al., Bentham Science Publishers

Cellular pathways represent the intricate metabolic connections between the many signaling pathways and energy sensors in organs. To maintain energy balance, these metabolic pathways must act in concert at any given point in time, ensuring organ functionality under specific metabolic changes. A class of non-coding RNAs, microRNAs, discovered relatively recently, has shown the process of metabolic homeostasis to be more complex than previously thought.

The objective of the research is to understand the damage caused by toxins to the liver and the intestine, as well as its relation to the miRNome. Baseline characterization of healthy animal tissue undergoing cellular homeostasis is required before transcriptome analysis. For the experiment, the researchers dissected wild-type zebrafish and isolated their liver and gut; from these organs, small RNA was extracted. These organs were chosen because the liver is the main site of detoxification and the gut is the primary site of contaminant exposure. From the RNA samples, mRNA and miRNA libraries were constructed and subjected to high-throughput sequencing. Following sequencing, differential analysis was performed comparing liver mRNA contents with those in the gut. An "miRNA matrix" pairing miRNA sequences with mRNA sequences was then constructed.
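The differential comparison described above can be sketched in a few lines; the read counts, gene and miRNA names, and the fold-change cutoff below are illustrative assumptions, not the study's actual data or statistics.

```python
import math

# Hypothetical normalized read counts (reads per million) per transcript.
liver = {"mRNA_cyp3a65": 900.0, "mRNA_fabp10a": 1200.0, "miR_122": 5000.0}
gut = {"mRNA_cyp3a65": 60.0, "mRNA_fabp10a": 15.0, "miR_122": 40.0}

def log2_fold_change(a, b, pseudocount=1.0):
    """log2 ratio of liver vs. gut abundance; the pseudocount avoids
    division by zero for transcripts absent in one tissue."""
    return math.log2((a + pseudocount) / (b + pseudocount))

# Differential analysis: flag transcripts enriched in the liver.
enriched_in_liver = {
    name: log2_fold_change(liver[name], gut[name])
    for name in liver
    if log2_fold_change(liver[name], gut[name]) > 2.0  # >4-fold difference
}
print(sorted(enriched_in_liver))
```

A real pipeline would use replicate libraries and a dedicated differential-expression test rather than a bare fold-change threshold; the sketch only shows the shape of the comparison.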

The results of the analysis provide new information about microRNA function in these tissues. The miRNome and transcriptome of both liver and gut were characterized and the relevant miRNAs identified. Studying the miRNA matrix revealed two classes of miRNA in the liver and gut: miRNAs unique to each organ regulate fundamental cellular processes important to both organs, while those common to both tissues regulate biological processes specific to either the liver or the gut.

Credit: 
Bentham Science Publishers

Study: Early career choices appear to influence personality

Image: In a study that tracked young adults over a period of six years, University of Illinois psychology professor Brent Roberts and his colleagues found that early life career choices are associated with shifts in personality. Photo by L. Brian Stauffer

CHAMPAIGN, Ill. -- In the state of Baden-Württemberg, Germany, 16-year-old students in middle-track schools decide whether to stay in school to pursue an academic career or enroll in a vocational training program. A new study offers evidence that the path they choose influences their personality years later.

The research is reported in the journal Psychological Science.

"We wanted to understand whether choosing different career paths would result in different patterns of personality development," said University of Illinois psychology professor Brent Roberts, who led the study with researchers from the University of Tubingen in Germany.

"We know from our prior research that entering the labor market is associated with increases in personality traits like conscientiousness and emotional stability," Roberts said. "But we seldom have the opportunity to compare groups of people at the same age who choose different paths."

The team focused on two groups of 16-year-olds in Baden-Württemberg. The first chose to enter apprenticeships or other vocational training programs; the second continued in school and entered the labor market after attending higher education.

At the beginning of the study, and again six years later, researchers asked participants to rate themselves on multiple measures that included personality traits and vocational interests.

The team used a technique called propensity score matching to align the traits of the two groups of study subjects.

"In this approach, you do everything you can to equate the two groups at the start of the study," said study co-author Ulrich Trautwein, of the University of Tubingen. "This approximates an experimental design that tries to equate groups through random assignment. Many social scientists believe this method allows you to make stronger causal inferences from correlational data."

The study revealed that, after six years, self-reported conscientiousness increased more among those who pursued vocational training and employment than among their peers in academia. Those on the vocational track also expressed less interest in engaging in scientific, business or entrepreneurial activities.

"This means that those who didn't continue their education were losing interest in jobs that normally are fostered by going to college," Roberts said.

The new research adds to mounting evidence that personality is not immutable, but changes throughout life, Roberts said. The changes are often subtle, but meaningful. The evidence suggests many of those changes are the result of one's life choices.

"This study provides the strongest evidence we have yet that the path you choose may change your personality," Roberts said.

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

Graphic warnings snuff out cigarettes' appeal to kids

ITHACA, N.Y. - New research from Cornell University suggests graphic warning labels on cigarette ads have the same anti-smoking effect as similar warning labels on cigarette packs.

The labels - which contain images such as bleeding, cancerous gums and lips - also cancel out the effect of ads that prompt children to think of smoking as cool, rebellious and fun, according to the research.

"This study suggests the value of graphic warning labels extends beyond just getting people to have more negative feeling about smoking," said lead author Jeff Niederdeppe, associate professor of communication, who wrote the paper with a team of Cornell-affiliated researchers. "It also seems to have the added benefit of reducing the influence of 'social cue' ads that entice young people to want to smoke in the first place."

The paper, "Using Graphic Warning Labels to Counter Effects of Social Cues and Brand Imagery in Cigarette Advertising," was published in Health Education Research.

Researchers studied the graphic warning labels' effect on 451 adult smokers and 474 middle schoolers in rural and urban low-income communities in the Northeast. Each participant was randomly assigned a set of six ads. Some saw ads with social cues - such as a group of smiling people taking a selfie - with a graphic warning label covering 20 percent of the ad. Other groups saw ads with various combinations of text-only warnings, graphic warnings, the current surgeon general's warning, brand imagery and social cues.

Using Cornell's mobile media lab, researchers tracked study participants' eyes to measure which parts of the ad they looked at and for how long. After viewing the ads, participants reported the degree to which they felt negative emotions, including anger, fear and sadness. Warning labels drew viewers' attention away from the ads and toward the warning more than the current surgeon general's warning did, regardless of whether the label was graphic or text-only.

The graphic warning labels also aroused more negative feelings than the text-only labels and reduced the children's perceptions that cigarette brands are attractive and exciting.

"That's important, because there's pretty good evidence that the visceral reactions to these warnings are a main driver of their effectiveness," Niederdeppe said. "These ads are trying to create a positive brand image, and the graphic warning labels help suppress that."

The study also found participants felt the same levels of negative emotion whether they looked at a graphic warning label covering 20 percent of a full page ad or 50 percent of a much smaller cigarette pack.

"We were pleasantly surprised that the levels of negative emotion were equivalent between those two conditions," Niederdeppe said. "It suggests that 20 percent coverage on an advertisement is a high enough threshold to create the negative emotion."

The Food and Drug Administration, which funded the study through its Center for Tobacco Products, will consult this research as it considers revising the current surgeon general warnings - text-only warnings that have not been changed in nearly 40 years.

Credit: 
Cornell University

Vitamin C may reduce harm to infants' lungs caused by smoking during pregnancy

Image: Vitamin C may reduce harm to the lungs of infants born to women who smoke while pregnant. Credit: ATS

Dec. 7, 2018--Vitamin C may reduce the harm done to lungs in infants born to mothers who smoke during their pregnancy, according to a randomized, controlled trial published online in the American Thoracic Society's American Journal of Respiratory and Critical Care Medicine.

In "Oral Vitamin C (500 mg/day) to Pregnant Smokers Improves Infant Airway Function at 3 Months: A Randomized Trial," Cindy T. McEvoy, MD, MCR, and her co-authors report that at three months of age, the infants whose mothers took 500 mg of vitamin C in addition to their prenatal vitamin had significantly better forced expiratory flows (FEFs). FEFs measure how fast air can be exhaled from the lung and are an important measure of lung function because they can detect airway obstruction.

The researchers also discovered an association between the infant FEFs and a genetic variant some of the mothers possessed that appeared to amplify the negative impact of nicotine on the babies before they were born. Other studies have linked this genetic factor, specifically for the α5 nicotinic acetylcholine receptor, to increased risk of lung cancer and obstructive lung disease.

"Smoking during pregnancy reflects the highly addictive nature of nicotine that disproportionately affects the most vulnerable populations," said Dr. McEvoy, lead study author and professor of pediatrics at Oregon Health & Science University. "Finding a way to help infants exposed to smoking and nicotine in utero recognizes the unique dangers posed by a highly advertised, addictive product and the lifetime effects on offspring who did not choose to be exposed."

In a previous study, the authors had shown that 72 hours after birth, babies of mothers who smoked had better lung function if their mothers were randomized to vitamin C (500 mg/day) during their pregnancies compared to those born to mothers who smoked and were randomized to placebo.

That study used passive methods to measure lung function, and the authors note that FEFs provide a more direct assessment of airway function and are similar to methods used to diagnose lung disease in adults and older children.

In the current study, 251 pregnant women who smoked were randomly assigned at 13 to 23 weeks of gestation to either receive vitamin C (125 women) or a placebo (126 women). Smoking was defined as having had one or more cigarettes in the last week. All participants received smoking cessation counseling throughout the study, and about 10 percent of the women quit smoking during the study.

The researchers wrote that the study results support the hypothesis that oxidative stress caused by cigarette smoking reduces the amount of ascorbic acid (vitamin C) available to the body. At the time they enrolled in the study, the women had lower levels of ascorbic acid than have been reported among women who do not smoke.

Those levels rose in study participants who received vitamin C, becoming comparable to those of women who do not smoke.

Infants in this study will continue to be followed to track their lung function and respiratory outcomes. The authors believe that future trials of vitamin C supplementation in pregnant smokers should determine whether the benefits are greater if the supplementation starts earlier and continues postnatally in the babies themselves.

Summing up the findings of the current study, Dr. McEvoy said that a relatively low dosage of vitamin C may present "a safe and inexpensive intervention that has the potential to help lung health of millions of infants worldwide."

However, she added, helping mothers quit smoking should remain the primary goal for health professionals and public health officials. "Although vitamin C supplementation may protect to some extent the lungs of babies born to mothers who smoke during pregnancy, those children will still be at greater risk for obesity, behavioral disorders and other serious health issues," she said.

Credit: 
American Thoracic Society

In times of low unemployment, nursing home quality suffers

WASHINGTON -- The low unemployment rate in the U.S. -- which fell to a 49-year low in September and October -- is good news for many people, but perhaps not for residents of nursing homes. A Georgetown University Medical Center (GUMC) study found that quality of care in nursing homes improves during periods of recession and worsens when the economy is good.

The reason, according to the study, published in The Gerontologist, is likely linked to how the strength of the economy affects the ability of nursing homes to maintain adequate staffing levels and retain employees.

The study found that when staffing is tight during periods of low unemployment, nursing homes are more likely to show signs of inadequate care. Nursing home care is highly labor-intensive and delivered mainly by nurses and nurse aides. Most nursing home residents have cognitive dysfunction or physical impairment and require 24/7 care; providing this care can be physically and mentally taxing, making it difficult for nursing homes to hire and retain staff.

"During economic downturns, many people are willing to take positions with work environments they may not prefer because there aren't many options. But when the economy is good, there are plenty of employment opportunities and taking a nursing home job may not be that attractive," says the study's principal investigator, Sean Shenghsiu Huang, PhD, assistant professor in the Department of Health Systems Administration at GUMC's School of Nursing & Health Studies.

This is among the first known studies to examine whether fluctuations in business cycles (economic expansions and recessions) affect the quality of nursing home care, nursing staff levels, and turnover/retention of staff. The study examines more than a decade of records.

Data from 2001 through 2015 were drawn from multiple sources, including state annual recertification records for all Medicare- and Medicaid-certified nursing homes (about 15,000 facilities) and county-level unemployment rates from the Bureau of Labor Statistics. This period spans two economic expansions and two contractions. Statistical models were estimated to determine the effect of the unemployment rate on nursing home quality and staffing outcomes.
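As a rough illustration of estimating such an effect, the sketch below fits an ordinary least squares model on synthetic county-level data; the variables, coefficients and model form are invented for illustration and are not the authors' actual specification.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic county-year observations: an unemployment rate (%) and a
# nursing home quality score. The positive coefficient on unemployment
# is built into the fake data solely to mirror the paper's direction
# of effect.
n = 500
unemployment = rng.uniform(3, 12, n)
year_trend = rng.uniform(0, 1, n)          # stand-in control variable
quality = 50 + 0.8 * unemployment + 2.0 * year_trend + rng.normal(0, 1, n)

# Ordinary least squares: quality ~ intercept + unemployment + control.
design = np.column_stack([np.ones(n), unemployment, year_trend])
coef, *_ = np.linalg.lstsq(design, quality, rcond=None)
print(f"estimated effect of a 1-point unemployment rise: {coef[1]:.2f}")
```

A panel study like this one would add facility and year fixed effects and cluster standard errors; the sketch only shows the basic regression of quality on the unemployment rate.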

Researchers found that higher unemployment rates are associated with a statistically significant improvement in quality. Nursing homes were found to be more compliant with health regulations during periods of higher unemployment. And nursing home residents, on average, were less likely to have pressure ulcers, be physically restrained, or have significant weight loss -- all measures of care quality.

"It is clear from our data that as unemployment rates increased, nursing home quality was higher as fewer residents would develop pressure ulcers, be restrained, and experience weight loss," Huang says. "This is likely due to nursing home staff. Higher unemployment rates are linked to higher nursing staff levels. In these recessions, nursing homes were better able to retain their staff and reduce turnover."

The study also found that when unemployment rates were low, nursing homes had lower nursing staff levels, higher employee turnover, and lower staff retention rates. Because most care is provided by nurses and nurse aides, maintaining an adequate and stable workforce is important for delivering high-quality care, the researchers say.

For example, high turnover of staff inhibits the ability of nursing homes to consistently assign staff to the same resident, a practice that is associated with quality care. However, nursing homes have high turnover rates. Given today's low unemployment rates, it will be challenging to maintain or even attempt to lower the turnover rates, say the investigators.

Any efforts to address this issue need to target the workforce challenges nursing homes face during economic expansions, the authors say. "The solution lies with changes to federal and state policy, such as measures to increase reimbursement for nursing home care with the goal of paying staff enough to make these positions attractive," Huang says. "In general, the work environment offered by nursing homes is not considered desirable -- and this situation, especially in today's economy, needs to be addressed through better compensation and benefits."

Any effort to improve the compensation and benefits of nursing home workers would require efforts from federal and state policymakers as almost three quarters of nursing home residents are funded by Medicare and Medicaid, he says.

"Policymakers and researchers have long been concerned about nursing home quality, and this study suggests strong action is needed now." Huang says.

Credit: 
Georgetown University Medical Center

Elucidating protein-protein interactions & designing small molecule inhibitors

Proteins are polymers of amino acids linked by peptide bonds, and they are one of the major classes of biomolecules in living organisms. Proteins are involved in various cellular functions and events such as cell signaling, cell adhesion, metabolic reactions, and the generation of immune responses.

Proteins are known to interact physicochemically with other proteins to maintain regular physiological functions. These interactions are known as protein-protein interactions (PPIs). Any malfunction in proteins disturbs homeostasis and, ultimately, can disrupt the physiological functions of the body. In response to various environmental and biochemical stimuli, a normal protein can be mutated and begin interacting with another protein, forming anomalous PPIs. Similarly, a pathogenic protein may be formed that becomes an integral part of pathogenesis, with detrimental effects on the host cell. In brief, PPIs are critical to cell sustenance.

Over the past several years, experimental and computational approaches have been applied to PPIs to determine their interactions, their abnormalities, and the various ways they lead to disease or disruption of cell function.

This detailed review by Krishna Mohan Poluri and colleagues gives readers insight into methods that help researchers understand the processes in which PPIs are involved in various clinical diseases and how these PPIs can be detected computationally. The review covers computational methods used to predict, store and visualize PPIs. This is followed by an explanation of how these computational approaches decode factors such as the source of infectious agents or the specific disease states that affect PPIs. The authors then focus on computational methods for designing novel small-molecule inhibitors that can disrupt protein-protein interactions and act efficiently against the effects of debilitating diseases. The researchers provide useful charts listing different computational tools and databases, which can help readers select methods for their own research projects. A summary of the challenges faced by researchers investigating PPIs computationally is presented in the concluding remarks.

Credit: 
Bentham Science Publishers

Harnessing the power of 'spin orbit' coupling in silicon: Scaling up quantum computation

Image: An artist's impression of spin-orbit coupling of atom qubits. Credit: CQC2T. Illustration: Tony Melov

Australian scientists have investigated new directions for scaling up qubits - utilising the spin-orbit coupling of atom qubits - adding a new suite of tools to the armoury.

Spin-orbit coupling, the coupling of a qubit's orbital and spin degrees of freedom, allows manipulation of the qubit via electric, rather than magnetic, fields. Using the electric dipole coupling between qubits means they can be placed further apart, providing flexibility in the chip fabrication process.

In one of these approaches, published in Science Advances, a team of scientists led by UNSW Professor Sven Rogge investigated the spin-orbit coupling of a boron atom in silicon.

"Single boron atoms in silicon are a relatively unexplored quantum system, but our research has shown that spin-orbit coupling provides many advantages for scaling up to a large number of qubits in quantum computing" says Professor Rogge, Program Manager at the Centre for Quantum Computation and Communication Technology (CQC2T).

Following on from earlier results from the UNSW team, published last month in Physical Review X, Rogge's group has now focused on applying fast read-out of the spin state (1 or 0) of just two boron atoms in an extremely compact circuit all hosted in a commercial transistor.

"Boron atoms in silicon couple efficiently to electric fields, enabling rapid qubit manipulation and qubit coupling over large distances. The electrical interaction also allows coupling to other quantum systems, opening up the prospects of hybrid quantum systems," says Rogge.

Another piece of recent research by Professor Michelle Simmons' team at UNSW has also highlighted the role of spin-orbit coupling in atom-based qubits in silicon, this time with phosphorus atom qubits. The research was recently published in npj Quantum Information.

The research revealed surprising results. For electrons in silicon -- and in particular those bound to phosphorus donor qubits -- spin-orbit coupling was commonly regarded as weak, giving rise to seconds-long spin lifetimes. However, the latest results revealed a previously unknown coupling of the electron spin to the electric fields typically created by control electrodes in device architectures.

"By careful alignment of the external magnetic field with the electric fields in an atomically engineered device, we found a means to extend these spin lifetimes to minutes," says Professor Michelle Simmons, Director, CQC2T.

"Given the long spin coherence times and the technological benefits of silicon, this newly discovered coupling of the donor spin with electric fields provides a pathway for electrically-driven spin resonance techniques, promising high qubit selectivity," says Simmons.

Both results highlight the benefits of understanding and controlling spin-orbit coupling for large-scale quantum computing architectures.

Commercialising silicon quantum computing IP in Australia

Since May 2017, Australia's first quantum computing company, Silicon Quantum Computing Pty Limited (SQC), has been working to create and commercialise a quantum computer based on a suite of intellectual property developed at the Australian Centre of Excellence for Quantum Computation and Communication Technology (CQC2T). Its goal is to produce a 10-qubit prototype device in silicon by 2022 as the forerunner to a commercial scale silicon-based quantum computer.

As well as developing its own proprietary technology and intellectual property, SQC will continue to work with CQC2T and other participants in the Australian and International Quantum Computing ecosystems, to build and develop a silicon quantum computing industry in Australia and, ultimately, to bring its products and services to global markets.

Credit: 
Centre for Quantum Computation & Communication Technology

Study upends timeline for Iroquois history

ITHACA, N.Y. - New research from Cornell University raises questions about the timing and nature of early interactions between indigenous people and Europeans in North America.

Until now, it has been assumed that the presence of European trade goods, such as metals and glass beads, provides a timeline for indigenous peoples and settlements in the 15th and 16th centuries. The new research suggests this may be a mistake in cases where there was no direct and intensive exchange between the two groups.

Radiocarbon dating and tree-ring evidence show that three major indigenous sites in Ontario, Canada, conventionally dated to 1450-1550 - because no or very few European goods were recovered there - are in fact 50-100 years more recent. This dates the sites to the worst period of the Little Ice Age, around 1600.

"This seems extraordinary: Given this was only 400 years ago, how can we have been wrong by as much as 25 percent?" said first author Sturt Manning, professor of classical archaeology.

Manning is lead author of "Radiocarbon Re-dating of Contact-era Iroquoian History in Northeastern North America," published in Science Advances. Other Cornell authors include Cornell Tree-Ring Laboratory senior researcher Carol Griggs and doctoral student Samantha Sanft.

Previously, dates in the early contact period were based on the absence - and then presence - of types of glass beads and other European trade goods in excavated sites, along with shifts in material culture, such as changes in ceramic designs. Dates were assigned according to "time transgression," the assumption that when European goods are found in one place with a confirmed date associated with them, that same date can be used for other places where those trade goods are found.

"But goods don't get distributed evenly within societies or across distances," said Manning. "This is a vast area with complex local societies and economies, so the concept that everybody gets the same thing necessarily all at once is a bit ludicrous in retrospect."

The team's chronological findings dramatically rewrite how history has been understood in the region. The period of first European contact, rather than following major changes in Iroquoian society, can now be seen to coincide with those changes. This suggests that contact-era transformations happened much later than previously thought.

"Of course, we've dated only one site sequence, and there are many more," Manning said. "What this paper really shows is we now need to reassess all those site sequences where there's not a clear historical link or association."

Credit: 
Cornell University

Computers can 'spot the difference' between healthy brains and people with Dissociative Identity Disorder

Machine-learning and neuroimaging techniques have been used to accurately distinguish between individuals with Dissociative Identity Disorder (DID) and healthy individuals, on the basis of their brain structure, in new research part funded by the NIHR Maudsley Biomedical Research Centre and published in the British Journal of Psychiatry.

Researchers performed MRI (magnetic resonance imaging) brain scans on 75 female study participants -- 32 with independently confirmed diagnoses of DID and 43 who were healthy controls. The two groups were carefully matched for demographics including age, years of education and ancestry.

Using machine-learning techniques to recognise patterns in the brain scans, the researchers were able to discriminate between the two groups with an overall accuracy of 73%, significantly higher than the level of accuracy you would expect by chance.
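One way to check that 73% accuracy beats chance is an exact binomial test against the majority-class baseline (43 healthy controls out of 75, about 57%); the assumption that 73% corresponds to 55 correct classifications is an illustrative reconstruction, not a figure reported in the paper.

```python
from math import comb

n = 75            # study participants (32 DID, 43 controls)
correct = 55      # assumed: 73% accuracy on 75 scans is about 55 correct
p0 = 43 / 75      # chance level if one always guesses "healthy control"

# One-sided exact binomial test: the probability of doing at least this
# well if the classifier were no better than the majority-class guess.
p_value = sum(
    comb(n, k) * p0**k * (1 - p0) ** (n - k) for k in range(correct, n + 1)
)
print(f"P(>= {correct}/{n} correct by chance) = {p_value:.4f}")
```

The resulting p-value falls well below 0.05, which is the sense in which 73% is "significantly higher than the level of accuracy you would expect by chance."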

This research, using the largest ever sample of individuals with DID in a brain imaging study, is the first to demonstrate that individuals with DID can be distinguished from healthy individuals on the basis of their brain structure.

DID, formerly known as 'multiple personality disorder', is one of the most disputed and controversial mental health disorders, with serious problems of underdiagnosis and misdiagnosis. Many patients with DID share a history of years of misdiagnosis, ineffective pharmacological treatment and several hospitalisations.

It is the most severe of all dissociative disorders, involving multiple identity states and recurrent amnesia. Dissociative disorders may ensue when dissociation is used as a way of surviving complex and sustained trauma during childhood, when the brain and personality are still developing.

Dr Simone Reinders, Senior Research Associate at the Department of Psychological Medicine, Institute of Psychiatry, Psychology & Neuroscience, King's College London, led the multi-centre study, which involved two centres in the Netherlands - the University Medical Centre in Groningen and the Amsterdam Medical Centre - and one in Switzerland, the University Hospital in Zurich.

Commenting on the research, Dr Reinders said: "DID diagnosis is controversial and individuals with DID are often misdiagnosed. From the moment of seeking treatment for symptoms, to the time of an accurate diagnosis of DID, individuals receive an average of four misdiagnoses and spend seven years in mental health services.

"The findings of our present study are important because they provide the first evidence of a biological basis for distinguishing between individuals with DID and healthy individuals. Ultimately, the application of pattern recognition techniques could prevent unnecessary suffering through earlier and more accurate diagnosis, facilitating faster and more targeted therapeutic interventions."

Credit: 
King's College London

More patient family-provider communication could mean fewer errors

BOSTON (December 6, 2018) - New research from Boston Children's Hospital finds that harmful medical errors decreased by 38 percent following an intervention to improve communication between healthcare providers and patients and families. The study was led by Alisa Khan, MD, MPH, a pediatric hospitalist and researcher at Boston Children's Hospital and Harvard Medical School, and is published today in the British Medical Journal.

The intervention, Patient and Family Centered I-PASS, changes verbal and written communication during morning rounds, emphasizing family engagement, structured communication and health literacy. Patient and Family Centered I-PASS rounds occur at the bedside with patients and families present and actively engaged, with the medical team minimizing medical jargon and encouraging families to share concerns, ask questions and "read back" their understanding of the meeting.

"Our study suggests that engaging families in hospital communication doesn't just feel like the right thing to do, it is the right thing to do. It can actually make care safer," says Khan. "This study also provides an opportunity for nurses to be fully engaged in the design and implementation of a family-centered team communication intervention."

In addition to improvements in safety, the research finds that multiple aspects of the patient experience improve. Families and nurses are significantly more engaged, and rounds take no longer and involve no less education of resident-physicians and medical students.

"Our prior research has shown that improving communication between providers improves safety. The present study suggests that improving communication with families may also be a critical means of improving patient safety, one that has previously been overlooked," said senior author Christopher Landrigan, MD, MPH, Research Director of Inpatient Pediatrics at Boston Children's Hospital and Professor of Pediatrics at Harvard Medical School. "Family centered rounds are increasingly considered a standard of care in pediatrics, but until now, evidence that they could actually improve safety was limited," Landrigan added.

Credit: 
Boston Children's Hospital

3-D printed glucose biosensors

Image: Arda Gozen, assistant professor, WSU School of Mechanical and Materials Engineering, in the Manufacturing Processes and Machinery Lab. Credit: WSU

PULLMAN, Wash. - A 3D-printed glucose biosensor for use in wearable monitors has been created by Washington State University researchers.

The work could lead to improved glucose monitors for millions of people who suffer from diabetes.

Led by Arda Gozen and Yuehe Lin, faculty in the School of Mechanical and Materials Engineering, the research has been published in the journal Analytica Chimica Acta.

People with diabetes most commonly monitor their disease with glucose meters that require constant finger pricking. Continuous glucose monitoring systems are an alternative, but they are not cost effective.

Researchers have been working to develop wearable, flexible electronics that can conform to patients' skin and monitor the glucose in body fluids, such as in sweat. To build such sensors, manufacturers have used traditional manufacturing strategies, such as photolithography or screen printing. While these methods work, they have several drawbacks, including requiring the use of harmful chemicals and expensive cleanroom processing. They also create a lot of waste.

Using 3D printing, the WSU research team developed a glucose monitor with much better stability and sensitivity than those manufactured through traditional methods.

The researchers used a method called direct ink writing (DIW), which involves printing "inks" from nozzles to create intricate, precise designs at tiny scales. They printed an electrically conductive nanoscale material to create flexible electrodes. The WSU team's technique allows precise application of the material, resulting in a uniform surface with fewer defects, which increases the sensor's sensitivity. The researchers found that their 3D-printed sensors picked up glucose signals better than traditionally produced electrodes.

Because it uses 3D printing, their system is also more customizable for the variety of people's biology.

"3D printing can enable manufacturing of biosensors tailored specifically to individual patients," said Gozen.

Because 3D printing uses only the amount of material needed, the process also produces less waste than traditional manufacturing methods.

"This can potentially bring down the cost," said Gozen.

For large-scale use, the printed biosensors will need to be integrated with electronic components on a wearable platform. But manufacturers could use the same 3D printer nozzles used for printing the sensors to print electronics and other components of a wearable medical device, helping to consolidate manufacturing processes and reduce costs even further, he added.

"Our 3D-printed glucose sensor will be used as a wearable sensor for replacing painful finger pricking. Since this is a noninvasive, needleless technique for glucose monitoring, it will be easier for children's glucose monitoring," said Lin.

The team is now working to integrate the sensors into a packaged system that can be used as a wearable device for long-term glucose-monitoring.

Credit: 
Washington State University

Pot withdrawal eased for dependent users

New Haven, Conn. -- A new drug can help people diagnosed with cannabis use disorder reduce withdrawal symptoms and marijuana use, a new Yale-led study published Dec. 6 in the journal Lancet Psychiatry shows.

The double-blind, placebo-controlled study shows marijuana use declined among subjects who were administered the new drug - an inhibitor of an enzyme that breaks down the brain's own cannabis-like chemicals - compared to those receiving a placebo. Subjects who took the drug also reported fewer withdrawal symptoms and exhibited better sleep patterns, which are disrupted in cannabis-dependent individuals attempting to quit.

"With an increase of marijuana legalization efforts, it is reasonable to expect an increase in demand for treatment, and right now we don't have any medications to help individuals trying to quit," said Deepak Cyril D'Souza, professor of psychiatry at Yale and corresponding author of the study.

Cannabis use becomes a disorder when a person cannot stop using the drug even though it interferes with many aspects of his or her life. Cannabis use disorder (CUD) is marked by social and functional impairment, risky use, tolerance, and withdrawal symptoms, according to DSM-5, the Diagnostic and Statistical Manual of Mental Disorders developed by the American Psychiatric Association. Withdrawal symptoms include craving for marijuana, irritability, anger, depression, insomnia, and decreases in appetite and weight. In 2015, about 4 million people in the United States met the diagnostic criteria for a cannabis use disorder, and almost 150,000 voluntarily sought treatment for their cannabis use.

According to recent national data, approximately one-third of all current cannabis users meet diagnostic criteria for CUD.

For the new study, D'Souza and colleagues recruited male daily cannabis users. Seventy subjects completed the trial, with 46 receiving the drug and the remainder receiving placebo. All subjects in the study underwent forced withdrawal on the inpatient research unit for the first week and continued receiving treatment for three weeks as outpatients after release from the hospital.

A reduction in cannabis use was confirmed by both self-report and urine drug testing.

Researchers have tried many different drugs in an attempt to reduce cannabis withdrawal symptoms and increase abstinence in those trying to quit, but none have been consistently successful or well tolerated, D'Souza said.

The new drug works by inhibiting fatty acid amide hydrolase (FAAH), the enzyme that degrades anandamide, an endocannabinoid produced naturally by the brain that acts on the brain's cannabis receptors.

"Anandamide is to cannabis as endorphins are to heroin," D'Souza said.

A larger multicenter trial of the new drug, funded by the U.S. National Institute on Drug Abuse, is currently underway.

Credit: 
Yale University

Protecting cell powerhouse paves way to better treatment of acute kidney injury

image: Drs. Zheng Dong and Qingqing Wei in the Cellular Biology and Anatomy lab at the Medical College of Georgia at Augusta University

Image: 
Phil Jones, Senior Photographer, Augusta University

AUGUSTA, Ga. (Dec. 6, 2018) - For the first time, scientists have described the body's natural mechanism for temporarily protecting the powerhouses of kidney cells when injury or disease means they aren't getting enough blood or oxygen.

Powerhouses, called mitochondria, which provide fuel for our cells, start to fragment, likely as one of the first steps in the kidney cell damage and death that often result from an acute kidney injury, says Dr. Zheng Dong, cellular biologist in the Department of Cellular Biology and Anatomy at the Medical College of Georgia at Augusta University.

Dong and his colleagues appear to have delineated the natural mitochondrial protection pathway in kidney cells and say it's a logical therapeutic target for treating acute kidney injury.

"We know there is a natural protective mechanism. Maybe we need to upregulate it," says Dong, also a senior research career scientist and director of research development at the Charlie Norwood Veterans Affairs Medical Center in Augusta. Dong is senior author of the study published in the Journal of Clinical Investigation.

In fact, drugs that target at least one key part of the pathway already have been studied in patients experiencing anemia - a deficiency in the red blood cells that carry oxygen - because of chronic kidney disease.

The scientists started by examining a large number of microRNAs, small RNAs known to regulate gene expression. They found one, microRNA-668, consistently elevated both in patients with an acute kidney injury and in animal models of the condition, which is common in patients in intensive care, particularly older patients.

Mitochondrial fission and fusion are opposing processes, but their balance is key to a healthy cell powerhouse. They are governed by two distinct classes of proteins, which emerging evidence suggests are regulated by microRNAs.

Proteins like mitochondrial protein 18 kDa, or MTP18, for example, have already been implicated in powerhouse fission, at least in periods of stress. The MCG scientists and their collaborators have now confirmed that it is a direct target of microRNA-668.

But the pathway has at least one earlier point of action: hypoxia-inducible factor-1, or HIF-1, a transcription factor that increases when oxygen levels decrease to help cells adjust by controlling expression of genes that can protect them.

They found that ischemic acute kidney injury induces HIF-1, which upregulates microRNA-668, which in turn suppresses MTP18; the result is kidney cell protection.

One key discovery was an HIF-1 binding site in the promoter of the microRNA-668 gene, along with the related finding that too little HIF-1 reduced the expression of microRNA-668.

"The microRNA-668 gene is a new target gene for HIF-1, which may help explain some of HIF-1's protective function," Dong says.

When scientists restricted microRNA-668, more kidney cells died. Conversely, giving a mimic of microRNA-668 - to increase its presence - protected kidney cells. More microRNA-668 also meant less MTP18 and vice versa.

"We don't know what MTP18 does normally, but now we know what it does when stressed," Dong says. "It induces fragmentation of the mitochondria."

They've shown that increased levels of microRNA-668 can prevent most of that damage so the cell can keep functioning until blood and oxygen are restored. "This is like a temporary mechanism for cell survival," Dong says.

One way physicians might one day improve the odds of mitochondrial and kidney survival may be a class of drugs called PHD inhibitors, which have already been studied in chronic kidney disease. PHD - prolyl hydroxylase - is an enzyme that induces the degradation of protective HIF-1, and Dong suspects PHD inhibitors could benefit patients with acute kidney injury as well. A microRNA-668 mimic, similar to that used in the studies, might one day be another option.

Right now there aren't any targeted therapies for acute kidney injury, says Dong, rather supportive therapies like hydration, possible short-term dialysis and addressing the injury cause.

With an acute kidney injury, kidney function deteriorates in a few hours or days. It can result from a literal blow to the kidney, in a fall or car accident, or from dehydration in an overzealous student athlete. In the face of general good health, most patients recover fully and quickly, Dong says.

However, acute kidney injury mostly occurs in people who already have another medical problem like diabetes. In fact, most are in the hospital when it happens, with problems like bleeding or shock, failure of other organs like the heart, even an overdose of over-the-counter nonsteroidal anti-inflammatories for problems like a cold or flu, according to the National Kidney Foundation.

Dong's lab was the first to show that as a class of molecules, microRNAs could play an important role in reducing acute kidney injury. They reported in 2010 that deletion of a key enzyme for microRNA production from kidney tubules made mice resistant to ischemia-induced acute kidney injury, suggesting an important destructive role for at least some microRNAs.

Subsequent work by Dong and colleagues led to identification of specific microRNAs with significant changes in expression in the face of ischemic acute kidney injury. Those studies found some microRNAs definitely promote fission but others seem to help protect kidney cells.

Just what microRNA-668 does has been largely unknown other than another recent report implicating it in protecting human breast cancer cells from radiation therapy.

Credit: 
Medical College of Georgia at Augusta University

Unknown treasure trove of planets found hiding in dust

image: The Taurus Molecular Cloud, pictured here by ESA's Herschel Space Observatory, is a star-forming region about 450 light-years away. The image frame covers roughly 14 by 16 light-years and shows the glow of cosmic dust in the interstellar material that pervades the cloud, revealing an intricate pattern of filaments dotted with a few compact, bright cores -- the seeds of future stars.

Image: 
ESA/Herschel/PACS, SPIRE/Gould Belt survey Key Programme/Palmeirim <em>et al</em>. 2013

"Super-Earths" and Neptune-sized planets could be forming around young stars in much greater numbers than scientists thought, new research by an international team of astronomers suggests.

Observing a sampling of young stars in a star-forming region in the constellation Taurus, researchers found many of them to be surrounded by structures that can best be explained as traces created by invisible, young planets in the making. The research, published in the Astrophysical Journal, helps scientists better understand how our own solar system came to be.

Some 4.6 billion years ago, our solar system was a roiling, billowing swirl of gas and dust surrounding our newborn sun. At the early stages, this so-called protoplanetary disk had no discernible features, but soon, parts of it began to coalesce into clumps of matter - the future planets. As they picked up new material along their trip around the sun, they grew and started to plow patterns of gaps and rings into the disk from which they formed. Over time, the dusty disk gave way to the relatively orderly arrangement we know today, consisting of planets, moons, asteroids and the occasional comet.

Scientists base this scenario of how our solar system came to be on observations of protoplanetary disks around other stars that are young enough to currently be in the process of birthing planets. Using the Atacama Large Millimeter Array, or ALMA, comprising 45 radio antennas in Chile's Atacama Desert, the team performed a survey of young stars in the Taurus star-forming region, a vast cloud of gas and dust located a modest 450 light-years from Earth. When the researchers imaged 32 stars surrounded by protoplanetary disks, they found that 12 of them - roughly 38 percent - have rings and gaps, structures that, according to the team's measurements and calculations, can best be explained by the presence of nascent planets.
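As a quick sanity check on the survey statistics, the detection fraction and its rough statistical uncertainty can be computed directly (an illustrative back-of-envelope calculation; the binomial error bar below is not quoted from the paper):

```python
import math

# Survey statistics from the ALMA Taurus study:
# 32 disks imaged, 12 of which show rings and gaps.
n_imaged = 32
n_with_rings = 12

fraction = n_with_rings / n_imaged
# Simple binomial standard error on that fraction: sqrt(p(1-p)/n).
std_err = math.sqrt(fraction * (1 - fraction) / n_imaged)

print(f"fraction = {fraction:.1%} +/- {std_err:.1%}")
# prints: fraction = 37.5% +/- 8.6%
```

With a sample of only 32 disks, the roughly ±9-point uncertainty is a reminder that the "about 4 in 10" figure is a first estimate rather than a precise occurrence rate.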

"This is fascinating because it is the first time that exoplanet statistics, which suggest that super-Earths and Neptunes are the most common type of planets, coincide with observations of protoplanetary disks," said the paper's lead author, Feng Long, a doctoral student at the Kavli Institute for Astronomy and Astrophysics at Peking University in Beijing, China.

While some protoplanetary disks appear as uniform, pancake-like objects lacking any features or patterns, concentric bright rings separated by gaps have been observed in others. Because previous surveys focused on the brightest of these objects, which are easier to find, it was unclear how common disks with ring and gap structures really are in the universe. This study presents the results of the first unbiased survey, in that the target disks were selected independently of their brightness - in other words, the researchers did not know whether any of their targets had ring structures when they selected them for the survey.

"Most previous observations had been targeted to detect the presence of very massive planets, which we know are rare, that had carved out large inner holes or gaps in bright disks," said the paper's second author Paola Pinilla, a NASA Hubble Fellow at the University of Arizona's Steward Observatory. "While massive planets had been inferred in some of these bright disks, little had been known about the fainter disks."

The team, which also includes Nathan Hendler and Ilaria Pascucci at the UA's Lunar and Planetary Laboratory, measured the properties of the rings and gaps observed with ALMA and analyzed the data to evaluate possible mechanisms that could cause them. While these structures may be carved by planets, previous research has suggested that they may also be created by other effects. In one commonly suggested scenario, so-called ice lines - boundaries where the chemistry of the dust particles changes with distance from the host star - together with the star's magnetic field, create pressure variations across the disk. These effects can manifest as rings and gaps.

The researchers performed analyses to test these alternative explanations and could not establish any correlations between stellar properties and the patterns of gaps and rings they observed.

"We can therefore rule out the commonly proposed idea of ice lines causing the rings and gaps," Pinilla said. "Our findings leave nascent planets as the most likely cause of the patterns we observed, although some other processes may also be at work."

Since detecting the individual planets directly is impossible because of the overwhelming brightness of the host star, the team performed calculations to get an idea of the kinds of planets that might be forming in the Taurus star-forming region. According to the findings, Neptune-sized gas planets or so-called super-Earths - terrestrial planets of up to 20 Earth masses - should be the most common. Only two of the observed disks could potentially harbor behemoths rivaling Jupiter, the largest planet in the solar system.

"Since most of the current exoplanet surveys can't penetrate the thick dust of protoplanetary disks, all exoplanets, with one exception, have been detected in more evolved systems where a disk is no longer present," Pinilla said.

Going forward, the research group plans to move ALMA's antennas farther apart, which should increase the array's resolution to around five astronomical units (one AU equals the average distance between the Earth and the sun), and to observe at other frequencies that are sensitive to different types of dust.
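The arithmetic behind that resolution target can be sketched with the standard small-angle relation (an illustrative calculation, not from the paper): because 1 AU at a distance of 1 parsec subtends exactly 1 arcsecond, resolving 5 AU at the roughly 450-light-year distance of Taurus corresponds to an angular resolution of a few tens of milliarcseconds.

```python
# Angular resolution needed to resolve 5 AU at the Taurus distance of ~450 light-years.
LY_PER_PARSEC = 3.26156

distance_pc = 450 / LY_PER_PARSEC   # ~138 parsecs

# By the small-angle definition of the parsec:
# angle in arcseconds = size in AU / distance in parsecs.
size_au = 5.0
theta_arcsec = size_au / distance_pc

print(f"required resolution ~ {theta_arcsec * 1000:.0f} milliarcseconds")
# prints: required resolution ~ 36 milliarcseconds
```

That scale, tens of milliarcseconds, is within reach of ALMA in its most extended antenna configurations, which is why spreading the antennas farther apart is the planned next step.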

"Our results are an exciting step in understanding this key phase of planet formation," Long said, "and by making these adjustments, we are hoping to better understand the origins of the rings and gaps."

Credit: 
University of Arizona

Food system organizations must strengthen their operations to safeguard against potential threats

image: Ten factors identified through semi-structured interviews with food system stakeholders in Baltimore, MD that may impact food system organizational resilience, mapped along the resilience curve.

Image: 
Johns Hopkins Center for a Livable Future

Philadelphia, December 6, 2018 - Food systems face growing threats as extreme weather events become more common and more extreme due to climate change. Events such as Hurricane Harvey in Texas and Hurricane Maria in Puerto Rico in 2017 have drawn attention to the havoc natural disasters can wreak. A new study from the Johns Hopkins Bloomberg School of Public Health, published in the Journal of the Academy of Nutrition and Dietetics, highlights characteristics of organizations involved in the food system that may lead them to be more prepared to respond to such disasters, and opportunities for local, state, and federal organizations to improve resilience across the urban food system.

Businesses and organizations involved in growing, distributing, and supplying food must be able to withstand and rebound from acute disruptions such as civil unrest and cyber attacks, as well as those with more gradual impact, such as drought, sea level rise, or funding cuts. Policymakers and researchers are in the early stages of considering ways to improve resilience to both natural and human-generated threats across the food system.

Amelie Hecht, a doctoral candidate in the Department of Health Policy and Management at the Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA, wanted to explore the following issues: what factors may be associated with organization-level food system resilience; how might these factors play out in disaster response; and how do they relate to organizations' confidence in their ability to withstand disruptive events?

The research was performed as part of a larger project led by Roni A. Neff, PhD, Assistant Professor, Center for a Livable Future, Department of Environmental Health & Engineering, Department of Health Policy and Management. Dr. Neff and colleagues interviewed representatives of 26 businesses and organizations in Baltimore that supply, distribute, and promote access to food. The organizations were asked about how they have tried to prevent, minimize, and respond to the effects of disruptive events like snowstorms and civil unrest in the past, and how they plan to address similar challenges in the future.

Researchers identified several factors that influence how resilient an organization is in an emergency. They found that the organizations able to recover more quickly had ten characteristics in common: formal emergency planning; staff training; reliable staff attendance; redundancy of food supply, food suppliers, infrastructure, location, and service providers; insurance; and post-event learning. Organizations that were large, well resourced, and affiliated with national or government partners tended to display more of these characteristics.

The authors conclude that a more resilient food system is needed in order to ensure all people have safe and reliable access to food following both acute and longer-term crises. They highlight several critical areas for targeted intervention by local, state, and federal governments, such as creating opportunities for smaller, less-resourced organizations to share information and pool resources. Further research is needed to add to an emerging understanding of the factors that contribute to resilience in order to help food system organizations, researchers, and government officials identify vulnerabilities in their regional food systems and strategies to improve food system resilience in the face of ongoing and growing threats.

Credit: 
Elsevier