Culture

Researchers report a new way to produce curvy electronics

image: A team of researchers led by University of Houston engineer Cunjiang Yu has reported a new way to manufacture curvy, three-dimensional electronics.

Image: 
University of Houston

Contact lenses that can monitor your health as well as correct your eyesight aren't science fiction, but an efficient manufacturing method - finding a way to produce the curved lenses with embedded electronics - has remained elusive.

Until now. A team of researchers from the University of Houston and the University of Colorado Boulder has reported developing a new manufacturing method, known as conformal additive stamp printing, or CAS printing, to produce the lenses, solar cells and other three-dimensional curvy electronics. The work, reported in the journal Nature Electronics, demonstrates the use of the manufacturing technique to produce a number of curvy devices not suited to current production methods. The work is also highlighted by the journal Nature.

"We tested a number of existing techniques to see if they were appropriate for manufacturing curvy electronics," said Cunjiang Yu, Bill D. Cook Associate Professor of Mechanical Engineering at the University of Houston and corresponding author on the paper. "The answer is no. They all had limitations and problems."

Instead, Yu, who is also a principal investigator with the Texas Center for Superconductivity at UH, and his team devised a new method, which they report opens the door to the efficient production of a range of curvy electronic devices, from wearables to optoelectronics, telecommunications and biomedical applications.

"Electronic devices are typically manufactured in planar layouts, but many emerging applications, from optoelectronics to wearables, require three-dimensional curvy structures," the researchers wrote. "However, the fabrication of such structures has proved challenging due, in particular, to the lack of an effective manufacturing technology."

Existing manufacturing technologies, including microfabrication, don't work for curved, three-dimensional electronics because they are inherently designed to produce two-dimensional, flat electronic devices, Yu said. But increasingly, there is a need for electronic devices that require curvy, 3-D shapes, including smart contact lenses, curved imagers, electronic antennas and hemispherical solar cells, among other devices.

These devices are small - ranging in size from millimeters to centimeters - with accuracy within a few microns.

Recognizing that, Yu and the other researchers proposed the new fabrication method, conformal additive stamp printing, or CAS printing.

CAS printing works like this: An elastomeric, or stretchy, balloon is inflated and coated with a sticky substance. It is then used as a stamping medium, pushing down on pre-fabricated electronic devices to pick up the electronics and then print them onto various curvy surfaces. In the paper, the researchers describe using the method to create a variety of curvy devices, including silicon pellets, photodetector arrays, small antennas, hemispherical solar cells and smart contact lenses.

The work was performed using a manual version of the CAS printer, although the researchers also designed an automated version. Yu said that will make it easy to scale up production.

Credit: 
University of Houston

Monkeys like alcohol at low concentrations, but probably not due to the calories

image: Professor Matthias Laska at the Department of Physics, Chemistry and Biology (IFM) at Linköping University

Image: 
Linköping University

Fruit-eating monkeys show a preference for concentrations of alcohol found in fermenting fruit, but do not seem to use alcohol as a source of supplementary calories, according to a study by researchers from Linköping University, Sweden, and the Universidad Veracruzana, Mexico. The findings do not support the idea that human alcoholism originated from a predilection of primates for alcohol-containing overripe fruit.

When overripe fruit is fermented by microbes, alcohol is produced. Some research has suggested that fruit-eating monkeys use this dietary ethanol as a source of supplementary calories. The researchers behind the new study, which is published in Chemical Senses, set out to test this idea.

In a first experiment performed at a field station in Mexico, the researchers presented eight spider monkeys with varying concentrations of ethanol naturally found in fermenting fruit (0.5-3 per cent) and tap water as the alternative. They found that the animals were able to detect ethanol at concentrations as low as 0.5 per cent. In comparison, the detection threshold of humans for this alcohol is 1.34 per cent. The monkeys preferred all ethanol concentrations up to 3 per cent over water.

"These results demonstrate that fruit-eating spider monkeys are extraordinarily sensitive to the taste of ethanol. We also found that they prefer this alcohol when presented at naturally occurring concentrations found in fermenting fruit", says Professor Matthias Laska at the Department of Physics, Chemistry and Biology (IFM) at Linköping University.

In a second experiment, the spider monkeys were given the choice between a sugar solution spiked with ethanol and an equally concentrated sugar solution without ethanol. Here, the animals clearly preferred the ethanol-spiked sugar solution. However, when presented with an ethanol-spiked sugar solution and a higher-concentration sugar solution without ethanol, the animals clearly preferred the pure sugar alternative, even when the sugar-ethanol mixture contained three times more calories.

A similar experiment was performed in which the spider monkeys were given the choice between puréed fruit spiked with ethanol and puréed fruit without ethanol. The tests with sugar solutions and with puréed fruit that were either spiked with ethanol or not suggest that sweetness, and thus carbohydrate content, may be more important for the preferences displayed by the spider monkeys than the calories provided by ethanol.

"The findings, therefore, do not support the idea that dietary ethanol is used by fruit-eating primates as a source of supplementary calories. Similarly, the findings do not support the idea that a predilection of non-human primates for alcohol-containing overripe fruits reflects the evolutionary origin of human alcoholism", says Matthias Laska.

Credit: 
Linköping University

Experts focus on food insecurity and its far-reaching consequences, particularly in vulnerable populations

video: JAND Editor-in-Chief Linda Snetselaar, Ph.D., RDN, LD, FAND, highlights articles from a special issue of the Journal of the Academy of Nutrition and Dietetics on food insecurity.

Image: 
Journal of the Academy of Nutrition and Dietetics

Philadelphia, September 25, 2019 - Food insecurity is a lack of consistent access to enough food for an active, healthy life, according to the US Department of Agriculture. The latest issue of the Journal of the Academy of Nutrition and Dietetics, published by Elsevier, focuses on food insecurity in vulnerable populations including children, youth, college students, and older adults; raises awareness of the consequences of food insecurity for health and wellbeing; and presents strategies for addressing this serious problem.

While hunger is a personal, physical sensation of discomfort, food insecurity refers to a lack of availability of nutritionally adequate and safe foods or the inability to acquire acceptable foods in socially acceptable ways.

"Food insecurity has serious consequences for health and wellbeing, which is why the Journal decided to devote an entire issue to this important and timely topic," explains Editor-in-Chief Linda Snetselaar, PhD, RDN, LD, FAND, Department of Epidemiology, College of Public Health, University of Iowa, Iowa City, IA, USA. This collection of articles contributes to understanding how and why food insecurity is associated with poor wellbeing in families and individuals across the life course. It focuses on food insecurity in vulnerable populations including children, youth, college students, and older adults; food banking and food pantries; and connections between food insecurity and other vulnerabilities in families and seniors.

Although the United States is a high-income country, about 12 percent of the US population experienced food insecurity in 2017, including approximately 30 percent of households with children headed by a single woman and 20 percent of households with children headed by a single man. Other vulnerable groups include college students, women living alone, men living alone, black non-Hispanic and Hispanic heads of households, and households with incomes below 185 percent of the federal poverty line. The consequences associated with food insecurity for adults and especially for children include inadequate intake of key nutrients, overweight in women and some girls, less physical activity, symptoms of depression and risk of suicide in adolescents, poor physical and mental health, behavioral difficulties, and delays in academic and social development in children.

"There are many questions to be answered about how and why food insecurity is associated with such pervasive consequences in families and children and other vulnerable groups such as seniors and how clinical, community and public health nutrition professionals can best help prevent these outcomes," notes Edward A. Frongillo, PhD, Department of Health Promotion, Education, and Behavior, University of South Carolina, Columbia, SC, USA, who explores the scope of the problem and highlights the far-reaching consequences associated with food insecurity.

In this issue, key strategies identified for managing food insecurity in vulnerable populations include:

School-age children: A need for ongoing efforts to improve weekend access to nutritious foods during the summer months for children from food-insecure households.

Federal nutrition assistance programs, such as the National School Lunch Program (NSLP) and the School Breakfast Program, provide school-aged children from low-income households with meals or snacks at a free or reduced price. However, data indicate that food insecurity increased by 13 percent during summer months among NSLP program participants. Jiwoo Lee, PhD, RN, School of Nursing, University of Minnesota, Minneapolis, MN, USA, and colleagues analyzed data from a community-based obesity prevention trial in metropolitan Minnesota and found that 8-12-year-old children from food-insecure households reported less energy intake, fewer servings of whole fruits, and more sugar-sweetened beverages on a weekend day, but not a weekday, when compared with children from food-secure households.

College students: An improved understanding of what it means to experience food insecurity in higher education that can inform how universities support students' basic needs.

College students may find themselves experiencing food insecurity as a result of several factors, including insufficient resources to purchase food, lack of grocery stores on campus, inadequate transportation or cooking facilities, and lack of cooking skills. In a California study conducted by Cindy W. Leung, ScD, MPH, Department of Nutritional Sciences, University of Michigan School of Public Health, Ann Arbor, MI, USA, and colleagues, students discussed several themes related to the psychosocial effects of food insecurity: the stress of food insecurity interfering with daily life; a fear of disappointing family; resentment of students in more stable food and financial situations; an inability to develop meaningful social relationships; sadness from reflecting on food insecurity; feeling hopeless or undeserving of help; and frustration directed at the academic institution for not providing enough support. Students also explored how food insecurity affected their academic performance through physical manifestations of hunger and the mental trade-off between focusing on food and focusing on academics.

Older adults: Implications for dietitians, nutritionists, and health counselors about how to address various challenges encountered by low-income older adults.

Older adults are another vulnerable population. Seung Eun Jung, PhD, RD, Department of Human Nutrition and Hospitality Management, The University of Alabama, Tuscaloosa, AL, USA, and colleagues investigated the complex relationships between self-care capacity, depression, food security, and nutritional status among low-income older adults. They found that inability to afford food combined with limited ability to take care of oneself contribute to an increased self-report of depressive symptoms. This process appears to result in a less favorable nutritional health status among low-income older adults.

Food banking: Continuing the transformation of the charitable food system toward nutrition-focused food banking.

Food banks in the US were initially conceptualized as a community resource for families in need of emergency food assistance, but the charitable food system increasingly serves low-income households on a routine basis, with an estimated 54 percent of clients accessing food assistance for six or more months a year. Marianna S. Wetherill, PhD, MPH, RDN-AP/LD, Hudson College of Public Health, University of Oklahoma Tulsa Schusterman Center, Tulsa, OK, USA, and colleagues interviewed a sample of 30 food bank executive leaders, representing a diverse selection of food banks across the US, between 2015 and 2017 about specific strategies to support nutrition-focused food banking. They highlighted four major needs: building a healthier food inventory at the food bank; enhancing partner agency healthy food access, storage and distribution capacity; nutrition education outreach; and expanding community partnerships and intervention settings for healthy food distribution, including health care and schools. As one of the study participants commented, "If you're going to eliminate hunger, it takes more than just the food banks giving out food ... It takes drilling down to the root cause of hunger and the places where the vulnerable populations are going and providing them with manna - the food."

In conclusion Dr. Leung remarks: "Food insecurity is not a new concept, but it's one that has persisted over decades. The face of food insecurity has continued to change in recent years and so we continue to strive to address this with our research. There is a clear link between food insecurity and poor psychological and mental health and academic achievement among college students, and links with disorders such as obesity and diabetes in adults. We need to support and continue to develop Federal programs that could help to reduce food insecurity, such as help with employment opportunities and health insurance."

Credit: 
Elsevier

T. rex used a stiff skull to eat its prey

image: An artist's rendition of the Tyrannosaurus rex with the 3D imaging showing muscle activation in its head.

Image: 
Illustration courtesy of Brian Engh.

A Tyrannosaurus rex could bite hard enough to shatter the bones of its prey. But how it accomplished this feat without breaking its own skull bones has baffled paleontologists. That's why scientists at the University of Missouri are arguing that the T. rex's skull was stiff, much like the skulls of hyenas and crocodiles, and not flexible like those of snakes and birds, as paleontologists previously thought.

"The T. rex had a skull that's 6 feet long, 5 feet wide and 4 feet high, and bites with the force of about 6 tons," said Kaleb Sellers, a graduate student in the MU School of Medicine. "Previous researchers looked at this from a bone-only perspective without taking into account all of the connections -- ligaments and cartilage -- that really mediate the interactions between the bones."

Using a combination of imaging, anatomy and engineering analysis, the team modeled how two present-day relatives of T. rex -- a gecko and a parrot -- chew, then applied those models to the T. rex skull to observe how the roof of its mouth responded to the stresses and strains of chewing.

"Dinosaurs are like modern-day birds, crocodiles and lizards in that they inherited particular joints in their skulls from fish -- ball and socket joints, much like people's hip joints -- that seem to lend themselves, but not always, to movement like in snakes," said Casey Holliday, an associate professor of anatomy in the MU School of Medicine. "When you put a lot of force on things, there's a tradeoff between movement and stability. Birds and lizards have more movement but less stability. When we applied their individual movements to the T. rex skull, we saw it did not like being wiggled in ways that the lizard and bird skulls do, which suggests more stiffness."

In addition to helping paleontologists with a detailed study of the anatomy of fossilized animals, researchers believe their findings can help advance human and animal medicine by providing better models of how joints and ligaments interact.

"In humans, this can also be applied to how people's jaws work, such as studying how the jaw joint is loaded by stresses and strains during chewing," said Ian Cost, the lead researcher on the study. Cost is an assistant professor at Albright College and a former doctoral student in the MU School of Medicine. "In animals, understanding how those movements occur and joints are loaded will, for instance, help veterinarians better understand how to treat exotic animals such as parrots, which suffer from arthritis in their faces."

Credit: 
University of Missouri-Columbia

Adult fly intestine could help understand intestinal regeneration

image: A fly intestine under pathogenic infection. The green cells are new cells produced during intestinal regeneration. The red signal is p38 activation in IECs upon damage.

Image: 
University of Bristol

Intestinal epithelial cells (IECs) are exposed to diverse types of environmental stresses such as bacteria and toxins, but the mechanisms by which epithelial cells sense stress are not well understood. New research by the universities of Bristol, Heidelberg and the German Cancer Research Center (DKFZ) has found that Nox-ROS-ASK1-MKK3-p38 signaling in IECs integrates various stresses to facilitate intestinal regeneration.

The research, published in Nature Communications today [Wednesday 25 September], used the adult fly intestine, which is remarkably like a human intestine, to understand how IECs sense stress or damage, defend themselves and promote epithelial regeneration.

Stress sensing pathways are activated upon a variety of stresses in fly IECs, but how these pathways are activated and how they promote IEC resilience and intestinal regeneration are not known.

The researchers found that NADPH Oxidase (Nox) in IECs produces reactive oxygen species (ROS) upon stress, but it wasn't fully understood how ROS promote intestinal regeneration. The paper shows that this is mediated in part by Ask1-MKK3-p38 signaling in IECs, which stimulates their production of intestinal stem cell (ISC) mitogens and ISC-mediated regeneration. p38 was previously found to facilitate mammalian intestinal regeneration after damage, but how it senses damage was not understood.

It is still unclear exactly how intestinal epithelial cells sense stress, though the researchers believe Nox may act as the sensor. The study also found that damage activates stress-sensing pathways in fly IECs, but how these pathways affect IEC resilience and intestinal repair is not yet fully understood.

Dr Parthive Patel, Elizabeth Blackwell Institute (EBI) Early Career Fellow in the School of Cellular and Molecular Medicine at the University of Bristol, said: "Our work has potential applications for regenerative medicine. Reactive oxygen species play an important role in tissue regeneration and even in neuronal axon regeneration."

"It is also relevant for diseases that develop from the loss of epithelial integrity such as inflammatory bowel diseases, which increases risk for colorectal cancer. Understanding how tissues sense stress and promote their resilience and repair will provide novel therapeutic strategies."

Credit: 
University of Bristol

University of Alberta researchers developing new 'DNA stitch' to treat muscular dystrophy

image: U of A medical geneticist Toshifumi Yokota is testing a new treatment for Duchenne muscular dystrophy that acts like a stitch to repair a genetic mutation in patients with the debilitating disease.

Image: 
Jordan Carson

A new therapeutic being tested by University of Alberta researchers is showing early promise as a more effective treatment that could help nearly half of patients with Duchenne muscular dystrophy (DMD).

The treatment--a cocktail of DNA-like molecules--results in dramatic regrowth of a protein called dystrophin, which acts as a support beam to keep muscles strong. The protein is virtually absent in those with DMD.

"In muscle, if there is no dystrophin there is no support of muscle membrane, and the muscle cells will become easily damaged or destroyed," said Toshifumi Yokota, a professor of medical genetics at the U of A. "Our DNA-like molecules restore the production of dystrophin so it can support the muscle cell membrane.

"Theoretically, this treatment could treat as many as 47 per cent of patients with Duchenne muscular dystrophy."

The condition is a genetic disorder characterized by progressive muscle degeneration and weakness. The incurable condition causes difficulty in walking and breathing as the muscles and heart are gradually damaged. Many will die from the condition in their 20s or 30s.

In a study published in Molecular Therapy, Yokota and his team describe using a mix of DNA-like molecules--called antisense oligonucleotides--to act like a stitch that could repair a specific gene mutation in patients with DMD. The molecules were tested in both human muscle cells of patients with DMD and in mice containing the human DMD gene.

The U.S. Food and Drug Administration approved the first drug of this class of "DNA stitch," called eteplirsen, in 2016. However, the drug is only applicable to about 13 per cent of DMD patients.

Yokota's team designed the cocktail of molecules to theoretically expand the applicability of the therapy to just under half of all DMD patients. The experimental treatment may also significantly improve the symptoms experienced by patients.

"Our treatment produces a shorter dystrophin protein than the drug being used now. This shorter protein is associated with extremely mild symptoms in some of the muscular dystrophy patients. Some have almost no symptoms at all," explained Yokota.

The researchers are now working to reduce the number of DNA-like molecules in the cocktail to reduce both cost and regulatory hurdles moving forward. They hope to progress the work to a clinical trial in the future.

Credit: 
University of Alberta Faculty of Medicine & Dentistry

New research analyzes video game player engagement

Key takeaways from a new study in the INFORMS journal Information Systems Research:

New research analyzes video game player engagement and the response to different motivations to increase game play.

The method researchers tested increases game play by 4-8% and translates into better performance for gaming companies.

The biggest driver of video game growth is online game modes.

CATONSVILLE, MD, September 25, 2019 - In the video game industry, the ability for gaming companies to track and respond to gamers' post-purchase play opens up new opportunities to enhance gamer engagement and retention and increase video game revenue.

New research in the INFORMS journal Information Systems Research looks at gamer behavior and how to match their engagement level with different games to ensure they play more often and for longer periods of time.

The study, "'Level Up': Leveraging Skill and Engagement to Maximize Player Game-Play in Online Video Games," conducted by Yan Huang of Carnegie Mellon University, and Stefanus Jasin and Puneet Manchanda, both of the University of Michigan, works to better match players with games to achieve these goals.

For games with in-game purchase features, the more rounds a player plays, the more the player is likely to spend on in-game purchases, leading to better revenue generation; for games with in-game display ads, the more rounds a player plays the more ads the player sees, which leads to higher click through numbers and higher revenue. For games without either, revenue is boosted because engaged players are likely to upgrade to the premium or next version of the game.

The researchers looked at data from 1,309 gamers' playing history over 29 months from a major international video gaming company.

"We find that high, medium, and low engagement state gamers respond differently and have different motivators such as feelings of achievement or the need for a challenge," said Huang, a professor in the Tepper School of Business at Carnegie Mellon University.

"Using our model, we learn the gamers' current engagement state on the fly and exploit that to match the gamer to a round to maximize gameplay. By doing this we see an increase in game play between 4 and 8%," said Jasin, a professor in the Ross School of Business at the University of Michigan.

The researchers found that matching game play to a player's engagement state and state-dependent motivation not only increases use, but also leads to increased revenue for gaming companies.

Challenges in the game positively affect gamers with low or medium engagement levels, but negatively affect players with high engagement. Players' curiosity decreases over time, so they tend to drift toward the low engagement state; this happens faster for less engaged players.

"Our findings suggest varying game difficulty based on players' engagement level. For multiplayer video games match players to game rounds with stronger or weaker players; single player games can track players' game play activity to infer players' engagement state and adjust the level of difficulty," said Manchanda, a professor in the Ross School of Business at the University of Michigan. "We also suggest introducing surprises, such as new maps or game modes, to offset the decrease in player curiosity as players become familiar with the game."

Credit: 
Institute for Operations Research and the Management Sciences

Nanotechnology improves chemotherapy delivery

image: Michigan State University's Bryan Smith has invented a new way to monitor chemotherapy concentrations, which is more effective in keeping patients' treatments within the crucial therapeutic window.

Image: 
Photo by Derrick Turner

EAST LANSING, Mich. - Michigan State University scientists have invented a new way to monitor chemotherapy concentrations, which is more effective in keeping patients' treatments within the crucial therapeutic window.

With new advances in medicine happening daily, there's still plenty of guesswork when it comes to administering chemotherapy to cancer patients. Too high a dose can result in killing healthy tissue and cells, triggering more side effects or even death; too low a dose may stun, rather than kill, cancer cells, allowing them to come back, in many cases, much stronger and deadlier.

Bryan Smith, associate professor of biomedical engineering, created a process based around magnetic particle imaging (MPI) that employs superparamagnetic nanoparticles as the contrast agent and the sole signal source to monitor drug release in the body - at the site of the tumor.

"It's noninvasive and could give doctors an immediate quantitative visualization of how the drug is being distributed anywhere in the body," Smith said. "With MPI, doctors in the future could see how much drug is going directly to the tumor and then adjust amounts given on the fly; conversely, if toxicity is a concern, it can provide a view of the liver, spleen or kidneys as well to minimize side effects. That way, they could precisely ensure each patient remains within the therapeutic window."

Smith's team, which included scientists from Stanford University, used mice models to pair its superparamagnetic nanoparticle system with Doxorubicin, a commonly used chemotherapy drug. The results, published in the current issue of the journal Nano Letters, show that the nanocomposite combination serves as a drug delivery system as well as an MPI tracer.

MPI is a new imaging technology that is faster than traditional magnetic resonance imaging (MRI) and has near-infinite contrast. When combined with the nanocomposite, it can illuminate drug delivery rates within tumors hidden deep within the body.

As the nanocomposite degrades, it begins to release Doxorubicin in the tumor. Simultaneously, the iron oxide nanocluster begins to disassemble, which triggers the MPI signal changes. It will allow doctors to see more precisely how much medicine is reaching the tumor at any depth, Smith said.

"We showed that the MPI signal changes are linearly correlated with the release of Doxorubicin with near 100-percent accuracy," he said. "This key concept enabled our MPI innovation to monitor drug release. Our translational strategy of using a biocompatible polymer-coated iron oxide nanocomposite will be promising in future clinical use."

Smith has filed a provisional patent for his innovative process. In addition, the individual components of the nanocomposite Smith's team created have already earned FDA approval for use in human medicine. This should help speed FDA approval for the new monitoring method.

As the process moves toward clinical trials, which could potentially begin within seven years, Smith's team will begin testing multicolor MPI to further enhance the process's quantitative capabilities, as well as drugs other than Doxorubicin, he said.

Credit: 
Michigan State University

NASA visualization shows a black hole's warped world

image: This image highlights and explains various aspects of the black hole visualization.

Image: 
NASA's Goddard Space Flight Center/Jeremy Schnittman

This new visualization of a black hole illustrates how its gravity distorts our view, warping its surroundings as if seen in a carnival mirror. The visualization simulates the appearance of a black hole where infalling matter has collected into a thin, hot structure called an accretion disk. The black hole's extreme gravity skews light emitted by different regions of the disk, producing the misshapen appearance.

Bright knots constantly form and dissipate in the disk as magnetic fields wind and twist through the churning gas. Nearest the black hole, the gas orbits at close to the speed of light, while the outer portions spin a bit more slowly. This difference stretches and shears the bright knots, producing light and dark lanes in the disk.

Viewed from the side, the disk looks brighter on the left than it does on the right. Glowing gas on the left side of the disk moves toward us so fast that the effects of Einstein's relativity give it a boost in brightness; the opposite happens on the right side, where gas moving away from us becomes slightly dimmer. This asymmetry disappears when we see the disk exactly face on because, from that perspective, none of the material is moving along our line of sight.
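That brightness asymmetry comes from relativistic Doppler beaming. The back-of-the-envelope sketch below is not the simulation's ray-tracing code; it simply shows how strongly an approaching versus a receding parcel of gas is boosted, assuming specific intensity scales as the cube of the Doppler factor and an illustrative gas speed of half the speed of light.

```python
# Back-of-the-envelope sketch of relativistic (Doppler) beaming, the effect
# that brightens the approaching side of the disk and dims the receding side.
# Assumption: specific intensity transforms as I_obs = delta**3 * I_emit.
import math

def doppler_factor(beta: float, theta_deg: float) -> float:
    """delta = 1 / (gamma * (1 - beta*cos(theta))) for speed beta (units of c)."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return 1.0 / (gamma * (1.0 - beta * math.cos(math.radians(theta_deg))))

beta = 0.5  # illustrative speed, not taken from the simulation
approaching = doppler_factor(beta, 0.0) ** 3    # gas moving toward us
receding = doppler_factor(beta, 180.0) ** 3     # gas moving away from us
print(f"brightness boost, approaching side: x{approaching:.1f}")
print(f"brightness factor, receding side:   x{receding:.2f}")
```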

Closest to the black hole, the gravitational light-bending becomes so excessive that we can see the underside of the disk as a bright ring of light seemingly outlining the black hole. This so-called "photon ring" is composed of multiple rings, which grow progressively fainter and thinner, from light that has circled the black hole two, three, or even more times before escaping to reach our eyes. Because the black hole modeled in this visualization is spherical, the photon ring looks nearly circular and identical from any viewing angle. Inside the photon ring is the black hole's shadow, an area roughly twice the size of the event horizon -- its point of no return.

"Simulations and movies like these really help us visualize what Einstein meant when he said that gravity warps the fabric of space and time," explains Jeremy Schnittman, who generated these gorgeous images using custom software at NASA's Goddard Space Flight Center in Greenbelt, Maryland. "Until very recently, these visualizations were limited to our imagination and computer programs. I never thought that it would be possible to see a real black hole." Yet on April 10, the Event Horizon Telescope team released the first-ever image of a black hole's shadow using radio observations of the heart of the galaxy M87.

Credit: 
NASA/Goddard Space Flight Center

Humankind did not live with a high-carbon dioxide atmosphere until 1965

COLLEGE STATION, September 25, 2019 — Humans have never before lived with the high carbon-dioxide atmospheric conditions that have become the norm on Earth in the last 60 years, according to a new research study.

Titled "Low CO2 levels of the entire Pleistocene Epoch” and published in Nature Communications today, the study shows that for the entire 2.5 million years of the Pleistocene era, carbon dioxide concentrations averaged 230 parts per million.

Today’s levels are more than 410 parts per million. In 1965, Earth’s carbon dioxide atmospheric concentrations exceeded 320 parts per million, a high-point never reached in the past 2.5 million years, this study shows.

“According to this research, from the first Homo erectus, which is currently dated to 2.1–1.8 million years ago, until 1965, we have lived in a low-carbon-dioxide environment — concentrations were less than 320 parts per million,” said Dr. Yige Zhang, a co-author of the research study and an assistant professor in the Department of Oceanography, in the College of Geosciences, at Texas A&M University.

“So, this current high-carbon-dioxide environment is not only an experiment for the climate and the environment — it’s also an experiment for us, for ourselves,” he said.

Carbon dioxide is a greenhouse gas that contributes to the warming of Earth’s atmosphere and is considered a driver of global climate change, Zhang explained.

“It’s important to study atmospheric CO2 (carbon dioxide) concentrations in the geological past, because we know that there are already climate consequences and are going to be more climate consequences, and one way to learn about those consequences is to look into Earth’s history,” he said. “Then we can see what kind of CO2 levels did we have, what did the climate look like, and what was the relationship between them.”

Mr. Jiawei Da, Dr. Xianqiang Meng, and Dr. Junfeng Ji, all of Nanjing University in China; and Dr. Gen Li, of the California Institute of Technology, co-authored the research.

Ancient Soil Reveals Millions Of Years Of Data

The scientists analyzed soil carbonates from the Loess Plateau in central China, to quantify ancient atmospheric carbon dioxide levels, as far back as 2.5 million years ago. Climate scientists often use ice cores as the ‘gold standard’ in physical climate records, Zhang said, but ice cores only cover the past 800,000 years.

Analyzing pedogenic carbonates found in the ancient soil, or paleosols, from the Loess Plateau, the scientists reconstructed the Earth's carbon dioxide levels.

“The Loess Plateau is an incredible place to look at Eolian, or wind, accumulation of dust and soil,” Zhang said. “The earliest identified dust on that plateau is from 22 million years ago. So, it has extremely long records. The layers of loess and paleosol there contain soil carbonates that record atmospheric carbon dioxide, if we have very careful eyes to look at them.”

“Specifically, carbonates formed during soil formation generally reach carbon isotopic equilibrium with ambient soil CO2, which is a mixture of atmospheric CO2 and CO2 produced by soil respiration. Through the application of a two-component mixing model, we can reconstruct paleo-CO2 levels using carbonates in fossil soils,” said Nanjing University’s Jiawei Da.
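For readers unfamiliar with the approach, one widely used form of this two-component mixing model is the Cerling-type paleosol CO2 barometer; the sketch below implements that general form with placeholder inputs. The study's own parameter choices and corrections may differ.

```python
# Sketch of a Cerling-type two-endmember mixing calculation commonly applied
# to paleosol carbonates. Input values below are placeholders for illustration,
# not data from the paper.

def paleosol_co2(d13c_soil, d13c_resp, d13c_atm, s_z):
    """
    Atmospheric CO2 (ppm) from a two-component mixing model in which soil CO2
    is a mixture of atmospheric CO2 and soil-respired CO2.

    d13c_soil : d13C of soil CO2 inferred from pedogenic carbonate (per mil)
    d13c_resp : d13C of soil-respired CO2 (per mil)
    d13c_atm  : d13C of atmospheric CO2 (per mil)
    s_z       : soil-respired CO2 concentration at depth z (ppm)
    """
    return s_z * (d13c_soil - 1.0044 * d13c_resp - 4.4) / (d13c_atm - d13c_soil)

# Placeholder example values (illustrative only); yields roughly 260 ppm
print(round(paleosol_co2(d13c_soil=-17.0, d13c_resp=-24.0, d13c_atm=-6.5, s_z=1000.0)))
```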

Reconstructing A Carbon Dioxide History, For Clues About Our Future

Using those materials and techniques, the researchers constructed a carbon dioxide history of the Pleistocene.

“Our reconstructions show that for the entire Pleistocene period, carbon dioxide averaged around 230 parts per million, which is the same as the last 800,000 years’ values,” Zhang said.

“Our paleosol-based CO2 estimates are in line with snapshots of early-Pleistocene CO2 retrieved from Antarctic old, blue ice, suggesting that the Earth system has been operating under low CO2 levels throughout the Pleistocene,” said Dr. Junfeng Ji of Nanjing University.

“We evolved in a low-carbon-dioxide environment,” Zhang said, and how humans will evolve and be affected by today’s carbon-dioxide levels is yet to be seen.

Editor's Note: This release has been updated since it was originally published by request of the submitting institution. The number, 250, has been updated to 230 parts per million.

Credit: 
Texas A&M University

Many patients not receiving first-line treatment for sinus, throat, ear infections

Washington, DC - September 25, 2019 - Investigators have now shown that only half of patients presenting with sinus, throat, or ear infections at different treatment centers received the recommended first-line antibiotics, well below the industry standard of 80 percent. The research is published this week in Antimicrobial Agents and Chemotherapy, a journal of the American Society for Microbiology.

At traditional medical offices, only 50 percent of patients with sinus, throat, or ear infections received first-line treatment. In contrast, retail clinics--walk-in clinics at retail stores, supermarkets and pharmacies--provided first-line treatment to 70 percent of these patients, with emergency departments following at 57 percent and urgent care centers bringing up the rear at 49 percent. The goal for first-line treatments is 80 percent, because roughly 10 percent of patients report allergies to first-line treatment and others have failed it, said principal investigator Katherine E. Fleming-Dutra, MD, Deputy Director, Office of Antibiotic Stewardship, Division of Healthcare Quality Promotion, Centers for Disease Control and Prevention.

"The high percentage receiving first-line treatment at retail clinics is due to the focus on antibiotic stewardship reported by large retail clinic chains," said Dr. Fleming-Dutra. These chains often use treatment protocols, get audited, and receive feedback on how well they are adhering to the protocol.

"We don't have any data on how widespread or not protocols and audit-and-feedback are in urgent cares, EDs, or medical offices. What we know is that implementing treatment protocols and conducting audit-and-feedback on how well providers are adhering to those protocols are effective antibiotic stewardship interventions," said Dr. Fleming-Dutra.

Evaluating treatment by diagnosis, just 46.5 percent of pharyngitis patients received first-line antibiotics, across all settings and all ages. Among sinusitis patients, 45.6 percent received first-line treatment. More than one quarter of sinusitis patients (27.5%) received macrolide antibiotics, despite counter-recommendations from both the American Academy of Pediatrics and the Infectious Disease Society of America, due to high levels of macrolide resistance in Streptococcus pneumoniae.

The counter-recommendations apply even if the patient is allergic to the first-line treatment, amoxicillin or amoxicillin-clavulanic acid. For those patients, the Infectious Diseases Society of America (IDSA) recommends doxycycline or a respiratory fluoroquinolone.

In the case of acute otitis media (ear infection), across all settings, a high 70 percent of patients received first-line treatment. This was a special case, however, because the researchers evaluated antibiotic selection only in children, as acute otitis media is common among them but rare in adults, said Dr. Fleming-Dutra.

Antibiotic selection is superior in children as compared to adults, across all four settings, with 62 percent of children receiving first-line treatment, versus just 41 percent of adults, according to the report.

Several factors probably account for superior prescribing among children, said Dr. Fleming-Dutra. First, since the mid-1990s, numerous public health efforts, from the Centers for Disease Control and Prevention, as well as from the American Academy of Pediatrics and the Pediatric Infectious Diseases Society, have worked to improve antibiotics prescribing among children.

In addition to boosting first-line treatment among children, where appropriate, there has been a reduction in antibiotic use among children. "We think that the efforts on the part of all of these organizations created a culture of using antibiotics more appropriately in the care of children," said Dr. Fleming-Dutra.

Credit: 
American Society for Microbiology

Investigational drug with immunotherapy may provide new therapeutic opportunity for patients previously treated for kidney and lung cancer

image: This is Aung Naing, M.D.

Image: 
MD Anderson Cancer Center

HOUSTON -- Pegilodecakin, a first-in-class drug currently in clinical trials, has shown positive safety results and may offer a potential new treatment avenue for patients with non-small cell lung cancer (NSCLC) and kidney cancer. The study, led by The University of Texas MD Anderson Cancer Center, demonstrated that the drug, in combination with two leading anti-PD-1 monoclonal antibodies, pembrolizumab and nivolumab, achieved measurable responses for these patients.

Findings from a multi-center Phase Ib study were published in the Sept. 25 online issue of The Lancet Oncology.

"Pegilodecakin with anti-PD-1 monoclonal antibodies had a manageable toxicity profile and promising anti-tumor activity," said Aung Naing, M.D., associate professor of Investigational Cancer Therapeutics. "Our study showed this combination demonstrated favorable response in NSCLC and kidney cancer patients who previously had been treated when compared to those treated with anti-PD-1 monoclonal antibodies alone."

The study was designed to assess the safety, tolerability and maximal tolerated dose of pegilodecakin in combination with pembrolizumab or nivolumab, while also investigating biomarkers to identify patients likely to respond to treatment.

The study, which took place from February 2015 to September 2017, followed 111 kidney cancer, NSCLC and melanoma patients with advanced malignant solid tumors. The most common side effects were anemia, fatigue, low blood platelet counts and high triglycerides.

Objective responses were seen in 43% of NSCLC patients, 40% of kidney cancer patients and 10% of melanoma patients. Patients received pegilodecakin with pembrolizumab or nivolumab until disease progression, toxicity necessitating treatment discontinuation, patient withdrawal of consent or study end. Patients continued to receive combination therapy or pegilodecakin alone after disease progression if the investigator determined that the patient would continue to benefit.

Pegilodecakin is comprised of recombinant interleukin-10 (IL-10) which is linked to a molecule called polyethylene glycol (PEG). IL-10 is a protein that regulates activity of various immune cells, and high concentrations of IL-10 activate an immune response against cancer cells. The attachment of PEG to IL-10 increases its size, which prevents or delays its breakdown to prolong the time it circulates in the body.

The drug works by stimulating the survival, proliferation and "killing" potential of CD8+ T cells, known for their ability to recognize and destroy cancer cells. Increasing the amount of CD8+ T cells within the tumor is thought to improve prognosis and survival of the patient. The immune stimulatory effect of pegilodecakin complements the action of anti-PD-1 monoclonal antibodies that blocks the immune suppressive effect on T cells.

"The activity of pegilodecakin in combination with anti-PD-1 monoclonal antibodies introduces a new class of drugs to the treatment of advanced solid tumors," said Naing. "Future randomized trials hopefully will determine the tolerability and clinical benefits of pegilodecakin as a single agent and in combinations in a range of cancers."

Credit: 
University of Texas M. D. Anderson Cancer Center

Prediction system significantly increases palliative care consults

Palliative Connect, a trigger system developed at Penn Medicine and powered by predictive analytics, was found to be effective at increasing palliative care consultations for seriously ill patients, according to a new study from researchers in the Perelman School of Medicine at the University of Pennsylvania. After the system was implemented, palliative care consultation increased by 74 percent. The study was published this month in the Journal of General Internal Medicine.

"There's widespread recognition of the need to improve the quality of palliative care for seriously ill patients, and palliative care consultation has been associated with improved outcomes for these patients," said the study's lead author, Katherine Courtright, MD, an assistant professor of Pulmonary, Allergy and Critical Care, and Hospice and Palliative Medicine.

According to the Center to Advance Palliative Care, palliative care is specialized medical care focused on providing relief from the symptoms and stress of a serious illness to improve the quality of life for patients and families. Palliative care is appropriate for patients of any age and during any stage of illness, which distinguishes it from hospice care, which takes place near the end of life.

Palliative Connect draws on clinical data from the electronic health record and uses machine learning to develop a score based on 30 different factors of a person's likely prognosis over six months--the timeframe doctors are asked to use when making a decision on whether palliative care consultation would be beneficial.
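Palliative Connect's actual model was trained by the study team on EHR data; purely as an illustration of the trigger idea, the sketch below scores a hypothetical patient using a handful of invented predictors and flags a consult above a made-up risk threshold. None of the feature names, weights, or cutoffs come from the study.

```python
# Illustrative sketch only -- NOT the Palliative Connect model. The real system
# uses machine learning over 30 EHR-derived factors; everything here is invented.
import math

EXAMPLE_WEIGHTS = {            # hypothetical subset of predictors
    "age_over_80": 0.9,
    "metastatic_cancer": 1.4,
    "icu_admission": 0.8,
    "albumin_low": 0.6,
    "prior_admissions_6mo": 0.5,
}
INTERCEPT = -3.0
TRIGGER_THRESHOLD = 0.30       # consult suggested above this 6-month risk (hypothetical)

def six_month_risk(patient_features: dict) -> float:
    """Logistic-style score: probability-like risk over a six-month horizon."""
    z = INTERCEPT + sum(EXAMPLE_WEIGHTS[f] for f, present in patient_features.items() if present)
    return 1.0 / (1.0 + math.exp(-z))

def should_trigger_consult(patient_features: dict) -> bool:
    return six_month_risk(patient_features) >= TRIGGER_THRESHOLD

patient = {"age_over_80": True, "metastatic_cancer": True, "icu_admission": False,
           "albumin_low": True, "prior_admissions_6mo": True}
print(f"risk={six_month_risk(patient):.2f}, trigger={should_trigger_consult(patient)}")
```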

The study team, which was also led by study senior author Nina O'Connor, MD, the chief of Palliative Care at Penn Medicine, evaluated Palliative Connect over an eight-week period between December 2017 and February 2018. In that span, 134 patients who'd been admitted to a Philadelphia hospital were compared to a similar population of 138 patients selected from a time before Palliative Connect was applied.

Among the patients admitted while Palliative Connect was in use, consultations increased significantly, climbing from 22 to 85. Additionally, patients were seen by palliative care earlier in their hospital stay - an average of a day and a half sooner.

However, primary team physicians were able to decline triggered consultations, and the researchers found that about 43 percent did so. The authors note that there were several reasons provided by the primary teams for declining a triggered consult. These included the primary team feeling that they are already meeting the patient's needs, or that the patient doesn't have any palliative care needs at the time. These explanations, Courtright said, highlight the fact that prognosis isn't a perfect measure of palliative care needs for every patient -- it's just one aspect of serious illness.

The study also showed that none of the patients or their caregivers declined a triggered consultation after it was accepted by the primary team physician.

"This approach helps us get a foot in the door and really explain what palliative care is to patients and their families," Courtright explained. "Sometimes, there is this sense from primary teams that patients or families are hesitant or don't want to talk about palliative care, but, when a palliative care clinician walks into the room and explains what they do, often people really are glad to see us there."

The researchers believe this is the first time a scalable, data-driven prediction system of this kind has been tested in a real clinical setting for palliative care. Other palliative care triggers have been used, but few have been developed from empirical evidence, and even fewer have been rigorously tested.

Moving forward, the researchers plan to continue fine-tuning the Palliative Connect prediction model. Additionally, another study is underway to gauge the perspective of physicians, patients, and palliative care clinicians on consult triggers to better inform future use of such interventions.

"Our goal is for every seriously ill patient to have a conversation with their clinician about their priorities and wishes, and to document those priorities in the medical record," O'Connor said. "We think that triggers are allowing us to do that, so we'll continue to evaluate and refine in order to help more patients."

Credit: 
University of Pennsylvania School of Medicine

FSU research: Fear not a factor in gun ownership

image: This is co-author and FSU doctoral student, Benjamin Dowd-Arrow.

Image: 
FSU Photo/Bruce Palmer

Are gun owners more or less afraid than people who do not own guns? A new study from researchers at Florida State University and the University of Arizona hopes to add some empirical data to the conversation after finding that gun owners tend to report less fear than non-gun owners.

The study, led by sociology doctoral student Benjamin Dowd-Arrow, used the Chapman University Survey of American Fears to examine both the types and the amount of fear that gun owners had in comparison to non-gun owners.

"There's a lot of popular rhetoric in the media and among politicians as to why people own guns," Dowd-Arrow said. "The biggest claim is that they're cowards. So, we wanted to see if owning guns was truly a symptom of fear."

Dowd-Arrow and his team examined fear pertaining to specific phobias and victimization. The results, published in SSM - Population Health, showed that the popular rhetoric surrounding gun ownership was not true.

The researchers first examined gun ownership as a result of fears. For the most part, the study showed that fears were unrelated to the probability of owning a gun.

There were some exceptions. Adults who reported a fear of animals and adults who reported a fear of being mugged were less likely to own a gun. Adults who reported a fear of being victimized by a random/mass shooting were more likely to own a gun.

The researchers then examined the fears of people who owned guns. They found that people who own guns tend to report fewer phobias and victimization fears than people who do not own guns. This general pattern was observed across multiple types of fear, including fear of animals, heights and being mugged.

"There's little evidence to suggest that gun ownership is an effect of fear," Dowd-Arrow said. "However, gun ownership may be associated with less fear because firearms help their owners to feel safe, secure and protected in a world they perceive to be uncertain and potentially dangerous."

Dowd-Arrow said the team's research offers potential policy implications for the gun debate and gun-safety legislation moving forward.

"By eliminating stereotypes and false information around gun ownership, we can possibly create better or more useful policy," Dowd-Arrow said.

Researchers stressed that even though gun owners were found to be less afraid, they are not endorsing gun ownership as safe.

"Research has already indicated that owning a firearm is linked to increased odds of suicide, accidental injury and death and violence against women," Dowd-Arrow said. "More research is needed to really understand how fear is linked to these health outcomes."

Researchers suggested future studies could examine the fear factor in carrying a concealed weapon versus keeping a gun stowed in a car or house. Additional topics to investigate include regional differences in gun ownership and the effects of gun ownership on mental health and sleep disturbance.

"Firearm culture in America is a fascinating topic that is grossly understudied," Dowd-Arrow said. "More research can help us understand what motivates gun ownership. If it isn't fear, then what is it? Or is it a specific fear, such as being a victim in a random shooting?"

Credit: 
Florida State University

School spending cuts triggered by Great Recession linked to sizable learning losses for students in hardest-hit areas

WASHINGTON, D.C., September 25, 2019--Substantial school spending cuts triggered by the Great Recession were associated with sizable losses in academic achievement for students living in counties most affected by the economic downturn, according to a new study published today in AERA Open, a peer-reviewed journal of the American Educational Research Association.

The estimated declines in student math and English language arts achievement in school districts with the most severe school spending cuts represent a loss of approximately 25 percent of the expected annual gains in achievement for students in grades 3 through 8, compared to their peers in the districts least affected by the Great Recession.

According to the study, conducted by scholars Kenneth Shores of Pennsylvania State University and Matthew Philip Steinberg of George Mason University, the steepest declines in expected math and English language arts achievement gains were in school districts serving the poorest students--districts where an average of 72 percent of students received free or reduced-price lunch--and in school districts serving the most African American students--39 percent African American, on average.

As a result, the authors note, the Great Recession was associated not only with declines in average academic achievement among counties most adversely affected by the Great Recession but also with increases in achievement gaps between poor and wealthy school districts and between school districts with many and few African American students.

"Our results reinforce what other recent studies have demonstrated: that there is a link between educational spending and student achievement," said Shores, an assistant professor of human development and family studies at Pennsylvania State University. "What is different about this study is that we show that divestments in educational spending matter nearly as much for student achievement as do investments."

For their study, the authors used a dataset consisting of test scores for 2,548 counties across the continental United States for the 2008-09 through 2014-15 school years, combining student achievement information from the Stanford Education Data Archive, demographic information from the U.S. Department of Education, and county-level economic data from multiple sources. The study sample includes test scores for 86 percent of the population of U.S. students who are annually tested in grades 3 through 8.

Although the authors found that the recession resulted in a decline in per pupil revenues of nearly $900 on average for the entire U.S., the consequences for school spending varied substantially among counties. Comparing counties with employment losses in the top and bottom quartiles, school spending declined at a faster rate in the hardest hit areas--by about $600 more per pupil per year--for the first two years of the recession (2007-08 to 2009-10).

In contrast, in the five years leading up to the start of the Great Recession in December 2007--that is, 2002-03 through 2007-08--changes in school spending differed little across the counties that were most and least affected by the downturn. After the recession hit, school spending continued to decline until the 2012-13 school year, but after the first two years, it declined at similar rates across the two groups of districts.
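To make the quartile comparison concrete, the sketch below reproduces the kind of arithmetic described above with invented per-pupil spending figures; the dollar amounts and county groupings are placeholders, not the study's data.

```python
# Illustrative sketch of the quartile comparison described above, using invented
# per-pupil spending figures. It contrasts average yearly spending changes in
# counties with the largest and smallest recession-era employment losses.

hardest_hit = {"2007-08": 11000, "2009-10": 9800}    # top-quartile employment losses (made up)
least_hit   = {"2007-08": 11000, "2009-10": 11000}   # bottom-quartile employment losses (made up)

def yearly_change(spending: dict, start: str, end: str, years: int) -> float:
    """Average change in per-pupil spending per year between two school years."""
    return (spending[end] - spending[start]) / years

diff = yearly_change(hardest_hit, "2007-08", "2009-10", 2) - \
       yearly_change(least_hit, "2007-08", "2009-10", 2)
print(f"Differential decline: ${abs(diff):.0f} more per pupil per year in hardest-hit counties")
```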

The resulting achievement gap between students in counties most and least affected by the recession persisted for more than three years following the 2009-10 school year.

"Our findings suggest that the first two years of differential declines in school spending were enough to put those hardest hit students at an academic disadvantage, even after spending levels began to increase," Shores said.

"The Great Recession's effects varied significantly among U.S. counties; yet the federal response, in the form of the American Recovery and Reinvestment Act of 2009, neglected this variation," said Steinberg, an associate professor of education policy at George Mason University. "Our findings suggest that greater fiscal support should be targeted to schools that not only serve the most vulnerable student populations but that also are located in communities that are the most vulnerable to the adverse consequences of an economic recession."

The study also found that achievement decreased more for older students than for younger students, a finding that surprised the authors. Prior evidence had found that divestments in resources for younger children tended to be more consequential than equivalent divestments for older children.

"While our data do not speak to this, one potential explanation is that teacher layoffs were concentrated in older grades," Steinberg said. "If true, parents with older children would rightfully be concerned that schools' responses to spending cuts were affecting those students disproportionately. Improving the understanding of how districts redistribute resources differently across schools and grades during periods of districtwide spending declines in the wake of recessionary events is an important line of future research."

Credit: 
American Educational Research Association