
Losing money causes plastic changes in the brain

Researchers at the HSE Institute for Cognitive Neuroscience have shown experimentally that economic activity can actively change the brain. Signals that predict regular financial losses evoke plastic changes in the cortex. Therefore, these signals are processed by the brain more meticulously, which helps to identify such situations more accurately. The article was published in Scientific Reports.

The sight of an envelope from the tax authority, a falling currency rate, or the sad face of your chief accountant can mean impending financial troubles. How does the brain learn to recognize situations like this? Do these situations cause changes in brain function? These questions were studied by cognitive neuroscientists at HSE University using a popular economic game - the monetary incentive delay task (MID Task).

The MID Task requires a person to respond quickly to a cue that signals an opportunity to receive a reward or avoid a loss. It also allows researchers to divide the brain's reward-processing mechanisms into separate stages: reward expectation and learning.

'We hypothesized that, like the plastic changes in the brain during the learning of a second language or playing a musical instrument, similar neuroplastic changes occur for certain signals that are associated with important economic outcomes. For example, the sound of a slot machine can for a long time be associated with a big win or loss while visiting a casino, which causes a particularly strong reaction in our brain in the future,' explains Anna Shestakova, director of HSE University's Centre for Cognition & Decision Making.

The subjects (29 people) took part in an economic game in which sound signals predicted losses of various sizes: the participants could lose between one and fifty-one monetary units in each round of the game. Participants had to quickly and accurately respond to audio signals to avoid monetary losses.
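The loss-avoidance rounds described above can be sketched in a few lines of simulation. The cue-to-loss mapping, response deadline, and reaction-time model below are all illustrative assumptions, not the study's actual parameters:

```python
import random

# Minimal sketch of loss-avoidance rounds in the MID task, with a simulated
# participant instead of a real one. All parameters are assumed for illustration.
CUES = {"low_tone": 1, "mid_tone": 26, "high_tone": 51}  # potential loss in monetary units
DEADLINE = 0.4  # seconds allowed to respond (assumed value)

def run_trial(cue, rng):
    """Simulate one round: the loss is avoided only if the response beats the deadline."""
    at_stake = CUES[cue]
    reaction_time = rng.gauss(0.38, 0.08)  # simulated reaction time in seconds
    return 0 if reaction_time <= DEADLINE else -at_stake

rng = random.Random(1)
outcomes = [run_trial(rng.choice(sorted(CUES)), rng) for _ in range(100)]
print("total loss over 100 rounds:", sum(outcomes))
```

In the real experiment the sound cues, not the loss amounts themselves, are what the auditory cortex learns to discriminate.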

The study showed that participating in such a game leads to plastic changes in the auditory cortex of the brain, which begins to distinguish more accurately between sounds associated with large financial losses. Moreover, the scientists demonstrated a link between this plastic change and the 'learning signal' generated by the brain during performance of the MID Task. Subjects with a more pronounced neural 'learning signal' showed stronger plastic changes in the nervous system.

The results of the experiment suggest that everyday economic experience can lead to changes in the brain that alter how external signals are perceived. Interestingly, the brain learns to identify important economic signals automatically. Moreover, the scientists have shown precisely how this rewiring of the brain occurs and have demonstrated the role of individual differences in the brain's learning systems, which rely on the neurotransmitter dopamine.

'This is the first experimental evidence to show that economic activity can actively change the brain,' said Aleksey Gorin, a graduate student at HSE University and one of the authors of the study. 'Signals leading to financial losses evoke rather fast neuroplastic changes. Therefore, they are identified by the brain automatically, and do not require voluntary attention,' he said.

Credit: 
National Research University Higher School of Economics

Tri-lab initiative leads innovation in novel hybrid energy systems

Future novel hybrid energy systems could lead to paradigm shifts in clean energy production, according to a paper published last week in Joule.

Researchers from the U.S. Department of Energy's (DOE's) three applied energy laboratories--Idaho National Laboratory (INL), the National Renewable Energy Laboratory (NREL), and the National Energy Technology Laboratory (NETL)--co-authored the paper describing such integrated energy systems.

Their effort outlines novel concepts to simultaneously leverage diverse energy generators--including renewable, nuclear, and fossil with carbon capture--to provide power, heat, mobility, and other energy services. The historic collaboration between the nation's nuclear energy, renewable energy, and fossil energy laboratories aims to address a grand national challenge from an objective, holistic perspective.

"Working together, researchers at the nation's applied energy laboratories have identified critical synergies among different power generation sources, which will be vital to transforming our energy economy," said Martin Keller, director of NREL. "We look forward to advancing these creative solutions, collaboratively."

The new article presents an objective new framework for engineering-based modeling and analysis to support complex optimization of energy generation, transmission, services, processes and products, and market interactions.

In short, it outlines a viable path forward for hybrid energy systems. Such systems are capable of leveraging multiple energy sources to maximize the value of each. They do this by creating higher-value products, delivering lower-emission energy to industry, and better coordinating demand with energy production.

"The design of integrated energy systems is a significant challenge--and opportunity," INL Director Mark Peters said. "The collaboration by the three applied national laboratories, and the setup and operation of real-world experiments at their testing facilities, represents a comprehensive and focused effort that is transparent and objective. This work will help realize future advanced energy systems that should help our nation expand affordable energy options and significantly contribute to wide-scale decarbonization efforts."

The paper describes one example of the multi-input, multi-output nature of these systems: a hypothetical, tightly coupled industrial energy park that uses heat and electricity from highly flexible advanced nuclear reactors, small-scale fossil generators, and renewable energy technologies to produce electricity and hydrogen from electrolysis.

"In this scenario, depending on market pricing, electricity and or heat could be sold into the grid, used on-site, or stored for later distribution and use," said David C. Miller, NETL's senior fellow for Strategic Systems Analysis & Engineering and co-author of the article. "Furthermore, the output streams could also be used to produce hydrogen or other valuable chemicals and products."
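Miller's pricing-driven scenario can be sketched as a simple decision rule: each hour, route a unit of generation to the grid or to hydrogen production, whichever pays more. The hydrogen price and electrolyzer efficiency below are illustrative assumptions, not figures from the paper:

```python
# Toy hourly dispatch rule for the hypothetical energy park described above.
# Both constants are assumed values for illustration only.
H2_PRICE_PER_KG = 2.0   # $/kg of hydrogen (assumed)
KWH_PER_KG_H2 = 55.0    # electrolyzer energy use, kWh per kg H2 (assumed)

def dispatch(grid_price_per_kwh):
    """Return the higher-value destination for one kWh of generation."""
    h2_value_per_kwh = H2_PRICE_PER_KG / KWH_PER_KG_H2  # ~ $0.036/kWh
    return "grid" if grid_price_per_kwh >= h2_value_per_kwh else "hydrogen"

hourly_prices = [0.02, 0.05, 0.01, 0.12]  # $/kWh market prices
print([dispatch(p) for p in hourly_prices])  # ['hydrogen', 'grid', 'hydrogen', 'grid']
```

A real integrated energy system would co-optimize heat, storage, and ramping constraints as well, which is precisely the modeling and analysis challenge the paper lays out.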

This flexibility could provide an abundant supply of clean energy for a larger net-zero-emission energy system. Such systems could support sectors of the economy that are more difficult to decarbonize, such as industry and transportation.

"Considering complementary attributes among various energy technologies opens up new opportunities for asset use optimization that meet multiple energy services and maximize economic value," said Douglas Arent, NREL's executive director for Strategic Public-Private Partnerships and the study's lead author.

Groundwork for the article began in 2018, when NETL, NREL, and INL hosted the first tri-lab workshop in response to DOE Deputy Secretary Mark Menezes' call for more coordinated work across the DOE applied energy laboratories. Building on knowledge gained during that collaboration, a focused workshop was held in April 2019 on the priority topic Modeling & Analysis of Current & Future Energy Systems. A third tri-lab workshop, held July 31 to Aug. 1, 2019, focused on addressing the science and technology challenges associated with the design, development, and deployment of new and advanced materials and components that will enable integrated hybrid energy systems.

"The National Energy Technology Laboratory is proud to partner with INL and NREL in this foundational work," NETL Director Brian J. Anderson said. "The complementary expertise across the three labs is bringing revolutionary ideas to the table on how to design and optimize integrated energy systems of the future."

As illustrated by the NETL-NREL-INL research to date, the design of hybrid energy systems will require input from experts across the spectrum of energy research. To that end, the body of work by the three applied national laboratories, including the tri-lab workshops and the recent Joule article, represent a significant step forward toward realizing the advanced energy systems of the future.

"The national laboratories offer a diversity of expertise that will allow us to achieve effective, cross-sector collaboration that is necessary to solve the true energy and environment grand challenges of our time," said Shannon Bragg-Sitton, INL lead for Integrated Energy Systems and co-author of the article.

Credit: 
DOE/National Renewable Energy Laboratory

Drug may boost vaccine responses in older adults

A drug that boosts the removal of cellular debris in immune cells may increase the protective effects of vaccines in older adults, a study published today in eLife shows.

The results may lead to new approaches to protect older individuals from viruses such as the one causing the current COVID-19 pandemic and influenza.

"Older adults are at high risk of being severely affected by infectious diseases, but unfortunately most vaccines in this age group are less efficient than in younger adults," explains lead author Ghada Alsaleh, a postdoctoral researcher at the Kennedy Institute of Rheumatology, University of Oxford, UK.

Previously, Alsaleh and colleagues showed that in older mice immune cells may become less efficient at removing cellular debris, a process called autophagy, and this leads to a poorer immune response in the animals. In the current study, they looked at samples from young and older people participating in clinical trials for vaccines against the respiratory syncytial virus and the hepatitis C virus to see if the same event happens in human immune cells called T cells. They found that autophagy increases in T cells from younger people after receiving vaccines, but this response is blunted in older people.

When they examined T cells from the older individuals in the laboratory, the team found that these cells have less of a natural compound called spermidine. Spermidine ramps up autophagy and boosts T-cell function. Supplementing these older immune cells with spermidine in the laboratory restored autophagy to the same levels seen in T cells from younger people. "Our work suggests that boosting autophagy during vaccination may help make vaccines more effective for older people," Alsaleh says.

A small clinical trial recently tested whether giving spermidine to older adults would improve their cognitive function. As the results were positive, and spermidine did not appear to have any harmful effects, this provides some evidence that it would be safe to test whether spermidine might also be helpful for boosting the immune response of older people to vaccines.

"Our findings will inform vaccine trials in which autophagy-boosting agents, such as spermidine, are given in a controlled environment to older participants," concludes senior author Anna Katharina Simon, Professor of Immunology at the University of Oxford. "It will be interesting to see whether these agents can enhance vaccination efficiency and help protect older people from viral infections."

Credit: 
eLife

Type of sugar used to sweeten sheep milk kefir may improve consumer acceptance

image: Sensory acceptance (appearance, aroma, flavor, texture, and overall liking) of experimental kefir fermented milk formulations. Values are expressed as mean ± SD (n = 100 consumers). a-c The same lowercase letters indicate lack of statistical difference (P > 0.05) for the same sensory attribute. SUC = sucrose, DEM = raw demerara sugar, BSG = brown sugar, FRU = fructose, COC = coconut sugar, HON = honey.

Image: 
Journal of Dairy Science

Philadelphia, December 15, 2020 - The study of human emotions can be used to gauge the sensory acceptance of dairy products. A possible route to increase worldwide consumption of sheep milk kefir may be to improve its sensory acceptance, which can be a determining factor for its inclusion in daily diets. In an article appearing in the Journal of Dairy Science, scientists studied the effects of kefir sweetened with five different sugars on sensory acceptance and emotional profile in regular consumers of fermented dairy products.

The authors of this study, from the Federal Institute of Rio de Janeiro, Fluminense Federal University, Federal Institute of Paraná, and Natural Resources Institute Finland, assessed the addition of demerara sugar, brown sugar, fructose, coconut sugar, and honey to sheep milk kefir. One hundred consumers rated the appearance, aroma, taste, texture, and overall impression, and expressed whether they were satisfied, active, loving, calm, comfortable, energetic, happy, healthy, refreshing, disgusted, worried, or upset.

Sheep produce 10.6 million tons of milk per year, or 1.3 percent of the world's milk production. "The results of the present study are relevant for the sheep milk dairy industry, as they indicate that emotional perceptions and sensory acceptance of kefir sweetened with different agents are directly correlated," said lead author Adriano G. Cruz, PhD, Food Department, Federal Institute of Rio de Janeiro, Rio de Janeiro, Brazil. "The evaluation of emotions evoked by products can be an important tool to obtain additional information that can be used for product optimization and market strategies by the sheep milk industry."

The use of brown sugar decreased ratings for taste, texture, and overall impression, as well as the emotions "active," "loving," "energetic," "healthy," and "refreshing." The use of coconut sugar decreased ratings for appearance, aroma, and taste, in addition to the feelings "refreshing" and "upset." The use of honey improved ratings for appearance and aroma but reduced the ratings for the emotions "active," "loving," "energetic," and "healthy." Kefir samples with higher sensory acceptance scores were associated with higher levels of the feelings "satisfied," "active," "comfortable," "energetic," "healthy," and "refreshing."

The results of the study suggest that demerara sugar or fructose should be used as a substitute for sucrose in the production of sheep milk kefir to increase consumption.

Professor Cruz added, "These findings are interesting, as they give useful information to sheep milk processors to establish different marketing strategies for each group of samples, serving as initial guidelines."

Credit: 
Elsevier

UMBC researchers identify where giant jets from black holes discharge their energy

image: In this artist's rendering courtesy of NASA, the remnants of a star torn apart by a black hole form a disk around the black hole's center, while jets eject from either side. The jets can travel at nearly the speed of light, and they discharge their high energy along the way. New research from UMBC in Nature Communications shows that the energy dissipation happens much farther away from the black hole's center than previously thought. The methods for the study, standard statistical techniques and minimal reliance on assumptions from any particular jet model, make the findings difficult to dispute. The results offer clues about jet formation and structure.

Image: 
NASA

The supermassive black holes at the centers of galaxies are the most massive objects in the universe. They range from about 1 million to upwards of 10 billion times the mass of the Sun. Some of these black holes also blast out gigantic, super-heated jets of plasma at nearly the speed of light. The primary way the jets discharge this enormous kinetic energy is by converting it into extremely high-energy gamma rays. However, UMBC physics Ph.D. candidate Adam Leah Harvey says, "How exactly this radiation is created is an open question."

The jet has to discharge its energy somewhere, and previous work doesn't agree where. The prime candidates are two regions made of gas and light that encircle black holes, called the broad-line region and the molecular torus.

A black hole's jet has the potential to convert visible and infrared light in either region to high-energy gamma rays by giving away some of its energy. Harvey's new NASA-funded research sheds light on this controversy by offering strong evidence that the jets mostly release energy in the molecular torus, and not in the broad-line region. The study was published in Nature Communications and co-authored by UMBC physicists Markos Georganopoulos and Eileen Meyer.

Far out

The broad-line region is closer to the center of a black hole, at a distance of about 0.3 light-years. The molecular torus is much farther out--more than 3 light-years. While all of these distances seem huge to a non-astronomer, the new work "tells us that we're getting energy dissipation far away from the black hole at the relevant scales," Harvey explains.

"The implications are extremely important for our understanding of jets launched by black holes," Harvey says. Which region primarily absorbs the jet's energy offers clues to how the jets initially form, pick up speed, and become column-shaped. For example, "It indicates that the jet is not accelerated enough at smaller scales to start to dissipate energy," Harvey says.

Other researchers have proposed contradictory ideas about the jets' structure and behavior. Because of the trusted methods Harvey used in their new work, however, they expect the results to be broadly accepted in the scientific community. "The results basically help to constrain those possibilities--those different models--of jet formation."

On solid footing

To come to their conclusions, Harvey applied a standard statistical technique called "bootstrapping" to data from 62 observations of black hole jets. "A lot of what came before this paper has been very model-dependent. Other papers have made a lot of very specific assumptions, whereas our method is extremely general," Harvey explains. "There isn't much to undermine the analysis. It's well-understood methods, and just using observational data. So the result should be correct."

A quantity called the seed factor was central to the analysis. The seed factor indicates where the light waves that the jet converts to gamma rays come from. If the conversion happens at the molecular torus, one seed factor is expected. If it happens at the broad-line region, the seed factor will be different.

Georganopoulos, associate professor of physics and one of Harvey's advisors, originally developed the seed factor concept, but "applying the idea of the seed factor had to wait for someone with a lot of perseverance, and this someone was Adam Leah," Georganopoulos says.

Harvey calculated the seed factors for all 62 observations. They found that the seed factors fell in a normal distribution centered almost perfectly on the expected value for the molecular torus. That result strongly suggests that the jet's energy is being converted into gamma rays in the molecular torus, and not in the broad-line region.
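The bootstrap step amounts to resampling the measured seed factors with replacement and checking where the resulting distribution of means centers. A minimal sketch on fabricated values (the study's actual 62 measurements are not reproduced here):

```python
import random
import statistics

# Hypothetical seed-factor values for 62 observations, fabricated purely
# to illustrate the bootstrap procedure described above.
random.seed(42)
observed = [random.gauss(3.3, 0.4) for _ in range(62)]

def bootstrap_mean_ci(data, n_resamples=10_000, alpha=0.05):
    """Resample with replacement to estimate a confidence interval for the mean."""
    n = len(data)
    means = sorted(
        statistics.fmean(random.choices(data, k=n)) for _ in range(n_resamples)
    )
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return statistics.fmean(data), (lo, hi)

mean, (lo, hi) = bootstrap_mean_ci(observed)
# If the expected molecular-torus seed factor lies inside the interval while
# the broad-line-region value falls outside it, the torus is favored.
print(f"mean = {mean:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```

Because bootstrapping makes no assumptions about a particular jet model, the resulting interval depends only on the observed data, which is the "model-independence" Harvey emphasizes.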

Tangents and searches

Harvey shares that the support of their mentors, Georganopoulos and Meyer, assistant professor of physics, was instrumental to the project's success. "I think that without them letting me go off on a lot of tangents and searches of how to do things, this would have never gotten to the level that it's at," Harvey says. "Because they allowed me to really dig into it, I was able to pull out a lot more from this project."

Harvey identifies as an "observational astronomer," but adds, "I'm really more of a data scientist and a statistician than I am a physicist." And the statistics has been the most exciting part of this work, they say.

"I just think it's really cool that I was able to figure out methods to create such a strong study of such a weird system that is so removed from my own personal reality." Harvey says. "It's going to be fun to see what people do with it."

Credit: 
University of Maryland Baltimore County

Engineers develop soft robotic gripper

Scientists often look to nature for cues when designing robots - some robots mimic human hands while others simulate the actions of octopus arms or inchworms. Now, researchers in the University of Georgia College of Engineering have designed a new soft robotic gripper that draws inspiration from an unusual source: pole beans.

While pole beans and other twining plants use their touch-sensitive shoots to wrap themselves around supports like ropes and rods to grow upward, the UGA team's robot is designed to firmly but gently grasp objects as small as 1 millimeter in diameter.

"We had tried different designs but we were not happy with the results, then I recalled the pole beans I grew in our garden a few years ago," said Mable Fok, an associate professor and the study's lead author. "This plant can hold onto other plants or rope so tightly. So, I did some research on twining plants and thought it was a good design from nature for us to explore."

In a new study published in the journal Optics Express, the researchers say their soft robotic spiral gripper offers several advantages over existing robotic devices.

"Our robot's twining action only requires a single pneumatic control, which greatly simplifies its operation by eliminating the need for complex coordination between multiple pneumatic controls," said Fok. "Since we use a unique twining motion, the soft robotic gripper works well in confined areas and needs only a small operational space."

The UGA device offers another advancement over many existing robotics: an embedded sensor to provide critical real-time feedback.

"We have embedded a fiber optic sensor in the middle of the robot's elastic spine that can sense the twining angle, the physical parameters of the target, and any external disturbances that might cause the target to come loose," said Fok.

The researchers believe their soft robotic gripper - a little more than 3 inches long and fashioned from silicone - could be useful in many settings, including agriculture, medicine and research. Applications might include selecting and packaging agricultural products that require a soft touch such as plants and flowers, surgical robotics, or selecting and holding research samples in fragile glass tubes during experiments.

In their study, the research team says the spiral gripper proved effective in gripping objects such as pencils and paintbrushes - even an item as small as the thin wire of a straightened paperclip. The device also demonstrated excellent repeatability, high twining sensing accuracy and precise external disturbance detection.

In addition to Fok, the research team includes Mei Yang and Ning Liu, both Ph.D. candidates in engineering; Liam Paul Cooper, an undergraduate studying computer systems engineering; and Xianqiao Wang, an associate professor in the College of Engineering.

The team plans to continue its work with an eye on improving the automatic feedback control based on the readings of the fiber optic sensor. They also want to explore miniaturizing the design to serve as the foundation of a biomedical robot.

"This twining soft robot with its embedded fiber optic sensor forms a building block for a more comprehensive soft robot. Having a simpler design and control is definitely an advantage," said Fok.

Credit: 
University of Georgia

Device mimics life's first steps in outer space

image: Abdellahi Sow uses the VENUS apparatus, which offers researchers insight into how life can form in space.

Image: 
Emanuele Congiu

WASHINGTON, December 15, 2020 -- A device developed by scientists at the CY Cergy Paris University and Paris Observatory promises insight into how the building blocks of life form in outer space.

In an article published in Review of Scientific Instruments, by AIP Publishing, the scientists detail how VENUS -- an acronym of the French phrase "Vers de Nouvelles Syntheses," which means "toward new syntheses" -- mimics how molecules come together in the freezing darkness of interstellar space.

"We try to simulate how complex organic molecules are formed in such a harsh environment," said Emanuele Congiu, one of the authors and an astrophysicist at the observatory. "Observatories can see a lot of molecules in space. What we do not understand yet, or fully, is how they formed in this harsh environment."

VENUS has a chamber designed to replicate the strong vacuum of space, while holding a frigid temperature that is set lower than minus 400 degrees Fahrenheit (10 kelvins). It uses up to five beams to deliver atoms or molecules onto a tiny sliver of ice without disturbing that environment.

That process, Congiu said, replicates how molecules form on the ice that sits atop tiny dust particles found inside interstellar clouds. VENUS is the first device to do the replication with more than three beams, which lets researchers simulate more complicated interactions.

Over the past 50 years, nearly 200 different molecular species have been discovered in the star-forming regions of space. Some of them, the so-called "prebiotic species," are believed by scientists to be involved in the processes that lead to the early forms of life.

A key use of the VENUS device will be working in concert with scientists who discover molecular reactions in space but need a fuller understanding of what they have observed. The article specifically mentions NASA's launch of the James Webb Space Telescope, which is scheduled for 2021. The largest and most powerful space telescope ever launched, it is expected to dramatically expand scientists' knowledge of the universe.

"What we can do in the lab in one day takes thousands of years in space," Congiu said. "Our work in the lab can complement the wealth of data that comes from the space observatories. Otherwise, astronomers would not be able to interpret all of their observations. Researchers who make observations can ask us to simulate a certain reaction, to see if what they think they see is real or not."

Credit: 
American Institute of Physics

HSS bone study sheds light on complications after spinal surgery

The microscopic structure of bone appears to predict which patients will experience poor outcomes after spinal fusion, according to a new study by researchers at Hospital for Special Surgery (HSS) in New York City.

Spinal fusion is among the most commonly performed orthopedic surgeries in the United States, with more than 400,000 patients undergoing the procedure each year. Although most cases are successful, as many as 45 percent of patients experience complications after the operation, often resulting from the bone's inability to tolerate the hardware surgeons use to support the skeleton.

The most widely used technology to evaluate patients' bones before spine fusion surgery is called dual x-ray absorptiometry, or DXA. DXA imaging gives physicians a rough sense of the strength of a person's bone, but it is not foolproof. In some cases, surgeons find that patients whose DXA scans appear normal have bones that are so weak that the appliances they use during spinal fusion are at risk of failure.

The HSS researchers hypothesized that a more sensitive measure of bone quality could identify abnormalities in the skeleton that DXA does not detect, and that these defects would be linked to postoperative complications.

For the new study, the team led by Emily Stein, MD, an endocrinologist and bone specialist, along with spine surgeons Han Jo Kim, MD, Matthew Cunningham, MD, PhD, and Frank Schwab, MD, turned to a cutting-edge technique for assessing bone called high-resolution peripheral quantitative computed tomography (HR-pQCT). HR-pQCT can separately measure how much bone is in the outer (cortical) and inner (trabecular) compartments and measure on a microscopic level how the inner trabecular bone network is organized, including the number, thickness and spacing of the parts of that network called trabeculae. These measurements--known broadly as the microarchitecture--may provide a much more robust assessment of skeletal health than DXA, particularly in this population of patients, who frequently have changes in the area of the fusion that undermine the utility of DXA.

"DXA provides a two-dimensional measure of bone density, or the amount of bone present, whereas HR-pQCT provides a true three-dimensional measurement of the bone density," Dr. Stein said. "This provides additional information about the structural features of bone that can be leading to weakness or fragility."

The study included 54 men and women scheduled for spinal fusion at HSS between December 2017 and December 2019. Patients underwent conventional DXA scans, as well as HR-pQCT scans of the radius (forearm) and leg bone (tibia).

Of the 54 patients in the study, 14 experienced complications within the first six months after surgery, including broken rods, loosened bone screws, fractures and abnormal bending of the spine. Although the number of people in the study was small, the researchers found that those with abnormalities on HR-pQCT were significantly more likely to experience complications than those without such defects -- abnormalities that were not evident on DXA imaging. These abnormalities involved lower bone mineral density in the trabeculae (the spongy tissue in the center of bones), fewer and thinner trabeculae, as well as thinner cortices.

"We used HR-pQCT to show for the first time that abnormalities in the microscopic structure of bone are directly related to the development of complications after fusion. Our study identified several abnormalities in patients who had complications," Dr. Stein said.

For the moment, HR-pQCT is a research tool that's not widely available to surgeons. "I think that it does provide a lot of additional information that we're not getting from DXA. On DXA, the spine is almost always going to look fine, which can be misleading," noted Dr. Stein.

Dr. Stein's group is expanding their study to include more patients. "Spinal fusion surgeries are so invasive, the potential for complications is high," she explained. "We want to have the most optimal strategies for lowering complications in our patients, and that begins with understanding who is at risk and why. In our future work with additional patients, we hope to be able to define which features, or group of features of the bone structure is most important in contributing to surgical success. This will help us to devise the most targeted treatment strategies for our patients."

Credit: 
Hospital for Special Surgery

Extreme political advertising can hurt campaign efforts

image: Retweet networks of the 2016 US presidential debates show the rise of echo chambers for Donald Trump (red) and Hillary Clinton (blue) that gradually absorb undecided voters (yellow) across the three debates.

Image: 
Courtesy Fu Lab/Dartmouth College

HANOVER, N.H. - December 14, 2020 - Aggressive political messaging can work against candidates by radicalizing supporters and alienating moderates, according to a Dartmouth study.

The research, published in Physical Review X, shows how messages conveyed through political advertising and media appearances can move voters into extreme social networks, making them less influential with undecided voters and others in the middle of the political spectrum.

"The common belief is that political advertising helps a candidate's efforts," said Feng Fu, an assistant professor of mathematics at Dartmouth and the senior researcher on the study. "This research finds that overly amplified exposure and super-strong positioning of a campaign can actually lessen the likelihood of winning the widespread support that is desired."

In the study, researchers used a computer model to simulate social media users to assess how political campaigns, personal beliefs and social relationships impact interactions between individuals.

When extreme political messaging was introduced, the researchers found that supporters moved toward more extreme opinions.

While political messaging might solidify the support of existing backers, aggressive communications had the negative effect of pulling supporters away from voters who occupy more moderate social spaces. Once they are networking within more extreme groups, those individuals are no longer able to persuade undecided voters to support a desired candidate.
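The dynamics described here can be illustrated with a minimal bounded-confidence opinion model. This is a deliberately simplified sketch, not the model from the Physical Review X paper; all parameters are invented for illustration:

```python
import random

def simulate(n=200, steps=2000, confidence=0.3, ad_strength=0.0, seed=1):
    """Bounded-confidence opinion dynamics with an optional extreme 'ad' pull.

    Agents hold opinions in [-1, 1]. Each step, two random agents average
    their opinions if they are within `confidence` of each other (selective
    exposure). Agents already leaning positive are also pulled toward the
    extreme (+1) with strength `ad_strength`, mimicking aggressive messaging
    aimed at one candidate's supporters.
    """
    random.seed(seed)
    opinions = [random.uniform(-1, 1) for _ in range(n)]
    for _ in range(steps):
        i, j = random.sample(range(n), 2)
        if abs(opinions[i] - opinions[j]) < confidence:
            mid = (opinions[i] + opinions[j]) / 2
            opinions[i] = opinions[j] = mid
        # Top-down extreme messaging radicalizes existing supporters.
        for k in (i, j):
            if opinions[k] > 0:
                opinions[k] = min(1.0, opinions[k] + ad_strength * (1 - opinions[k]))
    return opinions

def mean_supporter_opinion(opinions):
    supporters = [o for o in opinions if o > 0]
    return sum(supporters) / len(supporters)

calm = simulate(ad_strength=0.0)
aggressive = simulate(ad_strength=0.05)
# With aggressive messaging, supporters drift toward the extreme, widening
# the opinion gap between them and moderates or undecideds.
print(mean_supporter_opinion(calm), mean_supporter_opinion(aggressive))
```

Once the supporters' mean opinion moves beyond the `confidence` window of moderates, the averaging step can no longer connect the two groups, which is the paper's point about lost persuasive reach.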

"Political strategists need to consider how attack ads and other extreme messages might backfire," said Xin Wang, who served as lead author of the research paper as a visiting PhD student at Dartmouth, "existing supporters may become too radical for their relatively moderate, undecided friends."

In addition to showing the potential negative effect of top-down political advertising, the research also demonstrates the impact of bottom-up exchanges--such as social media sharing between individuals.

The study shows that when people are relatively open-minded with their politics, they can be pulled into echo chambers through everyday political discussion with people who have somewhat similar political opinions.

"In our model, opposing echo chambers only form when people are willing to have their minds changed," said Antonio Sirianni, a postdoctoral fellow at Dartmouth who co-led the study. "When individuals consider opinions similar to their own, but ignore substantially different opinions, the environment becomes ripe for polarization."

The new study builds on well-known concepts of how echo chambers are formed, including "confirmation bias," where people are more likely to accept claims that are consistent with their pre-existing beliefs, and "selective exposure," where people seek out individuals with similar beliefs.

The Dartmouth study differs from past research by demonstrating the impacts of external political campaigns and influencer messaging on political processes.

As part of the paper, the research team provides a visual example of polarization on Twitter during the 2016 U.S. presidential campaign to describe the research model.

Credit: 
Dartmouth College

Resistance training paired with peanut protein affects muscle health in older adults

image: This infographic reviews findings from a recent study that suggests when combined with resistance training, defatted peanut powder can be an effective plant-based protein option for positively affecting select markers of muscle growth and strength in untrained older adults. Functional tips for adding peanut powder to a balanced diet are provided and recipes made with peanut powder are displayed. To download a copy of this infographic, please visit: https://peanut-institute.com/peanut-products/peanut-flour/.

Image: 
© 2020 The Peanut Institute

Declines in muscle mass and strength can begin in early adulthood, unnoticeable at first, and eventually progress until functionality, endurance, and general health may be compromised. Evidence-based and cost-effective lifestyle interventions, such as resistance training (RT) and ensuring optimal dietary protein intake, aim to increase muscle mass in older individuals, and support healthy aging and longevity.

Now, as the popularity of and consumer demand for plant-based protein to support exercise training grow, the full array of essential and non-essential amino acids and the high protein digestibility of defatted peanut protein powder (PP) make it an exceptional plant-based protein option. Yet, no studies to date have examined whether PP combined with RT can enhance training adaptations and measures of muscle mass, function and strength, especially in an older population. For the first time, a randomized controlled clinical trial from researchers at Auburn University published in the Journal of the International Society of Sports Nutrition demonstrates that in combination with RT, intake of PP positively affects select markers of muscle growth and strength among untrained, older adults.

"Many of the previous studies in this space have looked at how animal-based or soy protein-based supplements enhance the response to resistance training," says Dr. Roberts, PhD, a co-principal investigator on the study from Auburn University in the School of Kinesiology. "This study suggests that pairing resistance training with supplemental peanut powder may be an effective plant-based protein solution to meet protein needs and perhaps slow or prevent age-related loss of muscle in older adults."

Thirty-nine older, untrained individuals completed a six-week or ten-week supervised RT program, where full-body training was implemented twice weekly. Participants were also randomly assigned to consume either a PP supplement mixed with 16 fl. oz. of water once per day (75 total grams of powder providing 30 grams protein, >9.2 grams of essential amino acids, ~315 calories) or be a "wait-list" control who did not receive any supplement (CTL). On workout days, PP supplements were provided immediately following exercise and compliance was monitored by trained study personnel. Skeletal muscle biopsies and other markers of muscle quality, body composition and strength, as well as three-day self-reported habitual food intake, were collected.

PP supplementation significantly increased knee flexion peak torque - a marker of muscle strength - in the ten-week cohort relative to the CTL group. In looking at the combined data from both the six- and ten-week groups, PP participants experienced significant increases in vastus lateralis (VL) thickness - a measure of muscle growth - compared to CTL participants. Notably, the consumption of protein and fiber significantly increased during the study in the PP group compared to CTL. This is attributed to the ~15 grams per day of fiber and 30 grams per day of protein received from the nutritional supplement. Surprisingly, PP supplementation after one bout of resistance exercise did not enhance muscle protein synthesis rates within a 24-hour period following the first training bout. Body composition was not different between the PP and CTL groups.

"There is strong evidence to suggest protein needs, specifically, the intake of more essential amino acids, increase with age due to many factors," added co-principal investigator, Drew Frugé, PhD, RD, with the Department of Nutrition, Dietetics and Hospitality Management at Auburn University. "The protein isolated from peanuts contains a full complement of essential amino acids, including the important muscle growth 'switch' leucine, that can be delivered in a nutrient-dense package with the functional benefit of being simply incorporated into many easy to consume and tasty food or beverage preparations that meet the dietary needs of older adults."

This study followed a rigorous methodology by using a randomized design in a laboratory setting and supervising both participant training and PP supplement compliance. However, the researchers noted a few limitations, mainly the shortened duration of the intervention for the second cohort. The original intent was to recruit two separate ten-week cohorts, but due to the SARS-CoV-2 pandemic the researchers decided to end the second cohort after only six weeks of training to protect the health and safety of the participants. The decision to compare PP supplementation to no supplementation was made to establish more "real world" relevance (i.e., people supplement their diets with protein powder, or nothing at all).

Despite such limitations, the researchers concluded that "...peanut protein powder supplementation with 6-10 weeks of resistance training enhance[s] certain aspects of muscle hypertrophy and strength in older adults, compared to a resistance training program alone in the elderly population." Future studies that are longer in duration are needed to definitively determine if PP supplementation can enhance hypertrophic adaptations with resistance training.

Credit: 
Ketchum New York

Accurate neural network computer vision without the 'black box'

image: New research offers clues to what goes on inside the minds of machines as they learn to see. A method developed by Cynthia Rudin's lab reveals how much a neural network calls to mind different concepts as an image travels through the network's layers.

Image: 
Courtesy of Zhi Chen, Duke University

DURHAM, N.C. -- The artificial intelligence behind self-driving cars, medical image analysis and other computer vision applications relies on what are called deep neural networks.

Loosely modeled on the brain, these consist of layers of interconnected "neurons" -- mathematical functions that send and receive information -- that "fire" in response to features of the input data. The first layer processes a raw data input -- such as pixels in an image -- and passes that information to the next layer above, triggering some of those neurons, which then pass a signal to even higher layers until eventually it arrives at a determination of what is in the input image.
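The layer-by-layer flow described above can be sketched in a few lines. This is an illustrative toy with random weights, not a trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # The nonlinearity that lets neurons "fire" (or stay silent) on a feature.
    return np.maximum(0, x)

def forward(pixels, layers):
    signal = pixels
    for W in layers[:-1]:
        signal = relu(W @ signal)   # each layer triggers some neurons above it
    return layers[-1] @ signal      # final scores, one per candidate label

layers = [rng.normal(size=(16, 64)),  # raw pixels -> low-level features
          rng.normal(size=(8, 16)),   # -> mid-level features
          rng.normal(size=(3, 8))]    # -> scores for 3 possible classes

image = rng.normal(size=64)           # stand-in for a flattened input image
scores = forward(image, layers)
print(scores.argmax())                # index of the highest-scoring class
```

A real network would have many more layers and learned rather than random weights, but the shape of the computation is the same: signals rise through the layers until a final determination emerges.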

But here's the problem, says Duke computer science professor Cynthia Rudin. "We can input, say, a medical image, and observe what comes out the other end ('this is a picture of a malignant lesion'), but it's hard to know what happened in between."

It's what's known as the "black box" problem. What happens in the mind of the machine -- the network's hidden layers -- is often inscrutable, even to the people who built it.

"The problem with deep learning models is they're so complex that we don't actually know what they're learning," said Zhi Chen, a Ph.D. student in Rudin's lab at Duke. "They can often leverage information we don't want them to. Their reasoning processes can be completely wrong."

Rudin, Chen and Duke undergraduate Yijie Bei have come up with a way to address this issue. By modifying the reasoning process behind the predictions, their method makes it possible for researchers to better troubleshoot the networks or understand whether they are trustworthy.

Most approaches attempt to uncover what led a computer vision system to the right answer after the fact, by pointing to the key features or pixels that identified an image: "The growth in this chest X-ray was classified as malignant because, to the model, these areas are critical in the classification of lung cancer." Such approaches don't reveal the network's reasoning, just where it was looking.

The Duke team tried a different tack. Instead of attempting to account for a network's decision-making on a post hoc basis, their method trains the network to show its work by expressing its understanding about concepts along the way. Their method works by revealing how much the network calls to mind different concepts to help decipher what it sees. "It disentangles how different concepts are represented within the layers of the network," Rudin said.

Given an image of a library, for example, the approach makes it possible to determine whether and how much the different layers of the neural network rely on their mental representation of "books" to identify the scene.

The researchers found that, with a small adjustment to a neural network, it is possible to identify objects and scenes in images just as accurately as the original network, and yet gain substantial interpretability in the network's reasoning process. "The technique is very simple to apply," Rudin said.

The method controls the way information flows through the network. It involves replacing one standard part of a neural network with a new part. The new part constrains only a single neuron in the network to fire in response to a particular concept that humans understand. The concepts could be categories of everyday objects, such as "book" or "bike." But they could also be general characteristics, such as "metal," "wood," "cold" or "warm." By having only one neuron control the information about one concept at a time, it is much easier to understand how the network "thinks."
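As a rough illustration of the one-neuron-per-concept idea, consider tying each neuron's weight vector to a prototype of one human-understandable concept. This is a simplified stand-in, not the Duke team's actual module, and the features and concept names here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake features from an earlier layer: 50 examples each of two concepts,
# living in an 8-dimensional feature space.
book_feats = rng.normal(loc=[2, 0, 0, 0, 0, 0, 0, 0], size=(50, 8))
bike_feats = rng.normal(loc=[0, 2, 0, 0, 0, 0, 0, 0], size=(50, 8))

def concept_layer(concept_examples):
    """One neuron per concept: its weights are the normalized mean feature
    vector of labeled examples of that concept."""
    W = np.stack([ex.mean(axis=0) for ex in concept_examples])
    return W / np.linalg.norm(W, axis=1, keepdims=True)

W = concept_layer([book_feats, bike_feats])   # neuron 0: "book", neuron 1: "bike"

def activations(x):
    # Each output is directly interpretable: how strongly the input
    # calls the corresponding concept to mind.
    return W @ x

act = activations(book_feats.mean(axis=0))    # a typical "book-like" input
# The "book" neuron fires more strongly than the "bike" neuron.
print(act)
```

Reading off a single neuron per concept is what makes the intermediate layers legible; in the actual paper the alignment is learned during training rather than set from prototype averages.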

The researchers tried their approach on a neural network trained on millions of labeled images to recognize various kinds of indoor and outdoor scenes, from classrooms and food courts to playgrounds and patios. Then they turned it on images it hadn't seen before. They also looked to see which concepts the network layers drew on the most as they processed the data.

Chen pulls up a plot showing what happened when they fed a picture of an orange sunset into the network. Their trained neural network says that warm colors in the sunset image, like orange, tend to be associated with the concept "bed" in earlier layers of the network. In short, the network activates the "bed neuron" highly in early layers. As the image travels through successive layers, the network gradually relies on a more sophisticated mental representation of each concept, and the "airplane" concept becomes more activated than the notion of beds, perhaps because "airplanes" are more often associated with skies and clouds.

It's only a small part of what's going on, to be sure. But from this trajectory the researchers are able to capture important aspects of the network's train of thought.

The researchers say their module can be wired into any neural network that recognizes images. In one experiment, they connected it to a neural network trained to detect skin cancer in photos.

Before an AI can learn to spot melanoma, it must learn what makes melanomas look different from normal moles and other benign spots on your skin, by sifting through thousands of training images labeled and marked up by skin cancer experts.

But the network appeared to be summoning up a concept of "irregular border" that it formed on its own, without help from the training labels. The people annotating the images for use in artificial intelligence applications hadn't made note of that feature, but the machine did.

"Our method revealed a shortcoming in the dataset," Rudin said. Perhaps if they had included this information in the data, it would have made it clearer whether the model was reasoning correctly. "This example just illustrates why we shouldn't put blind faith in "black box" models with no clue of what goes on inside them, especially for tricky medical diagnoses," Rudin said.

Credit: 
Duke University

St. Edward's University study finds a manly beard may help drive sales

Austin, Texas -- The next time you are considering purchasing a big-ticket item, it might be worth paying attention to the salesperson's facial hair.

The beard seems to be a subtle but consistent clue used in evaluating the knowledge and trustworthiness of the sales/service personnel you interact with. If the salesperson is sporting a beard, you may be more likely to pull out your wallet. And if you work in a sales or service role, you might consider the power of donning a beard before no-shave November rolls around.

Sarah Mittal, assistant professor of Marketing at St. Edward's University and the paper's lead researcher, and David H. Silvera, associate professor of Business at University of Texas at San Antonio, conducted five studies to test the "power of the beard," predicting that the beard would be an advantage in sales and service roles. The studies examined the beard's effect on perception of expertise, trustworthiness, likelihood of sales and service satisfaction. Their findings are published online in the Journal of Business Research in their article titled, "It Grows on You: Perceptions of sales/service personnel with facial hair."

In the competitive world of sales and service, expertise and trustworthiness are critical for relationship building and closing sales. The researchers found that regardless of the sales industry or context (in-person or online), or the salesperson's race or ethnicity, attractiveness or likability, potential buyers view bearded sales personnel as having greater expertise and trustworthiness than their clean-shaven, stubbled and mustached counterparts.

"Our research suggests that those in a sales or service role, where expertise and trust are crucial to converting sales, would be well-served to grow a beard. Your LinkedIn profile and marketing materials may even benefit from the subtle cue conveyed by donning a beard," Mittal said.

Of the five studies, one was a real-world study utilizing Facebook Ad Manager. Using the Facebook platform, the researchers deployed bearded and clean-shaven versions of an ad to examine their effectiveness for a real-world business. They found that the advertisement featuring the bearded version of the sales representative yielded a higher click-through rate (CTR), which places prospective customers in the sales pipeline. In fact, the bearded ad's CTR of 2.66% is considerably above industry averages of about 0.71% (industrial services) and 1.04% (technology).

While past research has focused on the benefit of beards in attracting potential mates (cue bearded Bumble profiles) and in the interview process, the researchers believe these studies are the first examination of the beard's effect in sales and service contexts. This effect is rooted in evolutionary psychology, which is one of many biologically informed approaches to the study of human behavior.

"Beards may go in and out of style in terms of their ability to increase physical attractiveness, but from an evolutionary perspective, they consistently serve as a cue to others about one's masculinity, maturity, resources, competence, leadership and status," Mittal said. "In sum, the ability to grow a healthy beard inherently signals 'immuno-competence,' and this has downstream effects on the way a bearded individual is evaluated in many facets of life."

Through their modeling, the researchers were able to rule out differences in perceived age, attractiveness and likability as alternative explanations for their results. They also controlled for the study subjects' own age, gender, income and ethnicity to ensure that consumer demographics did not influence the effects.

"The beard truly seems to send a consistent message about expertise in one's field -- a key driver in sales success. These effects also hold in a service context, where bearded individuals receive higher service satisfaction ratings," Silvera said.

The researchers believe their findings could influence policy and perceptions in the business world, where the benefits of the beard are largely under-appreciated, and that those working in such fields (with the ability to grow a beard) may nudge their performance success upwards with this simple change in appearance.

"Given these findings, corporate policies that currently ban facial hair may think twice; as other facial hair styles did not have a 'negative' effect on trust or expertise, there is only an upside to be gained from allowing individuals to don a well-kept beard," Mittal added.

Three Takeaways from "It Grows on You: Perceptions of sales/service personnel with facial hair"

Facial hair on male sales personnel drives increased perceptions of expertise, which then increases trust, purchase likelihood and satisfaction.

The beard-effect happens regardless of the salesperson's race or ethnicity, age, level of perceived attractiveness and likability.

The beard-effect occurs across sales industries and contexts (in-person and online).

Credit: 
St. Edward's University

Kernels of history

Earlier this year Douglas J. Kennett, a UC Santa Barbara professor of anthropology, demonstrated that maize, or corn, became a staple crop in the Americas 4,700 years ago. It turns out he was just beginning to tell the story of the world's biggest grain crop.

In a new paper in the Proceedings of the National Academy of Sciences, Kennett and his co-authors report that by analyzing the genomes of ancient maize they are able to fill in some of the gaps in the 9,000-year-old history of corn, which was first partially domesticated in southwestern Mexico and spread through Central and South America fairly rapidly.

The researchers sequenced the whole genomes of three roughly 2,000-year-old cobs from El Gigante rock shelter in Honduras. Analysis of the genomes yielded a surprise: the millennia-old varieties of Central American maize were more closely related to ancient and modern corn varieties that were improved in South America, revealing a flow of grain between the regions.

"I was most surprised that improved maize varieties developed by Indigenous South Americans were reintroduced northward into Central America," co-lead author Kennett said. "We could only know this through whole genome sequencing."

The genetic sleuthing revealed that the arrival of corn from South America may have played a role in the development of more productive varieties and greater consumption in Central America starting about 4,700 years ago.

"We show that humans were carrying maize from South America back towards the domestication center in Mexico," said Logan Kistler, curator of archaeogenomics and archaeobotany at the Smithsonian's National Museum of Natural History and co-lead author. "This would have provided an infusion of genetic diversity that may have added resilience or increased productivity. It also underscores that the process of domestication and crop improvement doesn't just travel in a straight line."

First partially domesticated from teosinte, a wild grass, maize has only reluctantly given up the secrets of its long development. Genetic research, Kennett said, has been challenging because of a scarcity of suitable cobs in an environment not kind to organic material. Researchers, however, caught a break in Honduras.

"Well-preserved maize is extremely rare in the Americas, but the El Gigante rock shelter has over 10,000 specimens to work with," he said. "Most of these fragmentary remains date later than 2,500 years ago, and locating earlier material in the assemblage was challenging and required directly radiocarbon dating large numbers of maize cobs.

"Using this approach," Kennett continued, "we did identify about 20 early cobs dating to between 4,300-4,000 years ago and we attempted to extract ancient DNA from all of these along with a set of cobs dating to between 2,300 and 1,900 years old. Using the best available techniques we were able to extract working genomes from only three cobs and all of these dated to between 2,300 and 1,900 years ago. This is the frustrating reality of ancient DNA work."

While difficult, the research on maize has entered into a new era of discovery, Kennett said, because whole genome sequencing is revolutionizing our understanding of the past.

"We can now know much more about the biochemical processes involved in the domestication process and the sequence of domestication alleles that became fixed during the domestication syndrome," he said. "It is also allowing us to track the spread of maize in much greater detail and how these changes altered the path of human history."

Looking ahead, Kennett said that while researchers now know that improved varieties of maize spread from South America into parts of Central America, they don't know when, precisely, it happened. Based on cob morphology, however, they've hypothesized it occurred by 4,300 years ago.

"Testing this hypothesis will require recovering earlier samples with well-preserved genetic material," he said. "We also don't know how far north these new varieties spread. This will require whole genome work on cobs of equivalent age farther north (e.g., Mexico). We also don't know if people migrated north carry these newly improved varieties or if seeds simply passed through preexisting exchange networks. This will require working collaboratively with Indigenous populations in Central America to sequence ancient genomes to determine if the appearance of new varieties parallels the appearance of new populations."

Credit: 
University of California - Santa Barbara

Telemedicine needed to diagnose and treat dysphagia in COVID-19 patients, doctors say

image: Dysphagia - swallowing difficulties - in patients with COVID-19 should be diagnosed and treated by telemedicine to lessen risk to health care professionals, say Johns Hopkins Medicine and Providence VA Medical Center physicians.

Image: 
Graphic created by M.E. Newman, Johns Hopkins Medicine, with public domain images.

COVID-19 and SARS-CoV-2, the virus behind the disease, have caused health care providers to change how they treat patients. Clinicians are now frequently using telemedicine to see their patients for routine checkups, saving office visits for emergencies. The same goes for rehabilitation. For example, researchers are looking for ways to improve the screening, assessment and treatment of patients with COVID-19 and dysphagia -- swallowing difficulties -- by doing it remotely.

Health care professionals whose work puts them in contact with the body areas where SARS-CoV-2 concentrates -- such as the nose, mouth and airway -- share a responsibility for engaging patients in a manner that won't add to the spread of COVID-19. Risks need to be weighed before screenings, assessments and treatments are undertaken.

Ideally, clinicians assess dysphagia through a clinical (bedside) evaluation and one of two standard tests: a videofluoroscopic swallow study or a flexible endoscopic evaluation of swallowing. These exams determine swallowing ability, look for changes in the anatomy and movements of the larynx and tongue, analyze airway vulnerability, and measure other characteristics related to swallowing physiology.

However, during the pandemic, clinicians diagnosing and treating dysphagia in COVID-19 patients are putting themselves at risk by using these up-close, hands-on techniques. And simply relying on methods such as medical history reviews and patient reporting of symptoms is not enough.

"The irony is that patients with COVID-19, especially those who were recently removed from mechanical ventilation in intensive care units, may be among those who most need the clinical and instrumental exams for properly and comprehensively assessing dysphagia," says Martin B. Brodsky, Ph.D., Sc.M., associate professor of physical medicine and rehabilitation at the Johns Hopkins University School of Medicine.

Therefore, in an editorial in the September 2020 issue of the Archives of Physical Medicine and Rehabilitation, Brodsky and colleague Richard Gilbert, M.D., with the Laboratory for Biological Architecture at the Providence VA Medical Center, say it's time to embrace telemedicine for dysphagia.

"To make that happen, there needs to be continued engagement by clinicians with third-party payers -- including insurance companies, and state and federal government programs such as Medicare -- to get support, acceptance and financial coverage for the use of telemedicine in this way," he explains.

Treating dysphagia remotely is not new, having been researched and practiced (to a smaller extent) for nearly 20 years. However, more widespread use had previously been hampered by technological difficulties, high expense of equipment, lack of standardized training, and billing and coverage issues.

That all changed, Brodsky says, with the arrival of COVID-19.

"Although vast improvements in telemedicine for dysphagia have been made in recent years, patients continue to be limited in their ability to receive effective remote care," Brodsky says. "With the current pandemic, we need that to change because the traditional clinical and instrumental exams used for assessing dysphagia are putting health care workers who treat patients with COVID-19 at risk for contracting and further spreading the disease."

"We need innovative thinking and technologies to be rapidly translated into clinical practice to enable telemedicine services for dysphagia -- now, more than ever," adds Brodsky.

Credit: 
Johns Hopkins Medicine

Johns Hopkins Medicine expert weighs devastating impact of COVID-19 on health care workers

image: A recent Johns Hopkins Medicine study uses a computer model to predict the number of COVID-19 infections among health care workers in four different scenarios based on data from early in the pandemic.

Image: 
Graphic created by M.E. Newman, Johns Hopkins Medicine, from data in Razzak et al, PLOS ONE 15(12): e0242589

During the COVID-19 pandemic, health care workers have been at the forefront of the battle against the life-threatening illness. Sadly, they are not immune to the effects of the disease. Many have contracted COVID-19, and some have died.

In a paper published Dec. 4, 2020, in the journal PLOS One, Junaid Razzak, M.B.B.S., Ph.D., director of the Johns Hopkins Center for Global Emergency Care, and his colleagues estimated the impacts of COVID-19 on the U.S. health care community based on observed numbers of health care worker infections during the early phase of the pandemic in Hubei, China, and Italy, areas that experienced peaks in COVID-19 cases before the United States.

"We looked at what was known from other countries earlier in the pandemic and modeled the impact on the health of the front-line health care workers in this country to see what gains were possible here if known interventions were applied," says Razzak, who is a professor of emergency medicine at the Johns Hopkins University School of Medicine.

Using a Monte Carlo risk analysis model and data from China and Italy, Razzak and his team estimate that between 53,000 and 54,000 U.S. hospital workers could become infected with COVID-19 during the course of the pandemic. The team projects the number of U.S. hospital worker deaths for the same period at approximately 1,600.

The estimates by the researchers also suggest that if health care workers considered high risk -- including those over age 60 -- wore appropriate personal protective equipment -- such as gowns and face masks -- the number of infections would decrease to about 28,000 and the number of deaths to between 700 and 1,000. If hospital workers age 60 and over are restricted from direct patient care, then the predicted numbers could drop to 2,000 infected and 60 deaths.
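A Monte Carlo risk analysis of this kind repeatedly draws uncertain inputs from plausible ranges and tallies the resulting outcomes. The toy version below is only in the same spirit as the study; every parameter range is an illustrative assumption chosen to match the reported scale, not an input from the paper:

```python
import random

def monte_carlo(trials=2000, seed=42):
    """Toy Monte Carlo risk simulation (illustrative parameters only).

    Each trial draws an infection (attack) rate and a case fatality rate
    from assumed uniform ranges, then tallies expected infections and
    deaths across an assumed U.S. hospital workforce.
    """
    random.seed(seed)
    n_workers = 6_000_000                           # assumed workforce size
    infections, deaths = [], []
    for _ in range(trials):
        attack_rate = random.uniform(0.8, 1.0) / 100  # % infected (assumed)
        cfr = random.uniform(2.5, 3.5) / 100          # case fatality (assumed)
        inf = n_workers * attack_rate
        infections.append(inf)
        deaths.append(inf * cfr)
    mean = lambda xs: sum(xs) / len(xs)
    return mean(infections), mean(deaths)

inf, dead = monte_carlo()
print(f"expected infections ~ {inf:,.0f}, expected deaths ~ {dead:,.0f}")
```

Scenario analyses like those in the paper (universal PPE, restricting workers over 60) amount to re-running the simulation with the input ranges shifted downward and comparing the resulting outcome distributions.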

Razzak and his team say that since current COVID-19 mortality among U.S. health care workers has already surpassed their original estimates, the findings underscore that this group bears -- and will continue to bear -- a significant burden of illness due to COVID-19.

"Our analysis shows that continuous widespread and proper use of personal protective equipment, along with limiting exposure for hospital workers over the age of 60, are necessary measures for this country to take now," Razzak says. "These efforts will save the lives of health care workers who will then be able to save the lives of others infected with COVID-19."

Credit: 
Johns Hopkins Medicine