
Downward head tilt can make people seem more dominant

image: Stimuli used in Study 1 (top row) and Study 2 (middle and bottom rows). From left to right, the poses illustrate downward head tilts, neutral head angles, and upward head tilts. In all images, targets posed with neutral facial expressions (i.e., no facial-muscle movement).

Image: 
Psychological Science

We often look to people's faces for signs of what they're thinking or feeling, trying to gauge whether their eyes are narrowed or widened, whether the mouth is turned up or down. But findings published in the June 2019 issue of Psychological Science, a journal of the Association for Psychological Science, show that facial features aren't the only source of this information--we also draw social inferences from the head itself.

"We show that tilting one's head downward systematically changes the way the face is perceived, such that a neutral face--a face with no muscle movement or facial expression--appears to be more dominant when the head is tilted down," explain researchers Zachary Witkower and Jessica Tracy of the University of British Columbia. "This effect is caused by the fact that tilting one's head downward leads to the artificial appearance of lowered and V-shaped eyebrows--which in turn elicit perceptions of aggression, intimidation, and dominance."

"These findings suggest that 'neutral' faces can still be quite communicative," Witkower and Tracy add. "Subtle shifts of the head can have profound effects on social perception, partly because they can have large effects on the appearance of the face."

Although researchers have investigated how facial muscle movements, in the form of facial expressions, correlate with social impressions, few studies have specifically examined how head movements might play a role. Witkower and Tracy designed a series of studies to investigate whether the angle of head position might influence social perception, even when facial features remain neutral.

In one online study with 101 participants, the researchers generated variations of avatars with neutral facial expressions and one of three head positions: tilted upward 10 degrees, neutral (0 degrees), or tilted downward 10 degrees.

The participants judged the dominance of each avatar image, rating their agreement with statements including "This person would enjoy having control over others" and "This person would be willing to use aggressive tactics to get their way."

The results showed that participants rated the avatars with downward head tilt as more dominant than those with neutral or upward-tilted heads.

A second online study, in which 570 participants rated images of actual people, showed the same pattern of results.

Additional findings revealed that the portion of the face around the eyes and eyebrows is both necessary and sufficient to produce the dominance effect. That is, participants rated downward-tilted heads as more dominant even when they could only see the eyes and eyebrows; this was not true when the rest of the face was visible and the eyes and eyebrows were obscured.

Two more experiments indicated that the angle of the eyebrows drove this effect--downward-tilted heads had eyebrows that appeared to take more of a V shape, even though the eyebrows had not moved from a neutral position, and this was associated with perceptions of dominance.

"In other words, tilting the head downward can have the same effect on social perceptions as does lowering one's eyebrows--a movement made by the corrugator muscle, known as Action Unit 4 in the Facial Action Coding System--but without any actual facial movement," say Witkower and Tracy. "Head tilt is thus an 'action unit imposter' in that it creates the illusory appearance of a facial muscle movement where none in fact exists."

Given these intriguing results, the researchers are continuing to investigate the influence of head tilt on social perception, exploring whether the effects might extend beyond perceptions of dominance to how we interpret facial expressions of emotion.

Ultimately, Witkower and Tracy note, these findings could have practical implications for our everyday social interactions:

"People often display certain movements or expressions during their everyday interactions, such as a friendly smile or wave, as a way to communicate information. Our research suggests that we may also want to consider how we hold their head during these interactions, as subtle head movements can dramatically change the meaning of otherwise innocuous facial expressions."

Credit: 
Association for Psychological Science

Early-season hurricanes result in greater transmission of mosquito-borne infectious disease

image: Researchers from Georgia State and Arizona State University developed a mathematical model to study the impact of heavy rainfall events (HREs).

Image: 
Georgia State University

The timing of a hurricane is one of the primary factors influencing its impact on the spread of mosquito-borne infectious diseases such as West Nile Virus, dengue, chikungunya and Zika, according to a study led by Georgia State University.

Researchers from Georgia State and Arizona State University developed a mathematical model to study the impact of heavy rainfall events (HREs) such as hurricanes on the transmission of vector-borne infectious diseases in temperate areas of the world, including the southern coastal U.S. In the aftermath of this type of extreme weather event, the mosquito population often booms in the presence of stagnant water. At the same time, the breakdown of public and private health infrastructure can put people at increased risk of infection. The study, which was published in Philosophical Transactions of the Royal Society B, found that the risk of a disease outbreak is highest if the HRE occurs early in the transmission season, or the period of time when mosquitos are able to pass on the virus to humans.

According to the study, an HRE that occurs on July 1 results in 70 percent fewer disease cases compared to an HRE that occurs on June 1.
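The direction of that timing effect can be illustrated with a back-of-the-envelope sketch (this is not the authors' published mechanistic model, and all parameter values below are illustrative assumptions): an HRE triggers a mosquito boom that decays with the roughly two-week adult lifespan, and cumulative transmission risk is that boom weighted by a temperature factor that tapers toward the end of the transmission season. An earlier HRE overlaps more of the warm, transmission-efficient part of the season.

```python
import numpy as np

def relative_risk(hre_day, season_end=180, lifespan=12.0, dt=0.5):
    """Relative transmission risk of an HRE on a given day (sketch).

    Back-of-the-envelope assumption, not the published model: risk is
    the mosquito boom (decaying with adult lifespan) integrated against
    a temperature factor that falls to zero at the season's end.
    """
    t = np.arange(hre_day, season_end, dt)
    mosquitoes = np.exp(-(t - hre_day) / lifespan)    # boom decays over ~2-week lifespan
    warmth = np.clip(1.0 - t / season_end, 0.0, 1.0)  # cooler weather, slower transmission
    return float(np.sum(mosquitoes * warmth) * dt)

june = relative_risk(hre_day=1)    # early-season HRE
july = relative_risk(hre_day=31)   # same HRE, one month later
```

The exact 70 percent figure comes from the authors' full model; this sketch only reproduces the qualitative ranking (earlier HREs carry more risk).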

"Mosquitos are very sensitive to temperature not only in terms of their ability to survive and reproduce, but also in their ability to infect individuals," said Gerardo Chowell, professor of mathematical epidemiology in the School of Public Health and lead author of the study. "The warmer it is, the faster an infected mosquito will be able to transmit the virus. Considering that mosquitos have an average lifespan of less than two weeks, that temperature difference can have a dramatic effect on disease outbreaks."

Population displacement can also affect the spread of vector-borne disease in a few ways, the researchers found. When people opt to leave the area, it reduces the number of local infections, while potentially increasing the number of infections elsewhere. However, those individuals who are not displaced during an HRE may be at higher risk because standard measures to combat mosquito breeding (such as removing pools of stagnant water) are neglected when fewer people remain in the area. And as people move into a disaster area to offer emergency relief -- or when they return after the event -- the number of local infections rises.

"Since mosquito-borne diseases tend to be spread by the movement of people rather than the movement of mosquitoes, disaster-induced movements of people can shift where and when outbreaks occur," said Charles Perrings, professor in the School of Life Sciences at Arizona State University and a co-author of the study.

Chowell notes that as HREs become more frequent in the southern U.S. and other tropical areas there's a need to develop further quantitative tools to assess how these disasters can affect the risk of disease transmission.

"Our team will now focus on improving methods to quantify the number of people that actually leave during a hurricane, how quickly they leave and when they return," he says. "We are also looking at additional hurricanes to study the impact of different displacement patterns."

Credit: 
Georgia State University

The formative years: Giant planets vs. brown dwarfs

video: Animation showing the 617 observations conducted during GPIES from November 2014 to April 2019 (right) and the location of the stars in the southern sky (left). Open circles indicate systems (like 51 Eri at 2 o'clock) that have been visited multiple times. Stars indicated by a red dot have a disk of material. Blue dots are planetary systems (with at least one planet). Brown dots are binary systems with a brown dwarf.

Image: 
P. Kalas, D. Savransky, R. De Rosa and GPIES.

Based on preliminary results from a new Gemini Observatory survey of 531 stars with the Gemini Planet Imager (GPI), it appears more and more likely that large planets and brown dwarfs have very different roots.

The GPI Exoplanet Survey (GPIES), one of the largest and most sensitive direct imaging exoplanet surveys to date, is still ongoing at the Gemini South telescope in Chile. "From our analysis of the first 300 stars observed, we are already seeing strong trends," said Eric L. Nielsen of Stanford University, who is the lead author of the study, published in The Astronomical Journal.

In November 2014, GPI Principal Investigator Bruce Macintosh of Stanford University and his international team set out to observe almost 600 young nearby stars with the newly commissioned instrument. GPI was funded with support from the Gemini Observatory partnership, with the largest portion from the US National Science Foundation (NSF). The NSF, and the Canadian National Research Council (NRC; also a Gemini partner), funded researchers participating in GPIES.

Imaging a planet around another star is a difficult technical challenge possible with only a few instruments. Exoplanets are small, faint, and very close to their host star -- distinguishing an orbiting planet from its star is like resolving the width of a dime from several miles away. Even the brightest planets are ten thousand times fainter than their parent star. GPI can see planets up to a million times fainter, much more sensitive than previous planet-imaging instruments. "GPI is a great tool for studying planets, and the Gemini Observatory gave us time to do a careful, systematic survey," said Macintosh.
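These brightness ratios translate into the magnitude scale astronomers use via the standard relation delta_m = 2.5 log10(contrast); a quick check of the figures quoted above (the calculation is illustrative, not from the paper):

```python
import math

def contrast_to_delta_mag(contrast):
    """Convert a brightness ratio to a magnitude difference
    (standard astronomical definition: 2.5 * log10 of the ratio)."""
    return 2.5 * math.log10(contrast)

delta_bright = contrast_to_delta_mag(1e4)  # "ten thousand times fainter" -> 10 mag
delta_gpi = contrast_to_delta_mag(1e6)     # GPI's million-to-one contrast -> 15 mag
```

So GPI's gain of a factor of 100 in contrast corresponds to reaching companions 5 magnitudes fainter than previous planet imagers could.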

GPIES is now coming to an end. From the first 300 stars, GPIES has detected six giant planets and three brown dwarfs. "This analysis of the first 300 stars observed by GPIES represents the largest, most sensitive direct imaging survey for giant planets published to date," added Macintosh.

Brown dwarfs are more massive than planets, but not massive enough to fuse hydrogen like stars. "Our analysis of this Gemini survey suggests that wide-separation giant planets may have formed differently from their brown dwarf cousins," Nielsen said.

The team's paper advances the idea that massive planets form due to the slow accumulation of material surrounding a young star, while brown dwarfs come about due to rapid gravitational collapse. "It's a bit like the difference between a gentle light rain and a thunderstorm," said Macintosh.

"With six detected planets and three detected brown dwarfs from our survey, along with unprecedented sensitivity to planets a few times the mass of Jupiter at orbital distances well beyond Jupiter's, we can now answer some key questions, especially about where and how these objects form," Nielsen said.

This discovery may answer a longstanding question as to whether brown dwarfs -- intermediate-mass objects -- are born more like stars or planets. Stars form from the top down by the gravitational collapse of large primordial clouds of gas and dust, while planets are thought -- but have not been confirmed -- to form from the bottom up by the assembly of small rocky bodies that then grow into larger ones, a process also termed "core accretion."

"What the GPIES team's analysis shows is that the properties of brown dwarfs and giant planets run completely counter to each other," said Eugene Chiang, professor of astronomy at the University of California Berkeley and a co-author of the paper. "Whereas more massive brown dwarfs outnumber less massive brown dwarfs, for giant planets the trend is reversed: less massive planets outnumber more massive ones. Moreover, brown dwarfs tend to be found far from their host stars, while giant planets concentrate closer in. These opposing trends point to brown dwarfs forming top-down, and giant planets forming bottom-up."

More Surprises

Of the 300 stars surveyed thus far, 123 are at least 1.5 times more massive than our Sun. One of the most striking results of the GPI survey is that all hosts of detected planets are among these higher-mass stars -- even though it is easier to see a giant planet orbiting a fainter, more Sun-like star. Astronomers have suspected this relationship for years, but the GPIES survey has unambiguously confirmed it. This finding also supports the bottom-up formation scenario for planets.

One of the study's greatest surprises has been how different other planetary systems are from our own. Our Solar System has small rocky planets in the inner parts and giant gas planets in the outer parts. But the very first exoplanets discovered reversed this trend, with giant planets skimming closer to their stars than does moon-sized Mercury. Furthermore, radial-velocity studies -- which rely on the fact that a star experiences a gravitationally induced "wobble" when it is orbited by a planet -- have shown that the number of giant planets increases with distance from the star out to about Jupiter's orbit. But the GPIES team's preliminary results, which probe still larger distances, have shown that giant planets become less numerous farther out.

"The region in the middle could be where you're most likely to find planets larger than Jupiter around other stars," added Nielsen, "which is very interesting since this is where we see Jupiter and Saturn in our own Solar System." In this regard, the location of Jupiter in our own Solar System may fit the overall exoplanet trend.

But a surprise from all exoplanet surveys is how intrinsically rare giant planets seem to be around Sun-like stars, and how different other solar systems are. The Kepler mission discovered far more small and close-in planets -- two or more "super-Earth" planets per Sun-like star, densely packed into inner solar systems much more crowded than our own. Extrapolation of simple models suggested GPI would find a dozen giant planets or more, but it only saw six. Putting it all together, giant planets may be present around only a minority of stars like our own.

In January 2019, GPIES observed its 531st, and final, new star, and the team is currently following up the remaining candidates to determine which are truly planets and which are distant background stars impersonating giant planets.

The next-generation telescopes -- such as NASA's James Webb Space Telescope and WFIRST mission, the Giant Magellan Telescope, Thirty Meter Telescope, and Extremely Large Telescope -- should be able to push the boundaries of study, imaging planets much closer to their star and overlapping with other techniques, producing a full accounting of giant planet and brown dwarf populations from 1 to 1,000 AU.

"Further observations of additional higher mass stars can test whether this trend is real," said Macintosh, "especially as our survey is limited by the number of bright, young nearby stars available for study by direct imagers like GPI."

Background:

GPI is specifically designed to search for planets and brown dwarfs around other stars, using a mask known as a coronagraph to partially block a star's light. Together with adaptive optics correcting for turbulence in the Earth's atmosphere and advanced image processing, researchers can search the star's neighborhood for Jupiter-like exoplanets and brown dwarfs up to a million times fainter than the host star.

In our Solar System, Jupiter is the largest planet, being about 318 times as massive as the Earth and lying about five times farther from the Sun than does the Earth. Brown dwarfs range from 13 to 90 times the mass of Jupiter, and while they can be up to a tenth the mass of the Sun, they lack the nuclear fusion in their core to burn as a star -- so they lie somewhere between a diminutive star and a super-planet.

An early success of GPIES was the discovery of 51 Eridani b in December 2014, a planet about two-and-a-half times more massive than Jupiter that orbits its star beyond the distance at which Saturn orbits our own Sun. The host star, 51 Eridani, is just 97 light-years away and only 26 million years old (nearby and young, by astronomy standards). The star had been observed by multiple planet-imaging surveys with a variety of telescopes and instruments, but its planet was not detected until GPI's superior instrumentation was able to suppress the starlight enough for the planet to be visible.

GPIES also discovered the brown dwarf HR 2562 B, which is at a separation similar to that between the Sun and Uranus, and is 30 times more massive than Jupiter.

Most exoplanets discovered thus far, including those found by NASA's Kepler spacecraft, are found via indirect methods, such as observing a dimming in the star's light as the orbiting planet eclipses its parent star, or by observing the star's wobble as the planet's gravity tugs on the star. These methods have been very successful, but they only probe the central regions of planetary systems. Those regions outside the orbit of Jupiter, where the giant planets are in our Solar System, are usually out of their reach. GPI, however, endeavors to directly detect planets in this parameter space by taking a picture of them alongside their parent stars.

The Gemini results support those from these other techniques, including a recent study of exoplanets discovered by the radial velocity method that found the most likely separation for a giant planet around Sun-like stars is about 3 AU. The finding that brown dwarfs occur with a frequency of only about 1%, independent of stellar mass, is also consistent with previous results from direct imaging surveys.

Credit: 
Association of Universities for Research in Astronomy (AURA)

Salmonella resistant to antibiotics of last resort found in US

Researchers from North Carolina State University have found a gene that gives Salmonella resistance to antibiotics of last resort in a sample taken from a human patient in the U.S. The find is the first evidence that the gene mcr-3.1 has made its way into the U.S. from Asia.

There are more than 2,500 known serotypes of Salmonella. In the U.S., Salmonella enterica 4,[5],12:i:- ST34 is responsible for a significant percentage of human illnesses. The drug resistance gene in question - known as mcr-3.1 - gives Salmonella resistance to colistin, the drug of last resort for treating infections caused by multidrug-resistant Salmonella.

"Public health officials have known about this gene for some time," says Siddhartha Thakur, professor and director of global health at NC State and corresponding author of the research. "In 2015, they saw that mcr-3.1 had moved from a chromosome to a plasmid in China, which paves the way for the gene to be transmitted between organisms. For example, E. coli and Salmonella are in the same family, so once the gene is on a plasmid, that plasmid could move between the bacteria and they could transmit this gene to each other. Once mcr-3.1 jumped to the plasmid, it spread to 30 different countries, although not - as far as we knew - to the U.S."

Thakur's lab is one of several nationally participating in epidemiological surveillance for resistant strains of Salmonella. The lab generates whole genome sequences from Salmonella samples every year as part of routine monitoring for the presence of antimicrobial-resistant bacteria. When veterinary medicine student Valerie Nelson and Ph.D. student Daniel Monte did genome sequencing on 100 clinical human stool samples taken from the southeastern U.S. between 2014 and 2016, they discovered that one sample contained the resistant mcr-3.1 gene. The sample came from a person who had traveled to China two weeks prior to becoming ill with a Salmonella infection.

"This project proved the importance of ongoing sequencing and surveillance," says Nelson. "The original project did not involve this gene at all."

"The positive sample was from 2014, so this discovery definitely has implications for the spread of colistin-resistant Salmonella in the U.S.," Thakur says. "Our lab will continue to try and fill in these knowledge gaps."

The research appears in the Journal of Medical Microbiology and was supported by the National Institutes of Health/Food and Drug Administration (award number 5U 18FD006194-02). Monte and Nelson are first author and co-author, respectively. Prior to his global health role, Thakur was associate director of the emerging infectious diseases program at NC State's Comparative Medicine Institute.

Credit: 
North Carolina State University

New model more accurately predicts choices in classic decision-making task

image: Which door will you choose? New model helps predict what choices people make in the Iowa Gambling Task by focusing on the 'exploratory strategies' they use.

Image: 
dil/unsplash.

A new mathematical model that predicts which choices people will make in the Iowa Gambling Task, a task used for the past 25 years to study decision-making, outperforms previously developed models. Romain Ligneul of the Champalimaud Center for the Unknown in Portugal presents this research in PLOS Computational Biology.

The Iowa Gambling Task presents a subject with four virtual card decks, each containing a different mix of cards that can win or lose fake money. Without being told which decks are more valuable, the subject then picks cards from the decks as they please. Most healthy people gradually learn which decks are more valuable and choose to pick cards only from those decks.

Earlier studies have used Iowa Gambling Task data to build mathematical models that can predict people's card-picking choices. However, building such models is computationally challenging, and previously developed models do not account for the exploratory strategies people use in the task.

In reviewing previously collected data from 500 subjects, Ligneul found that healthy people tend to cycle through the four decks and pick one card from each, especially at the beginning of the task. He then incorporated this behavior, termed sequential exploration, into a new mathematical model that also accounts for the well-known reward-maximizing behaviors people exhibit in the task.
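The two ingredients the article describes, a reward-maximizing value update plus a decaying tendency to sweep on to the next deck, can be sketched as follows. This is a minimal illustration with assumed parameter values, not Ligneul's fitted model:

```python
import numpy as np

def choice_probs(values, last_deck, trial, beta=2.0, phi=3.0, decay=0.03):
    """Softmax over learned deck values plus a sequential-exploration
    bonus for the deck after the one last picked; the bonus decays over
    trials, so sweeps through the decks dominate early play.
    All parameter values here are illustrative assumptions."""
    bonus = np.zeros(4)
    if last_deck is not None:
        bonus[(last_deck + 1) % 4] = phi * np.exp(-decay * trial)
    logits = beta * np.asarray(values, float) + bonus
    p = np.exp(logits - logits.max())  # numerically stable softmax
    return p / p.sum()

def update(values, deck, reward, lr=0.2):
    """Delta-rule (reward-maximizing) value update for the chosen deck."""
    values = list(values)
    values[deck] += lr * (reward - values[deck])
    return values

values = [0.6, -0.5, 0.1, -0.5]  # suppose deck 0 has the best learned value
p_early = choice_probs(values, last_deck=1, trial=0)   # bonus dominates: deck 2 next
p_late = choice_probs(values, last_deck=1, trial=90)   # bonus has decayed: deck 0 wins
```

Early in the task the model predicts continuing the sweep (deck 2 after deck 1) even when another deck is known to be better; late in the task the learned values take over, matching the observed decline of sequential exploration.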

Ligneul found that his new model outperforms earlier models in predicting people's card-picking choices. He also found that sequential exploration behaviors seem to decline as subjects get older, perhaps because of neurological changes typically associated with aging.

"This study provides a mathematical method to disentangle our drive to explore the environment and our drive to exploit it," Ligneul says. "It appears that the balance of these two drives evolves with aging."

The new model and findings could help refine insights gleaned from the Iowa Gambling Task. They could also improve understanding of learning and decision-making disruptions that are associated with aging and various neuropsychiatric conditions, such as addiction, impulsive disorders, brain injury, and more.

Credit: 
PLOS

Encouraging critically necessary blood donation among minorities

image: Better community education and communication are critical for increasing levels of blood donation among minorities, according to a study by researchers at Georgia State University and Georgia Southern University.

Image: 
Georgia State University

Better community education and communication are critical for increasing levels of blood donation among minorities, according to a study by researchers at Georgia State University and Georgia Southern University.

Nursing associate professor Regena Spratling in the Byrdine F. Lewis College of Nursing and Health Professions at Georgia State and her colleagues in the Georgia Southern University School of Public Health conducted the first systematic literature review of research on barriers to and facilitators of blood donation among minorities.

The research found that medical mistrust is a significant barrier to blood donation among minorities. More significant for healthcare providers is the lack of explanation given to minority donors when they are turned away as donors. For example, potential donors found to have low hemoglobin may believe that this permanently bans them from giving blood, when in fact they may be eligible later if they eat a healthy diet and drink plenty of fluids. Better education by healthcare providers working with these donors can reduce this barrier, researchers said.

Knowing a blood transfusion recipient made minorities more likely to donate, the researchers found. In many minority communities, donating blood for a friend, family, church or community member is positively viewed. Cultural or community ties are linked closely to blood donation. Giving blood to benefit one's community was a primary motivator.

A higher prevalence of blood-based, hereditary diseases, such as sickle cell and thalassemia, is found among minorities. These diseases increase the need for blood products in minority populations. Blood from donors with similar backgrounds reduces the likelihood of severe transfusion complications. These subtle similarities go deeper into blood background than blood types A, B, AB and O and positive and negative Rh factor.

The researchers reviewed nearly four dozen articles in peer-reviewed journals on blood donation with corresponding data on donors. Half of the articles appeared in publications focused on blood transfusion. The remainder were in related journals. Very few articles in nursing or broader healthcare journals focused on blood donations in specific race and ethnic populations. The researchers found the lack of widespread discussion of low minority blood donation was a primary barrier to solving the problem.

Credit: 
Georgia State University

Using prevalent technologies and 'Internet of Things' data for atmospheric science

image: Wireless communication links, social networks and smartphones as examples of data-generating sources that can be harnessed for environmental monitoring.

Image: 
Noam David

The use of prevalent technologies and crowdsourced data may benefit weather forecasting and atmospheric research, according to a new paper authored by Dr. Noam David, a Visiting Scientist at the Laboratory of Associate Professor Yoshihide Sekimoto at the Institute of Industrial Science, The University of Tokyo, Japan. The paper, published in Advances in Atmospheric Sciences, reviews a number of research works on the subject and points to the potential of this innovative approach.

Specialized instruments for environmental monitoring are often limited by technical and practical constraints. Existing technologies, including remote sensing systems and ground-level tools, may suffer from obstacles such as low spatial representativity (in situ sensors, for example) or a lack of accuracy when measuring near the Earth's surface (satellites). These constraints often limit the ability to carry out representative observations and, as a result, the capacity to deepen our existing understanding of atmospheric processes.

Multi-system and IoT (Internet of Things) technologies have become increasingly distributed as they are embedded into our environment. As they become more widely deployed, these technologies generate unprecedented data volumes with immense coverage, immediacy and availability. As a result, a growing opportunity is emerging to complement state-of-the-art monitoring techniques with the large streams of data they produce. Notably, these resources were originally designed for purposes other than environmental monitoring and are naturally not as precise as dedicated sensors. They should therefore be treated as complementary tools, not as substitutes. However, in the many cases where dedicated instruments are not deployed in the field, these newly available 'environmental sensors' can provide information that is often invaluable.

Smartphones, for example, contain weather-sensitive sensors, and recent works indicate that the data collected by these devices on a multisource basis can be used to monitor atmospheric pressure and temperature. Data shared openly on social networks can provide vital environmental information reported by thousands of 'human observers' directly from an area of interest. Wireless communication links, which form the basis for transmitting data between cellular communication base stations, serve as an additional example. Weather conditions affect the signal strength on these links, and this effect can be measured. As a result, the links can be utilized as an environmental monitoring facility. A variety of studies on the subject point to the ability to monitor rainfall and other hydrometeors including fog, water vapor and dew, and even the precursors of air pollution, using the data generated by these systems.
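The rainfall retrieval from microwave links rests on a standard power law relating rain-induced attenuation to rain rate, A = a * R**b * L, with frequency- and polarization-dependent coefficients tabulated in ITU-R Recommendation P.838. A minimal sketch of the inversion, with illustrative coefficient values roughly in the range used near 20 GHz (not taken from any specific study):

```python
def rain_rate_from_attenuation(path_loss_db, link_km, a=0.12, b=1.1):
    """Invert the power law A = a * R**b * L relating rain-induced
    attenuation A (dB) on a link of length L (km) to the path-average
    rain rate R (mm/h). The coefficients a and b depend on frequency
    and polarization (ITU-R P.838 tables); the values here are
    illustrative assumptions.
    """
    if path_loss_db <= 0:
        return 0.0  # no excess attenuation, no inferred rain
    return (path_loss_db / (a * link_km)) ** (1.0 / b)

# e.g. 6 dB of excess attenuation measured over a 5 km link
rate = rain_rate_from_attenuation(6.0, 5.0)
```

In practice the excess attenuation is obtained by subtracting a dry-weather baseline from the received signal level, and many links are combined to map rainfall over an area.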

Notably, the data from these new 'sensors' could be assimilated into high-resolution numerical prediction models, and thus may lead to improvements in forecasting capabilities. Put to use, this novel approach could provide the groundwork for developing new early-warning systems against natural hazards, and generate a variety of products necessary for a wide range of fields. The resulting contribution to public health and safety could be of significant value.

Credit: 
Institute of Atmospheric Physics, Chinese Academy of Sciences

Handgun licensing more effective at reducing gun deaths than background checks alone

A new white paper from the Johns Hopkins Center for Gun Policy and Research at the Johns Hopkins Bloomberg School of Public Health concludes that of the approaches used by states to screen out prohibited individuals from owning firearms, only purchaser licensing has been shown to reduce gun homicides and suicides. Purchaser licensing is currently used by nine states and Washington, D.C.

The white paper, "The Impact of Handgun Purchaser Licensing Laws on Gun Violence," and an accompanying infographic explain that states generally use three approaches to screen out prohibited individuals from purchasing firearms: (1) the minimum that federal law requires--mandatory background checks for sales from a licensed dealer; (2) comprehensive background check requirements that also cover private-party transfers of firearms; and (3) a background check for all firearm transfers as a complement to a licensing or permit system. Some states with comprehensive background checks or firearm purchaser licensing limit these requirements to transfers of handguns.

"Licensing differs from a standard background check in important ways, and the purpose of issuing this white paper and infographic is to clear up confusion about the efficacy of these laws," says the report's lead author, Cassandra Crifasi, PhD, MPH, deputy director of the Johns Hopkins Center for Gun Policy and Research and assistant professor in the Bloomberg School's Department of Health Policy and Management. "Comprehensive background checks are a necessary component of any system designed to keep guns from prohibited persons, but they are insufficient to reduce firearm-related deaths without a complementary system of purchaser licensing."

In general, states with licensing require prospective gun buyers to apply for a license with a state or local law enforcement agency, pass a background check, often submit fingerprints, and, in some cases, show evidence of gun safety training. States with licensing typically have more thorough processes for checking backgrounds, allow law enforcement more time to conduct those checks, or have mandatory waiting periods.

In contrast, federal law requires a background check for a prospective buyer only when the firearm is purchased from a licensed dealer. Other key differences among the three approaches are highlighted in the report's companion infographic.

Available research shows that states with comprehensive background checks that are not part of a licensing system have fewer guns diverted to criminal use. However, research to date has not shown that background checks alone lead to significant reductions in gun-related deaths.

In comparison, earlier research from the report authors showed that when Missouri repealed its handgun purchaser licensing law in 2007, homicides rose an estimated 17 to 27 percent through 2016. A separate study found the repeal was associated with a 16 percent increase in firearm suicides through 2012. In contrast, when Connecticut enacted a handgun licensing law in 1995 to supplement its universal background check policy, the state experienced a 40 percent decrease in gun homicides and a 15 percent reduction in gun suicides over the first ten years the law was in effect.

"The most likely reasons we see impacts on firearm homicides and suicides for licensing and not for comprehensive background checks without licensing center on the more direct interface between prospective purchasers and law enforcement and more robust systems for background checks," says report co-author Daniel Webster, ScD, MPH, director of the Johns Hopkins Center for Gun Policy and Research and Bloomberg Professor of American Health at the Bloomberg School. "These procedures may deter individuals who might otherwise make impulsive decisions to acquire a gun to hurt themselves or others."

The white paper posits that one reason advocacy organizations have pushed policymakers to adopt comprehensive background checks (versus licensing) is because of their broad appeal: Polls consistently find that over 85 percent of U.S. adults support comprehensive background checks with no difference between gun owners and non-gun owners.

Yet, says Crifasi, national public opinion surveys also show that three-quarters of adults support laws requiring handgun purchasers to obtain a license from a law enforcement agency, with support among gun owners at 60 percent.

"Given this level of support among the population, including gun owners, and the robust body of evidence on the effectiveness of licensing laws, policymakers should consider licensing as a key strategy to reduce gun violence in the communities they serve," Crifasi says.

In addition to Washington, D.C., the nine states with licensing requirements are Connecticut, Hawaii, Illinois, Iowa, Maryland, Massachusetts, New Jersey, New York and North Carolina.

In the 2018-2019 legislative session, gun purchaser licensing bills were introduced in Oregon, Delaware and Minnesota. Last year, both the U.S. House of Representatives and the U.S. Senate introduced measures that would incentivize states to adopt handgun licensing laws.

Credit: 
Johns Hopkins Bloomberg School of Public Health

New quantum dot microscope shows electric potentials of individual atoms

image: Image from a scanning tunnelling microscope (STM, left) and a scanning quantum dot microscope (SQDM, right). Using a scanning tunnelling microscope, the physical structure of a surface can be measured on the atomic level. Quantum dot microscopy can visualize the electric potentials on the surface at a similar level of detail -- a perfect combination.

Image: 
Copyright: Forschungszentrum Jülich / Christian Wagner

A team of researchers from Jülich in cooperation with the University of Magdeburg has developed a new method to measure the electric potentials of a sample at atomic accuracy. Using conventional methods, it was virtually impossible until now to quantitatively record the electric potentials that occur in the immediate vicinity of individual molecules or atoms. The new scanning quantum dot microscopy method, which was recently presented in the journal Nature Materials by scientists from Forschungszentrum Jülich together with partners from two other institutions, could open up new opportunities for chip manufacture or the characterization of biomolecules such as DNA.

The positive atomic nuclei and negative electrons of which all matter consists produce electric potential fields that superpose and compensate each other, even over very short distances. Conventional methods do not permit quantitative measurements of these small-area fields, which are responsible for many material properties and functions on the nanoscale. Almost all established methods capable of imaging such potentials are based on the measurement of forces that are caused by electric charges. Yet these forces are difficult to distinguish from other forces that occur on the nanoscale, which prevents quantitative measurements.

Four years ago, however, scientists from Forschungszentrum Jülich discovered a method based on a completely different principle. Scanning quantum dot microscopy involves attaching a single organic molecule - the "quantum dot" - to the tip of an atomic force microscope. This molecule then serves as a probe. "The molecule is so small that we can attach individual electrons from the tip of the atomic force microscope to the molecule in a controlled manner," explains Dr. Christian Wagner, head of the Controlled Mechanical Manipulation of Molecules group at Jülich's Peter Grünberg Institute (PGI-3).

The researchers immediately recognized how promising the method was and filed a patent application. However, practical application was still a long way off. "Initially, it was simply a surprising effect that was limited in its applicability. That has all changed now. Not only can we visualize the electric fields of individual atoms and molecules, we can also quantify them precisely," explains Wagner. "This was confirmed by a comparison with theoretical calculations conducted by our collaborators from Luxembourg. In addition, we can image large areas of a sample and thus show a variety of nanostructures at once. And we only need one hour for a detailed image."

The Jülich researchers spent years investigating the method and finally developed a coherent theory. The reason for the very sharp images is an effect that permits the microscope tip to remain at a relatively large distance from the sample, roughly 2-3 nanometres - unimaginable for a normal atomic force microscope.

In this context, it is important to know that all elements of a sample generate electric fields that influence the quantum dot and can therefore be measured. The microscope tip acts as a protective shield that dampens the disruptive fields from areas of the sample that are further away. "The influence of the shielded electric fields thus decreases exponentially, and the quantum dot only detects the immediate surrounding area," explains Wagner. "Our resolution is thus much sharper than could be expected from even an ideal point probe."
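The screening behaviour Wagner describes can be summarized schematically; the expression below is an illustrative form only, with generic symbols not taken from the paper:

```latex
% Illustrative exponential screening of a field source at lateral
% distance d from the quantum dot: \Phi_0 is the unscreened potential
% and \lambda an effective screening length set by the tip geometry.
\Phi_{\mathrm{screened}}(d) \;\approx\; \Phi_0 \, e^{-d/\lambda}
```

Because the contribution of distant sources falls off exponentially rather than with the slow power law of an unscreened potential, the quantum dot effectively senses only its immediate surroundings.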

The Jülich researchers owe the speed at which the complete sample surface can be measured to their partners from Otto von Guericke University Magdeburg. Engineers there developed a controller that helped to automate the complex, repeated sequence of scanning the sample. "An atomic force microscope works a bit like a record player," says Wagner. "The tip moves across the sample and pieces together a complete image of the surface. In previous scanning quantum dot microscopy work, however, we had to move to an individual site on the sample, measure a spectrum, move to the next site, measure another spectrum, and so on, in order to combine these measurements into a single image. With the Magdeburg engineers' controller, we can now simply scan the whole surface, just like using a normal atomic force microscope. While it used to take us 5-6 hours for a single molecule, we can now image sample areas with hundreds of molecules in just one hour."

There are some disadvantages as well, however. Preparing the measurements takes a lot of time and effort. The molecule serving as the quantum dot for the measurement has to be attached to the tip beforehand - and this is only possible in a vacuum at low temperatures. In contrast, normal atomic force microscopes also work at room temperature, with no need for a vacuum or complicated preparations.

And yet, Prof. Stefan Tautz, director at PGI-3, is optimistic: "This does not have to limit our options. Our method is still new, and we are excited for the first projects so we can show what it can really do."

There are many fields of application for quantum dot microscopy. Semiconductor electronics is pushing scale boundaries in areas where a single atom can make a difference for functionality. Electrostatic interaction also plays an important role in other functional materials, such as catalysts. The characterization of biomolecules is another avenue. Thanks to the comparatively large distance between the tip and the sample, the method is also suitable for rough surfaces - such as the surface of DNA molecules, with their characteristic 3D structure.

Credit: 
Forschungszentrum Juelich

Sensing food textures is a matter of pressure

image: Von Frey Hairs

Research team member Nicole Etter, assistant professor of communication sciences and disorders in the College of Health and Human Development, trained graduate students on the team to administer tactile pressure tests she developed on participants' tongues using the Von Frey Hairs, shown here.

Image: 
Nicole Etter / Penn State

Food's texture affects whether it is eaten, liked or rejected, according to Penn State researchers, who say some people are better at detecting even minor differences in consistency because their tongues can perceive particle sizes.

That is the key finding of a study conducted in the Sensory Evaluation Center in the College of Agricultural Sciences by a cross-disciplinary team that included both food and speech scientists specializing in sensory perception and behavior. The research included 111 volunteer tasters who had their tongues checked for physical sensitivity and then were asked their perceptions about various textures in chocolate.

"We've known for a long time that individual differences in taste and smell can cause differences in liking and food intake -- now it looks like the same might be true for texture," said John Hayes, associate professor of food science. "This may have implications for parents of picky eaters since texture is often a major reason food is rejected."

The perception of food texture arises from the interaction of a food with mechanoreceptors in the mouth, Hayes noted. It depends on neural impulses carried by multiple nerves. Despite being a key driver of the acceptance or rejection of foods, he pointed out, oral texture perception remains poorly understood relative to taste and smell, two other sensory inputs critical for flavor perception.

One argument is that texture typically is not noticed when it is within an acceptable range, but that it is a major factor in rejection if an adverse texture is present, explained Hayes, director of the Sensory Evaluation Center. For chocolate specifically, oral texture is a critical quality attribute, with grittiness often being used to differentiate bulk chocolate from premium chocolates.

"Chocolate manufacturers spend lots of energy grinding cocoa and sugar down to the right particle size for optimal acceptability by consumers," he said. "This work may help them figure out when it is good enough without going overboard."

Researchers tested whether there was a relationship between oral touch sensitivity and the perception of particle size. They used a device called Von Frey Hairs to gauge whether participants could discriminate between different amounts of force applied to their tongues.

When participants were split into groups based on pressure-point sensitivity -- high and low acuity -- there was a significant relationship between chocolate-texture discrimination and pressure-point sensitivity for the high-acuity group on the center tongue. However, a similar relationship was not seen for data from the lateral edge of the tongue.

Chocolate texture-detection experiments included both manipulated chocolates produced in a pilot plant in the Rodney A. Erickson Food Science Building and two commercially produced chocolates. Because chocolate is a semi-solid suspension of fine particles from cocoa and sugar dispersed in a continuous fat base, Hayes explained, it is an ideal food for the study of texture.

"These findings are novel, as we are unaware of previous work showing a relationship between oral pressure sensitivity and ability to detect differences in particle size in a food product," Hayes said. "Collectively, these findings suggest that texture-detection mechanisms, which underpin point-pressure sensitivity, likely contribute to the detection of particle size in food such as chocolate."

Research team member Nicole Etter, assistant professor of communication sciences and disorders in the College of Health and Human Development, trained students on the team to administer tactile pressure tests she developed on participants' tongues using the Von Frey Hairs. She explained that, as a speech therapist, her interest in the findings -- recently published in Scientific Reports -- was different from that of the food scientists.

"The overarching purpose of my work is to identify how we use touch sensation -- the ability to feel our tongue move and determine where our tongue is in our mouth -- to behave," she said. "I'm primarily interested in understanding how a patient uses sensation from their tongue to know where and how to move their tongue to make the proper sound."

However, in this research, Etter said she was trying to determine whether individual tactile sensations on the tongue relate to the ability to perceive or identify the texture of food -- in this case, chocolate. And she focused on another consideration, too.

"An important aspect of speech-language pathology is helping people with feeding and swallowing problems," she said. "Many clinical populations -- ranging from young children with disabilities to older adults with dementia -- may reject foods based on their perception of texture. This research starts to help us understand those individual differences."

This study sets the stage for follow-on cross-disciplinary research at Penn State, Etter believes. She plans to collaborate with Hayes and the Sensory Evaluation Center on studies involving foods beyond chocolate and older, perhaps less-healthy participants to judge the ability of older people to experience oral sensations and explore food-rejection behavior that may have serious health and nutrition implications.

Credit: 
Penn State

Research reveals liquid gold on the nanoscale

image: Shape changes in Au nanoclusters, indicating cluster surface melting at high temperatures. Images of two individual clusters containing 561 and 2530 atoms are shown.

Image: 
Swansea University.

The research, published in Nature Communications, set out to answer a simple question - how do nanoparticles melt? Although this question has been a focus of researchers for the past century, it is still an open problem - the initial theoretical models describing melting date from around 100 years ago, and even the most relevant models are some 50 years old.

Professor Richard Palmer, who led the team based at the University's College of Engineering, said of the research: "Although melting behaviour was known to change on the nanoscale, the way in which nanoparticles melt was an open question. Given that the theoretical models are now rather old, there was a clear case for us to carry out our new imaging experiments to see if we could test and improve these theoretical models."

The research team used gold in their experiments as it acts as a model system for noble and other metals. The team arrived at their results by imaging gold nanoparticles, with diameters ranging from 2 to 5 nanometres, via an aberration-corrected scanning transmission electron microscope. Their observations were later supported by large-scale quantum mechanical simulations.

Professor Palmer said: "We were able to prove the dependence of the melting point of the nanoparticles on their size and for the first time see directly the formation of a liquid shell around a solid core in the nanoparticles over a wide region of elevated temperatures, in fact for hundreds of degrees.

"This helps us to describe accurately how nanoparticles melt and to predict their behaviour at elevated temperatures. This is a science breakthrough in a field we can all relate to - melting - and will also help those producing nanotech devices for a range of practical and everyday uses, including medicine, catalysis and electronics."

Credit: 
Swansea University

New economic study shows combination of SNAP and WIC improves food security

AMES, Iowa - Forty million Americans, including 6.5 million children, are food insecure, according to the U.S. Department of Agriculture, which means they do not have enough food for an active, healthy life.

Many rely on the Supplemental Nutrition Assistance Program (SNAP) - the largest food assistance program for low-income families - to help make ends meet. Still, 51.2 percent of households receiving SNAP benefits, commonly known as food stamps, were food insecure in 2016.

Given the extent of food insecurity, a team of Iowa State University economists developed a methodology to analyze potential redundancies between SNAP and the Special Supplemental Nutrition Program for Women, Infants and Children (WIC), the third-largest food assistance program in the U.S. Their research, published in the Southern Economic Journal, provides evidence that the programs are in fact complementary, not redundant. They found that participating in both SNAP and WIC compared to SNAP alone increases food security by at least 2 percentage points and potentially as much as 24 percentage points.

"Our findings can help policymakers design more efficient programs to meet food needs," said Helen Jensen, ISU professor emeritus of economics. "We know low-income families often participate in more than one food assistance program, and we find the combination of SNAP and WIC helps reduce food insecurity for participating households."

Challenges of measuring program effects

The programs are similar, but serve different needs. WIC covers specific foods to meet the nutritional needs of pregnant women and new mothers as well as infants and young children. Participants also receive nutrition counseling and referrals for health services, such as prenatal programs. In comparison, eligible households can use SNAP benefits to buy most food items. All households included in the study were potentially eligible for both programs, but they chose whether or not to participate.

This "self-selection" is one reason it is difficult for researchers to ascertain whether a program causes a change in food insecurity. WIC and SNAP benefits are not randomly assigned, so any differences in food security outcomes between participants and nonparticipants could be due to actual causal impacts of the programs or unobserved differences between households that apply for benefits and those that do not.

If households at greatest risk of becoming food insecure are most likely to apply - for example, in the case of a job loss - it might falsely appear the programs are ineffective in alleviating food insecurity, the researchers said. In fact, while participants may be less food secure than eligible nonparticipants, participants may still be more food secure than they would have been in a world without the programs.

Another challenge for researchers is that households are known to systematically underreport benefits, often because they don't want to admit they are receiving government assistance.

"For these reasons, traditional econometric methods lead to misleading estimates," said Oleksandr Zhylyevskyy, associate professor of economics. "With that in mind, we developed a methodology that allows us to more accurately measure the true effects of WIC and SNAP."

The researchers applied their methodology to data from the USDA's National Household Food Acquisition and Purchase Survey or FoodAPS, which provides self-reported household participation in SNAP and WIC and validated data for SNAP participation. The study included 460 households that were income-eligible for both programs. They were surveyed for one week.

On average, these households were families of four with two children, one under the age of 6. The average monthly income was about $1,600. More than 75 percent rented a home or apartment, 26 percent did not own or lease a vehicle and 11 percent had used a food pantry within the past 30 days.

FoodAPS matched survey responses about SNAP participation with official administrative records to identify response errors, but no similar verification was available for WIC. The ISU researchers say the new methodology was specifically designed to handle this type of scenario in which researchers can corroborate answers for some survey questions, but not others.

"Our goal was to strike a balance between making assumptions that are weak enough to be credible, but strong enough to be informative," said Brent Kreider, professor of economics. "Policymakers may ask whether these programs actually work or merely increase government spending without reducing food insecurity. We find WIC helps even when SNAP is already in place."

Credit: 
Iowa State University

The whisper of schizophrenia: Machine learning finds 'sound' words predict psychosis

A machine-learning method discovered a hidden clue in people's language predictive of the later emergence of psychosis -- the frequent use of words associated with sound. The journal npj Schizophrenia published the findings by scientists at Emory University and Harvard University.

The researchers also developed a new machine-learning method to more precisely quantify the semantic richness of people's conversational language, a known indicator for psychosis.

Their results show that automated analysis of the two language variables -- more frequent use of words associated with sound and speaking with low semantic density, or vagueness -- can predict whether an at-risk person will later develop psychosis with 93 percent accuracy.

Even trained clinicians had not noticed how people at risk for psychosis use more words associated with sound than the average, although abnormal auditory perception is a pre-clinical symptom.

"Trying to hear these subtleties in conversations with people is like trying to see microscopic germs with your eyes," says Neguine Rezaii, first author of the paper. "The automated technique we've developed is a really sensitive tool to detect these hidden patterns. It's like a microscope for warning signs of psychosis."

Rezaii began work on the paper while she was a resident at Emory School of Medicine's Department of Psychiatry and Behavioral Sciences. She is now a fellow in Harvard Medical School's Department of Neurology.

"It was previously known that subtle features of future psychosis are present in people's language, but we've used machine learning to actually uncover hidden details about those features," says senior author Phillip Wolff, a professor of psychology at Emory. Wolff's lab focuses on language semantics and machine learning to predict decision-making and mental health.

"Our finding is novel and adds to the evidence showing the potential for using machine learning to identify linguistic abnormalities associated with mental illness," says co-author Elaine Walker, an Emory professor of psychology and neuroscience who researches how schizophrenia and other psychotic disorders develop.

The onset of schizophrenia and other psychotic disorders typically occurs in the early 20s, with warning signs -- known as prodromal syndrome -- beginning around age 17. About 25 to 30 percent of youth who meet criteria for a prodromal syndrome will develop schizophrenia or another psychotic disorder.

Using structured interviews and cognitive tests, trained clinicians can predict psychosis with about 80 percent accuracy in those with a prodromal syndrome. Machine-learning research is among the many ongoing efforts to streamline diagnostic methods, identify new variables, and improve the accuracy of predictions.

Currently, there is no cure for psychosis.

"If we can identify individuals who are at risk earlier and use preventive interventions, we might be able to reverse the deficits," Walker says. "There are good data showing that treatments like cognitive-behavioral therapy can delay onset, and perhaps even reduce the occurrence of psychosis."

For the current paper, the researchers first used machine learning to establish "norms" for conversational language. They fed a computer software program the online conversations of 30,000 users of Reddit, a social media platform where people have informal discussions about a range of topics. The software program, known as Word2Vec, uses an algorithm to change individual words to vectors, assigning each one a location in a semantic space based on its meaning. Those with similar meanings are positioned closer together than those with far different meanings.
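The idea of a semantic space can be illustrated with a minimal sketch. The vectors below are hand-assigned toy values, not output from an actual Word2Vec model, which learns vectors of typically 100-300 dimensions from large corpora:

```python
import numpy as np

# Toy 3-D "semantic space" with hand-assigned vectors (illustrative only;
# a trained Word2Vec model would learn these positions from text).
vectors = {
    "voice":   np.array([0.9, 0.1, 0.0]),
    "whisper": np.array([0.8, 0.2, 0.1]),
    "table":   np.array([0.1, 0.9, 0.3]),
}

def cosine(u, v):
    """Cosine similarity: values near 1.0 mean nearby in semantic space."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Words with related meanings sit closer together than unrelated ones.
print(cosine(vectors["voice"], vectors["whisper"]))  # high: related meanings
print(cosine(vectors["voice"], vectors["table"]))    # low: unrelated meanings
```

In a real model the same comparison lets software detect, for instance, that a speaker's vocabulary clusters around sound-related meanings.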

The Wolff lab also developed a computer program to perform what the researchers dubbed "vector unpacking," or analysis of the semantic density of word usage. Previous work has measured semantic coherence between sentences. Vector unpacking allowed the researchers to quantify how much information was packed into each sentence.
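As a rough illustration of what a semantic-density score might capture (a simplified stand-in, not the authors' vector-unpacking algorithm), one can measure how spread out a sentence's word vectors are: near-synonymous words packed together suggest vagueness, while diverse meanings suggest information density:

```python
import numpy as np

# Illustrative proxy for semantic density: the mean pairwise distance
# between a sentence's word vectors in a toy 2-D semantic space.
def semantic_density(word_vectors):
    """Low score: words cluster together (vague). High score: words span
    diverse meanings (information-dense)."""
    vecs = np.asarray(word_vectors, dtype=float)
    n = len(vecs)
    dists = [np.linalg.norm(vecs[i] - vecs[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))

# Hypothetical vectors: a "vague" sentence reuses near-identical meanings,
# a "dense" one combines distinct meanings.
vague = [[0.90, 0.10], [0.88, 0.12], [0.91, 0.09]]
dense = [[0.90, 0.10], [0.10, 0.90], [0.50, 0.50]]
print(semantic_density(vague) < semantic_density(dense))  # True
```

The study's actual method quantifies how much meaning is packed into each sentence, but the intuition is the same: low-density speech conveys little beyond repetition of similar ideas.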

After generating a baseline of "normal" data, the researchers applied the same techniques to diagnostic interviews of 40 participants that had been conducted by trained clinicians, as part of the multi-site North American Prodrome Longitudinal Study (NAPLS), funded by the National Institutes of Health. NAPLS is focused on young people at clinical high risk for psychosis. Walker is the principal investigator for NAPLS at Emory, one of nine universities involved in the 14-year project.

The automated analyses of the participant samples were then compared to the normal baseline sample and the longitudinal data on whether the participants converted to psychosis.

The results showed that higher than normal usage of words related to sound, combined with a higher rate of using words with similar meaning, meant that psychosis was likely on the horizon.

Strengths of the study include the simplicity of using just two variables -- both of which have a strong theoretical foundation -- the replication of the results in a holdout dataset, and the high accuracy of its predictions, at above 90 percent.

"In the clinical realm, we often lack precision," Rezaii says. "We need more quantified, objective ways to measure subtle variables, such as those hidden within language usage."

Rezaii and Wolff are now gathering larger data sets and testing the application of their methods on a variety of neuropsychiatric diseases, including dementia.

"This research is interesting not just for its potential to reveal more about mental illness, but for understanding how the mind works -- how it puts ideas together," Wolff says. "Machine learning technology is advancing so rapidly that it's giving us tools to data mine the human mind."

Credit: 
Emory Health Sciences

A metal-free, sustainable approach to CO2 reduction

image: Carbon emissions.

Image: 
Chris LeBoutillier via PIXEL

Researchers in Japan present an organic catalyst for carbon dioxide (CO2) reduction that is inexpensive, readily available and recyclable. As the level of catalytic activity can be tuned by the solvent conditions, their findings could open up many new directions for converting CO2 to industrially useful organic compounds.

Sustainability is a key goal in the development of next-generation catalysts for CO2 reduction. One promising approach that many teams are focusing on is a reaction called the hydrosilylation of CO2. However, most catalysts developed to date for this purpose have the disadvantage of containing metals that are expensive, not widely available and potentially detrimental to the environment.

Now, scientists at Tokyo Institute of Technology (Tokyo Tech) and the Renewable Energy Research Center at Japan's National Institute of Advanced Industrial Science and Technology (AIST) have demonstrated the possibility of using a fully recyclable, metal-free catalyst.

By comparing how well different organic catalysts could achieve hydrosilylation of CO2, the team identified one that surpassed all others in terms of selectivity and yield. This catalyst, called tetrabutylammonium (TBA) formate, achieved 99% selectivity and produced the desired formate product with a 98% yield. The reaction occurred rapidly (within 24 hours) and under mild conditions, at a temperature of 60°C.

Remarkably, the catalyst has a turnover number of up to 1800, which is more than an order of magnitude higher than previous results.
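Turnover number is simply the moles of product formed per mole of catalyst; a hypothetical calculation at the reported order of magnitude:

```python
# Turnover number (TON): moles of product formed per mole of catalyst.
# The amounts below are assumed, chosen only to reproduce the order of
# magnitude reported in the study (TON up to 1800).
moles_product = 1.8e-2   # mol of formate product (hypothetical)
moles_catalyst = 1.0e-5  # mol of TBA formate catalyst (hypothetical)

ton = moles_product / moles_catalyst
print(round(ton))  # 1800
```

A higher TON means each catalyst molecule drives more reaction cycles before losing activity, which is why the figure is a standard measure of catalyst efficiency and recyclability.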

In 2015, team leader Ken Motokura of Tokyo Tech's Department of Chemical Science and Engineering and his colleagues found that formate salts show promising catalytic activity. It was this hint that provided the basis for the current study. Motokura explains: "Although we did expect formate salts to exhibit good catalytic activity, TBA formate showed much higher selectivity, stability and activity that went beyond our expectations."

In the current study, the researchers found that the catalyst can be made reusable by using toluene as a solvent. They showed that Lewis basic solvents such as N-methylpyrrolidone (NMP) and dimethyl sulfoxide (DMSO) can accelerate the reaction, meaning that the catalytic system is tunable.

Overall, the findings, published in the online edition of the journal ACS Sustainable Chemistry & Engineering, offer a new, environmentally friendly path to reducing CO2 while yielding industrially important formate products.

Silyl formate can be easily converted to formic acid, which can serve as an important hydrogen carrier, for example, in fuel cells. The high reactivity of silyl formate enables its conversion into intermediates for the preparation of organic compounds such as carboxylic acids, amides and alcohols.

"This efficient transformation technique of CO2 to silyl formate will expand the possibilities for CO2 utilization as a chemical feedstock," Motokura says.

Credit: 
Tokyo Institute of Technology

Special fibroblasts help pancreatic cancer cells evade immune detection

image: Imaging mass cytometry (IMC) staining of a human PDAC section, using metal-conjugated antibodies to mark different cell types, and apCAF markers. The arrows point to examples of apCAFs in the PDAC stroma.

Image: 
Tuveson lab/CSHL 2019

Cold Spring Harbor, NY -- Pancreatic ductal adenocarcinoma (PDAC) is the fourth leading cause of cancer-related deaths in the world. Mostly chemoresistant, PDAC so far has no effective treatment. Understanding the connective tissue, called stroma, that surrounds, nurtures, and even protects PDAC tumors, is key to developing effective therapeutics.

"PDAC patients are diagnosed really late, so we don't know they're sick until the very end stages," said Ela Elyada, a postdoctoral fellow in Dr. David Tuveson's lab at Cold Spring Harbor Laboratory (CSHL). "We can't diagnose patients early enough because we don't have tools, and they don't respond to drugs. One barrier to the drugs is the fibroblasts in the stroma."

PDAC is characterized by an abundance of non-malignant stromal cells, and fibroblasts are one of the most common types of stromal cells. "We have a lot of fibroblast in pancreatic cancer, unlike other cancers which are mostly cancer cells," Elyada said. These cancer-associated fibroblasts (CAFs) can help cancer cells proliferate, survive and evade detection by the immune system.

The insidious role CAFs seem to play in protecting cancer cells suggests they are harmful, yet completely obliterating CAFs in mice also worsened their cancers. Elyada wanted to investigate the nature of CAFs: are they good or bad? To crack the case, she, Associate Professor Paul Robson at the Jackson Laboratory, and colleagues used single-cell RNA sequencing to classify the fibroblasts into three distinct sub-populations, identifying specific functions and characteristics unique to each. These include two previously identified types of CAFs, myofibroblastic CAFs (myCAFs) and inflammatory CAFs (iCAFs), as well as a newly discovered type, antigen-presenting CAFs (apCAFs). The apCAFs were present in both mouse and human PDAC. The findings are published in the journal Cancer Discovery.

While the newly identified apCAFs had the properties of a fibroblast, Elyada and her team found that they differed from the other fibroblast sub-populations: they expressed MHC class II genes, which are usually expressed only by specialized immune cells. Cells with MHC class II molecules on their surface can present antigens, or foreign peptides from viruses and bacteria, to helper T-cells. Upon detecting the antigen, the T-cell activates and recruits cytotoxic T-cells and other immune elements to attack and eliminate the invader. But the apCAFs present in pancreatic tumors lack other components needed to activate T-cells. Elyada and her team hypothesize that this may result in incompletely activated T-cells that are unable to properly eliminate the cancer cells.

"We showed that apCAFs have specific capabilities of interacting with T-cells in a way that other CAFs don't," said Elyada. The research team now wants to know how the apCAFs are interacting with T-cells and the immune system. "If we can show that the apCAFs are somehow inhibiting the activity of T-cells, we can come up with therapies that specifically target that type of CAFs," Elyada proposed. "We can also combine it with other, complementary immune therapies to make them more effective."

Credit: 
Cold Spring Harbor Laboratory