Traditional stereotypes about masculinity may help explain support for Trump

UNIVERSITY PARK, Pa. -- American politicians have long been expected to project a certain image: powerful, influential and never vulnerable. New Penn State research has found that these idealized forms of masculinity may also help explain support for Donald Trump in the 2016 presidential election and in the days leading up to the 2020 election.

Across several studies, the researchers found that when men and women endorsed "hegemonic masculinity" -- a culturally idealized form of masculinity that says men should be strong, tough, and dominant -- they were more likely to vote for and have positive feelings about Trump.

The researchers found this was true even when they controlled for political party, gender and how much the participants trusted the government.

Nathaniel Schermerhorn, a dual doctoral candidate in psychology and women's, gender, and sexuality studies, said the findings -- published in the Proceedings of the National Academy of Sciences of the United States of America -- suggest that while American society seems to be ready for a female president, an active rejection of hegemonic masculinity may need to happen first.

"The pervasiveness of hegemonic masculinity exists because we do not always know that our attitudes and behaviors are contributing to it," Schermerhorn said. "The success of Donald Trump's 2016 campaign shows that even if we, as a society, have made progress in saying that discrimination and prejudice is undesirable, we have not, as a society, fully interrogated the systematic ways in which those prejudices are upheld."

Because American politics are largely dominated by men, the researchers said political campaigns often emphasize traditionally masculine characteristics to convince voters of a candidate's competence and skill.

"Historically, American politics have been a masculinity contest about proving which candidate is better," Schermerhorn said. "Since the 1980s, the Republican party has used this to their rhetorical advantage by presenting the Republican candidate as masculine and feminizing the entire Democratic party, for example by calling them 'snowflakes.'"

Theresa Vescio, professor of psychology and women's, gender, and sexuality studies, said Trump's 2016 campaign was no exception -- he often criticized his opponents' masculinity and displayed sexist attitudes toward Hillary Clinton while positioning himself as a tough, powerful and successful businessman.

Vescio said that while this may resonate with voters who share similar ideals of masculinity, such attitudes may not actually be realistic.

"In contemporary America, idealized forms of masculinity suggest that men should be high in power, status and dominance, while being physically, mentally and emotionally tough," Vescio said. "But this is an incredibly high standard that few can achieve or maintain. Therefore, this is an idea that many men strive to achieve, but few men actually exhibit."

Vescio said that while Trump's success with voters has been attributed to many different possible factors, she and the other researchers were specifically interested in the extent to which hegemonic masculinity played a role in voters' support.

The researchers recruited a total of 2,007 participants for seven different studies. In the first six studies, participants answered questions about their endorsements of hegemonic masculinity, trust in the government, sexism, racism, homophobia and xenophobia. They also indicated their political affiliation, how they voted in the 2016 presidential election, and their evaluations of Trump and Clinton.

In a seventh and final study, participants answered similar questions but also provided information about how they were going to vote in the 2020 presidential election, as well as their evaluations of Trump and Biden.

After analyzing the data, the researchers found that across all studies, participants who endorsed hegemonic masculinity were more likely to vote for Trump and to evaluate him positively. This was true for women and men, white and non-white participants, Democrats and Republicans, and across levels of education.

"Additionally, we found that stronger endorsement of hegemonic masculinity was related to greater sexism, racism, homophobia, xenophobia, and Islamophobia," Vescio said. "But, hegemonic masculinity continued to predict support for Trump even when controlling for these prejudices."

Schermerhorn said the results can help shine a light on how both men and women respond to masculine and feminine candidates. He said that because hegemonic masculinity is embedded in social and political institutions, people may internalize the status quo as beneficial, even when it isn't.

"While endorsing hegemonic masculinity predicted a higher likelihood of supporting Trump, it did not necessarily predict negative support for Democratic candidates," he said. "This could suggest that hegemonic masculinity may actually be a predictor of maintaining the status quo and not the inverse -- working against the status quo."

Credit: 
Penn State

Gas pressure depletion and seismicity

image: A sectioned quartz-quartz grain contact revealing a thin clay film (ribbon-like structure). Compaction and shear of these thin clay films has played a key role in controlling compaction of the Groningen gas reservoir to date.

Image: 
Microstructures were obtained by B.A. Verberne.

Boulder, Colo., USA: Europe's largest gas field, the Groningen field in the Netherlands, is widely known for induced subsidence and seismicity caused by gas pressure depletion and associated compaction of the sandstone reservoir. Whether compaction is elastic or partly inelastic, as implied by recent experiments, is key to forecasting system behavior and seismic hazard.

Bart Verberne and colleagues sought evidence for a role of inelastic deformation through comparative microstructural analysis of a unique drill core recovered from the seismogenic center of the field in 2015, 50 years after gas production started, versus core recovered before production began in 1965. Quartz grain fracturing, crack healing, and stress-induced Dauphiné twinning are equally developed in the 2015 and 1965 cores, with the only measurable effect of gas production being enhanced microcracking of sparse K-feldspar grains in the 2015 core.

Interpreting these grains as strain markers, Verberne and colleagues suggest that reservoir compaction involves elastic strain plus inelastic compression of weak clay films within grain contacts.

Credit: 
Geological Society of America

Scientists develop new approach to understanding massive volcanic eruptions

image: Dormant volcano in Ecuador.

Image: 
Alain Volentik

A geosciences team led by the University of South Florida (USF) has developed a new way to reconstruct the sizes of volcanic eruptions that occurred thousands of years ago, creating a first-of-its-kind tool that can aid scientists in understanding past explosive eruptions that shaped the Earth and improve the estimation of hazards from future eruptions.

The advanced numerical model the USF team developed allows scientists to reconstruct eruption rates through time by estimating the dimensions of the umbrella clouds that contribute to the accumulation of vast deposits of volcanic ash. The research is published in the Nature Portfolio journal Communications Earth & Environment.

The research, which was used to decipher the 2,500-year-old eruption of a volcano in Ecuador, was led by USF doctoral candidate Robert Constantinescu in collaboration with USF colleagues Research Associate Laura Connor, Professor Chuck Connor, Associate Professor Sylvain Charbonnier, doctoral alum Alain Volentik and other members of an international team. USF's Volcanology Group is one of the world's leading centers of volcano science and hazard assessment.

When large explosive eruptions occur, they send laterally spreading umbrella clouds into the stratosphere, transporting fine-grained ash over hundreds of miles before it settles and covers large swaths of land.

Current technology allows scientists to observe ash clouds. However, past eruptions are characterized based on the geological interpretation of their tephra deposits - the pieces and fragments of rock ejected into the air by an erupting volcano. By estimating the erupted volume and mass, plume height, umbrella cloud dimensions and other characteristics, the scientists are able to understand and characterize the volcanic eruptions, therefore improving the forecast of future events.

Using a series of field techniques combined with statistical and numerical modeling, volcanologists extract information from the deposits in order to characterize and classify an eruption on one of the most commonly used scales, the Volcanic Explosivity Index (VEI). Until now, the most sought-after information has been the eruption column height and the total erupted mass or volume, Constantinescu said.
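For context, the VEI is essentially a logarithmic scale on erupted tephra volume. The sketch below is not the USF team's model; it simply illustrates, under the commonly cited order-of-magnitude thresholds, how a deposit volume maps onto a VEI class.

```python
# Rough illustration only: assign a VEI class from bulk tephra volume using the
# commonly cited order-of-magnitude thresholds (lower bounds for VEI 1 through 8).
VEI_THRESHOLDS_KM3 = [1e-4, 1e-3, 1e-2, 1e-1, 1.0, 10.0, 100.0, 1000.0]

def vei_from_volume(volume_km3: float) -> int:
    """Return the VEI class implied by an erupted tephra volume in cubic kilometers."""
    vei = 0
    for threshold in VEI_THRESHOLDS_KM3:
        if volume_km3 >= threshold:
            vei += 1
    return vei

print(vei_from_volume(0.5))  # a ~0.5 km^3 deposit falls in VEI 4
```

The point of the USF work is that such volume-based estimates become unreliable for old, eroded deposits, which is why the team turns to umbrella-cloud dimensions instead.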

But over time, deposits erode and can provide an uncertain picture of older eruptions. Also, current models have been limited in that they assume all volcanic eruptions created mostly vertical plumes, Constantinescu said, and don't account for large explosive eruptions that form laterally spreading umbrella ash clouds.

The USF team's work shows that it is the dimensions of the umbrella cloud that are the telling factor in reconstructing past large explosive eruptions.

"The better we can reconstruct the nature of past eruptions from deposit data, the better we can anticipate potential hazards associated with future explosive eruptions," the team wrote in the new journal article.

The researchers propose updating the VEI scale with the umbrella cloud dimensions, which can now be easily estimated using the mathematical models they've developed.

The researchers applied their model to the tephra deposit of the eruption of Pululagua, a now dormant volcano about 50 miles north of the capital city of Quito. Ecuador is considered one of the world's most hazardous countries for volcanoes. The volcano last erupted an estimated 2,500 years ago and the area is now a geobotanical reserve renowned for its biodiversity and lush green landscape.

There are about 1,500 potentially active volcanoes worldwide, in addition to those that lurk beneath the world's oceans. In 2020, there were at least 67 confirmed eruptions from 63 different volcanoes, according to the Smithsonian Institution Global Volcanism Program.

"If in modern times the umbrella clouds of large eruptions are easily observed, we now have the ability to estimate the umbrella clouds of past eruptions," Constantinescu said. "Our numerical model enables us to better characterize past volcanic eruptions and inform models for future hazard assessment."

Credit: 
University of South Florida

The true cost of chemotherapy

Chemotherapy for breast cancer costs the UK economy more than £248 million annually, including 'out-of-pocket' personal costs of more than £1,000 per patient - according to new research from the University of East Anglia.

A new study published today is the first to investigate the total non-healthcare cost of chemotherapy to the UK.

It includes the cost of lost productivity, work absence, and personal costs such as paying for transport and parking for treatment, the cost of wigs and new bras, and over-the-counter medications.

The UEA research team say that better targeting of treatment could help avoid placing unnecessary costs upon patients, their caregivers and wider society.

Prof Richard Fordham, from UEA's Norwich Medical School, said: "Breast cancer is the most common cancer in women and second most common cancer overall with two million cases per year worldwide.

"Most patients require surgery, additional radiotherapy, chemotherapy, hormone therapy or a combination of these to reduce the risk of the cancer coming back. Around a third of breast cancer patients receive chemotherapy, but there are grey areas around which patients do and don't need chemotherapy.

"As well as the cost of the treatment itself, there are many societal and personal costs associated with chemotherapy. These might include taking time off work, paying for hospital transport or parking, paying for over-the-counter medications or dietary supplements, the cost of wigs, headscarves and new bras, and the cost of informal care.

"But until now it has not been known what the total cost of all of this really is. We wanted to find out what the true total cost of chemotherapy is for patients, caregivers and wider society, for treating breast cancer in the UK."

The research team collected data from sources including UK cancer registries, clinical guidelines and published patient survey data. Patient and staff views were collected through semi-structured interviews.

Key findings:

The total cost of breast cancer chemotherapy to the UK economy is over £248 million a year.

Societal productivity losses total £141.4 million - including £3.2 million lost to premature mortality, £133.7 million lost to short-term (£28.7m) and long-term (£105m) work absence, and £3.4m associated with mortality losses from secondary malignancies due to adjuvant chemotherapy (a quick arithmetic check of these figures appears after this list).

£1.1 million in lost productivity arises from informal care provision.

Out-of-pocket patient costs for chemotherapy total £4.2 million, or an annual average of £1,100 per patient.

In addition, the cost to carers' emotional wellbeing could be as much as £82 million - a figure reflecting how much additional income would be required to offset the wellbeing loss.
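As a quick arithmetic check, the productivity-loss components reported above do reconcile with the £141.4 million total; the short sketch below simply re-adds the published figures (in £ millions).

```python
# Re-add the reported productivity-loss components (all figures in £ millions).
premature_mortality = 3.2
short_term_absence = 28.7
long_term_absence = 105.0
secondary_malignancies = 3.4
informal_care = 1.1

work_absence = short_term_absence + long_term_absence  # 133.7
total_productivity_losses = (premature_mortality + work_absence
                             + secondary_malignancies + informal_care)

print(f"Work absence: £{work_absence:.1f}m")                            # £133.7m
print(f"Total productivity losses: £{total_productivity_losses:.1f}m")  # £141.4m
```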

Dr Stephanie Howard-Wilsher, also of UEA's Norwich Medical School, said: "We spoke to breast cancer patients who had undertaken chemotherapy to better understand the actual experiences and impacts of these costs. We also interviewed healthcare staff involved in breast cancer care for their views on chemotherapy and associated costs.

"The interviews with patients really show the impact that breast cancer has on lives. They talk about their worlds just falling apart, and chemotherapy side effects like hair loss, tiredness, constipation and diarrhea, loss of taste. And they also talk about the emotional impact for their families and those caring for them.

Researcher Anna Sweeting, from UEA's Norwich Medical School, said: "Interviews with healthcare professionals showed us how patients cope with chemotherapy differently. For some patients, their cognitive function is never quite the same afterwards, they suffer with 'chemo brain'. And it also gave us insight into the impact for families, including children's mental health.

"Our work shows how chemotherapy carries significant and far-reaching indirect costs for society, as well as for patients and their carers - beyond the costs associated with the treatment itself."

"The greatest burden accrued to society is in patient productivity losses. Patient out-of-pocket expenses and costs of informal care were smaller by comparison but nevertheless significant. Better targeting of chemotherapy treatment could help avoid placing unnecessary costs upon patients, their caregivers and wider society," added health economist and former UEA researcher Krishnali Parsekar.

Credit: 
University of East Anglia

Putty-like composites of gallium metal with potential for real-world application

video: The following clip demonstrates the formation of a putty-like composite between gallium and graphite. Loading more graphite into the mixture increases the viscosity of the composite. The qualitative difference between liquid and paste-like forms of the metal is also shown towards the end of this video.

Image: 
Institute for Basic Science

Gallium is a highly useful element that has accompanied the advancement of human civilization throughout the 20th century. Gallium is designated as a technologically critical element, as it is essential for the fabrication of semiconductors and transistors. Notably, gallium nitride and related compounds allowed for the discovery of the blue LED, which was the final key in the development of an energy-efficient and long-lasting white LED lighting system. This discovery has led to the awarding of the 2014 Nobel Prize in Physics. It is estimated that up to 98% of the demand for gallium originates from the semiconductor and electronics industry.

In addition to its use in electronics, the unique physical properties of gallium have led to its use in other areas. Gallium itself is a metal with a very low melting point and is a liquid at just above room temperature (30 °C). Also, gallium is capable of forming several eutectic systems (alloys that have a lower melting point than any of their constituents, including gallium) with a number of other metals. Both pure gallium and these gallium-based liquid metal alloys have high surface tension and are considered "non-spreadable" on most surfaces. This renders them difficult to handle, shape, or process, which limits their potential for real-world application. However, a recent discovery may have unlocked the possibility for broader use of gallium in the field of functional materials.

A research team at the Center for Multidimensional Carbon Materials (CMCM) within the Institute for Basic Science (IBS) in Ulsan, South Korea and the Ulsan National Institute of Science and Technology (UNIST) has invented a new method for incorporating filler particles in liquid gallium to create functional composites of liquid metal. The incorporation of fillers transforms the material from a liquid state into either a paste- or putty-like form (with consistency and "feel" similar to the commercial product "Plasticine") depending on the amount of added particles. In the case when graphene oxide (G-O) was used as a filler material, a G-O content of 1.6-1.8% resulted in a paste-like form, while 3.6% was optimal for putty formation. A variety of new gallium composites and the mechanism of their formation are described in a recent article published in the journal Science Advances.

The mixing of particles inside the gallium-based liquid metal alters the physical properties of the material, which allows for much easier handling. First author Chunhui Wang notes: "The ability for liquid gallium composites to form pastes or putties is extremely beneficial. It removes most of the issues of handling of gallium for applications. It no longer stains surfaces, it can be coated or "painted" onto almost any surface, it can be molded into a variety of shapes. This opens up a wide variety of applications for gallium not seen before." The potential applications of this discovery include situations where soft and flexible electronics are required, such as in wearable devices and medical implants. The study even showed that the composite can be fashioned into a porous foam-like material with extreme heat resistance, with the ability to withstand a blowtorch for 1 minute without sustaining any damage.

In this study, the team was able to identify the factors that would allow the fillers to successfully mix with liquid gallium. Co-corresponding author Benjamin Cunning described the prerequisites: "Liquid gallium develops an oxide 'skin' when exposed to air, and this is crucial for mixing. This skin coats the filler particle and stabilizes it inside the gallium, but this skin is resilient. We learned that particles of a large enough size have to be used, otherwise mixing cannot occur and a composite cannot be formed."

The researchers used four different materials as fillers in their study: graphene oxide, silicon carbide, diamond, and graphite. Among these, two in particular displayed excellent properties when incorporated in liquid gallium: reduced graphene oxide (rG-O) for electromagnetic interference (EMI) shielding and diamond particles for thermal interface materials. A 13-micron thick coating of Ga/rG-O composite on a reduced graphene oxide film was able to improve the film's shielding efficiency from 20 dB up to 75 dB, which is sufficient for both commercial (>30 dB) and military (>60 dB) applications. However, the most remarkable property of the composite was its ability to impart EMI shielding to common everyday materials. The researchers demonstrated that a similar 20-micron thick coating of Ga/rG-O applied on a simple sheet of paper yielded a shielding efficiency of over 70 dB.
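Shielding effectiveness quoted in decibels maps directly onto how much of the incident electromagnetic power still gets through: every additional 10 dB blocks another factor of ten. A small illustrative conversion of the figures mentioned above:

```python
# Convert EMI shielding effectiveness in dB to the fraction of incident power
# that is transmitted: SE_dB = -10 * log10(P_transmitted / P_incident).
def transmitted_fraction(se_db: float) -> float:
    return 10 ** (-se_db / 10)

for se_db in (20, 30, 60, 70, 75):  # values quoted for the films and coatings above
    print(f"{se_db} dB shielding -> {transmitted_fraction(se_db):.1e} of incident power passes")
# 20 dB lets through 1 part in 100; 75 dB lets through roughly 3 parts in 100 million.
```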

Perhaps most exciting was the thermal performance when diamond particles were incorporated into the material. The CMCM team measured the thermal conductivities in collaboration with UNIST researchers Dr. Shalik Joshi and Prof. KIM Gun-ho, and the "real-world" application experiments were carried out by LEE Seunghwan and Prof. LEE Jaeson. The thermal conductivity experiment showed that the diamond-containing composite had bulk thermal conductivities of up to ~110 W m-1 K-1, with larger filler particles yielding greater thermal conductivity. This exceeded the thermal conductivity of a commercially available thermal paste (79 W m-1 K-1) by roughly 40%. The application experiment further proved the gallium-diamond mixture's effectiveness as a thermal interface material (TIM) between a heat source and a heat sink. Interestingly, the composite with smaller diamond particles showed superior real-world cooling capability despite having lower thermal conductivity. The discrepancy arises because the larger diamond particles are more prone to protruding through the bulk gallium, creating air gaps at the interface between the TIM and the heat sink or heat source, which reduces its effectiveness. (Ruoff notes that there are some likely ways to solve this issue in the future.)

Lastly, the group has even created and tested a composite made from a mixture of gallium metal and commercial silicone putty - better known as "Silly Putty" (Crayola LLC). This last type of gallium-containing composite is formed by an entirely different mechanism, which involves small droplets of gallium being dispersed throughout the Silly Putty. While it does not have the impressive EMI shielding ability of the above-mentioned Ga/rG-O (the material requires 2 mm of coating to achieve the same 70 dB shielding efficiency), this is offset by superior mechanical properties. Since this composite uses silicone polymer rather than gallium metal as the base material, it is stretchable in addition to being malleable.

Prof. Rod Ruoff, director of CMCM, who conceived of the idea of mixing such carbon fillers with liquid metals, notes: "We first submitted this work in September 2019, and it has undergone a few iterations since then. We have discovered that a wide variety of particles can be incorporated into liquid gallium and have provided a fundamental understanding of how particle size plays a role in successful mixing. We found this behavior extends to gallium alloys that are liquids at temperatures below room temperature such as indium-gallium, tin-gallium, and indium-tin-gallium. The capabilities of our UNIST collaborators have demonstrated outstanding applications for these composites, and we hope our work inspires others to discover new functional fillers with exciting applications."

Credit: 
Institute for Basic Science

See live cells with 7 times greater sensitivity using new microscopy technique

image: Researchers at the University of Tokyo have found a way to enhance the sensitivity of existing quantitative phase imaging so that all structures inside living cells can be seen simultaneously, from tiny particles to large structures. This artistic representation of the technique shows pulses of sculpted light (green, top) traveling through a cell (center), and exiting (bottom) where changes in the light waves can be analyzed and converted into a more detailed image.

Image: 
s-graphics.co.jp, CC BY-NC-ND

Experts in optical physics have developed a new way to see inside living cells in greater detail using existing microscopy technology and without needing to add stains or fluorescent dyes.

Since individual cells are almost transparent, microscope cameras must detect extremely subtle differences in the light passing through parts of the cell. Those differences are known as the phase of the light. Camera image sensors are limited in the range of light phase differences they can detect, a property referred to as dynamic range.

"To see greater detail using the same image sensor, we must expand the dynamic range so that we can detect smaller phase changes of light," said Associate Professor Takuro Ideguchi from the University of Tokyo Institute for Photon Science and Technology.

The research team developed a technique to take two exposures to measure large and small changes in light phase separately and then seamlessly connect them to create a highly detailed final image. They named their method adaptive dynamic range shift quantitative phase imaging (ADRIFT-QPI) and recently published their results in Light: Science & Applications.

"Our ADRIFT-QPI method needs no special laser, no special microscope or image sensors; we can use live cells, we don't need any stains or fluorescence, and there is very little chance of phototoxicity," said Ideguchi.

Phototoxicity refers to killing cells with light, which can become a problem with some other imaging techniques, such as fluorescence imaging.

Quantitative phase imaging sends a pulse of a flat sheet of light towards the cell, then measures the phase shift of the light waves after they pass through the cell. Computer analysis then reconstructs an image of the major structures inside the cell. Ideguchi and his collaborators have previously pioneered other methods to enhance quantitative phase microscopy.

Quantitative phase imaging is a powerful tool for examining individual cells because it allows researchers to make detailed measurements, like tracking the growth rate of a cell based on the shift in light waves. However, the quantitative aspect of the technique has low sensitivity because of the low saturation capacity of the image sensor, so tracking nanosized particles in and around cells is not possible with a conventional approach.

The new ADRIFT-QPI method has overcome the dynamic range limitation of quantitative phase imaging. During ADRIFT-QPI, the camera takes two exposures and produces a final image that has seven times greater sensitivity than traditional quantitative phase microscopy images.

The first exposure is produced with conventional quantitative phase imaging - a flat sheet of light is pulsed towards the sample and the phase shifts of the light are measured after it passes through the sample. A computer image analysis program develops an image of the sample based on the first exposure then rapidly designs a sculpted wavefront of light that mirrors that image of the sample. A separate component called a wavefront shaping device then generates this "sculpture of light" with higher intensity light for stronger illumination and pulses it towards the sample for a second exposure.

If the first exposure produced an image that was a perfect representation of the sample, the custom-sculpted light waves of the second exposure would enter the sample at different phases, pass through the sample, then emerge as a flat sheet of light, causing the camera to see nothing but a dark image.

"This is the interesting thing: We kind of erase the sample's image. We want to see almost nothing. We cancel out the large structures so that we can see the smaller ones in great detail," Ideguchi explained.

In reality, the first exposure is imperfect, so the sculpted light waves emerge with subtle phase deviations.

The second exposure reveals tiny light phase differences that were "washed out" by larger differences in the first exposure. These remaining tiny phase differences can be measured with increased sensitivity due to the stronger illumination used in the second exposure.

Additional computer analysis reconstructs a final image of the sample with an expanded dynamic range from the two measurement results. In proof-of-concept demonstrations, researchers estimate the ADRIFT-QPI produces images with seven times greater sensitivity than conventional quantitative phase imaging.
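Conceptually, the scheme is a coarse measurement followed by an amplified measurement of what the coarse estimate missed. The toy numerical sketch below is not the authors' algorithm; it only illustrates, under simplified noise assumptions, why cancelling the first estimate and re-measuring the small residual under stronger illumination expands the effective dynamic range.

```python
# Toy illustration of a two-exposure, cancel-and-remeasure scheme (not ADRIFT-QPI itself).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 1000)

# "True" phase profile: a large, slowly varying cell body plus tiny features.
true_phase = 2.0 * np.sin(2 * np.pi * x) + 0.002 * np.sin(40 * np.pi * x)

noise = 0.01  # single-exposure detection noise floor (arbitrary units)

# Exposure 1: conventional measurement, limited by the noise floor.
exposure1 = true_phase + noise * rng.standard_normal(x.size)

# A wavefront shaper cancels the estimated phase; the small residual is then
# measured under stronger illumination, modeled here as a 10x lower noise floor
# (a simplification -- the real gain depends on sensor saturation and shot noise).
residual_measured = (true_phase - exposure1) + (noise / 10) * rng.standard_normal(x.size)

final = exposure1 + residual_measured  # coarse estimate + finely measured residual

print("single-exposure error:", np.std(exposure1 - true_phase))  # ~0.01
print("two-exposure error:   ", np.std(final - true_phase))      # ~0.001 in this toy
```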

Ideguchi says that the true benefit of ADRIFT-QPI is its ability to see tiny particles in the context of the whole living cell without needing any labels or stains.

"For example, small signals from nanoscale particles like viruses or particles moving around inside and outside a cell could be detected, which allows for simultaneous observation of their behavior and the cell's state," said Ideguchi.

Credit: 
University of Tokyo

New virtual screening strategy identifies existing drug that inhibits Covid-19 virus

image: Colorized scanning electron micrograph of an apoptotic cell (pink) heavily infected with SARS-CoV-2 virus particles (green), isolated from a patient sample. Image captured at the NIAID Integrated Research Facility (IRF) in Fort Detrick, Maryland.

Image: 
National Institute of Allergy and Infectious Diseases/NIH, 2020 (CC0)

A novel computational drug screening strategy combined with lab experiments suggests that pralatrexate, a chemotherapy medication originally developed to treat lymphoma, could potentially be repurposed to treat Covid-19. Haiping Zhang of the Shenzhen Institutes of Advanced Technology in Shenzhen, China, and colleagues present these findings in the open-access journal PLOS Computational Biology.

With the Covid-19 pandemic causing illness and death worldwide, better treatments are urgently needed. One shortcut could be to repurpose existing drugs that were originally developed to treat other conditions. Computational methods can help identify such drugs by simulating how different drugs would interact with SARS-CoV-2, the virus that causes Covid-19.

To aid virtual screening of existing drugs, Zhang and colleagues combined multiple computational techniques that simulate drug-virus interactions from different, complementary perspectives. They used this hybrid approach to screen 1,906 existing drugs for their potential ability to inhibit replication of SARS-CoV-2 by targeting a viral protein called RNA-dependent RNA polymerase (RdRP).
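The general shape of such a hybrid screen is a funnel: every candidate is scored cheaply with learned models, and only the top-ranked hits receive expensive physics-based simulation before moving to the lab. The sketch below is schematic only; the two scoring functions are hypothetical stand-ins, not the authors' deep-learning or molecular-dynamics models.

```python
# Schematic virtual-screening funnel; the two scorers are dummy stand-ins.
import random

def learned_binding_score(drug_name: str) -> float:
    """Stand-in for a fast neural-network prediction of binding to RdRP (higher = better)."""
    return random.Random(drug_name).random()

def simulated_binding_energy(drug_name: str) -> float:
    """Stand-in for a costly molecular-dynamics binding-energy estimate (lower = better)."""
    return -10.0 * random.Random(drug_name + ":md").random()

def screen(drug_library: list[str], shortlist_size: int = 20) -> list[str]:
    ranked = sorted(drug_library, key=learned_binding_score, reverse=True)
    shortlist = ranked[:shortlist_size]                      # ~1,900 drugs -> a few dozen
    return sorted(shortlist, key=simulated_binding_energy)   # best predicted binders first

top_hits = screen([f"drug_{i:04d}" for i in range(1906)], shortlist_size=4)
print(top_hits)  # the handful of candidates that would go on to wet-lab testing
```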

The novel screening approach identified four promising drugs, which were then tested against SARS-CoV-2 in lab experiments. Two of the drugs, pralatrexate and azithromycin, successfully inhibited replication of the virus. Further lab experiments showed that pralatrexate more strongly inhibited viral replication than did remdesivir, a drug that is currently used to treat some Covid-19 patients.

These findings suggest that pralatrexate could potentially be repurposed to treat Covid-19. However, this chemotherapy drug can prompt significant side effects and is used for people with terminal lymphoma, so immediate use for Covid-19 patients is not guaranteed. Still, the findings support the use of the new screening strategy to identify drugs that could be repurposed.

"We have demonstrated the value of our novel hybrid approach that combines deep-learning technologies with more traditional simulations of molecular dynamics," Zhang says. He and his colleagues are now developing additional computational methods for generating novel molecular structures that could be developed into new drugs to treat Covid-19.

Credit: 
PLOS

Controlling the nanoscale structure of membranes is key for clean water, researchers find

UNIVERSITY PARK, Pa. -- A desalination membrane acts as a filter for salty water: push the water through the membrane, get clean water suitable for agriculture, energy production and even drinking. The process seems simple enough, but it contains complex intricacies that have baffled scientists for decades -- until now.

Researchers from Penn State, The University of Texas at Austin, Iowa State University, Dow Chemical Company and DuPont Water Solutions published a key finding in understanding how membranes actually filter minerals from water, online today (Dec. 31) in Science. The article will be featured on the print edition's cover, to be issued tomorrow (Jan. 1).

"Despite their use for many years, there is much we don't know about how water filtration membranes work," said Enrique Gomez, professor of chemical engineering and materials science and engineering at Penn State, who led the research. "We found that how you control the density distribution of the membrane itself at the nanoscale is really important for water-production performance."

Co-led by Manish Kumar, associate professor in the Department of Civil, Architectural and Environmental Engineering at UT Austin, the team used multimodal electron microscopy, which combines atomic-scale imaging with techniques that reveal chemical composition, to determine that desalination membranes are inconsistent in density and mass. The researchers mapped the density variations in the polymer film in three dimensions with a spatial resolution of approximately one nanometer -- that's less than half the diameter of a DNA strand. According to Gomez, this technological advancement was key in understanding the role of density in membranes.

"You can see how some places are more or less dense in a coffee filter just by your eye," Gomez said. "In filtration membranes, it looks even, but it's not at the nanoscale, and how you control that mass distribution is really important for water-filtration performance."

This was a surprise, Gomez and Kumar said, as it was previously thought that the thicker the membrane, the less water production. Filmtec, now a part of DuPont Water Solutions, which makes numerous desalination products, partnered with the researchers and funded the project because their in-house scientists found that thicker membranes were actually proving to be more permeable.

The researchers found that the thickness does not matter as much as avoiding highly dense nanoscale regions, or "dead zones." In a sense, a more consistent density throughout the membrane is more important than thickness for maximizing water production, according to Gomez.
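A back-of-the-envelope way to see why a dense "dead zone" matters more than overall thickness: for transport through layers in series, resistances add, so a single low-permeability region can throttle the whole membrane. The numbers below are purely illustrative, not measurements from the study.

```python
# Illustrative only: water flux through membrane "slices" in series, where each
# slice contributes resistance = thickness / permeability.
def flux(permeabilities, slice_thickness=1.0, driving_pressure=1.0):
    resistance = sum(slice_thickness / k for k in permeabilities)
    return driving_pressure / resistance

uniform = [1.0] * 10                  # ten identical slices
with_dead_zone = [1.0] * 9 + [0.05]   # same total thickness, but one slice 20x denser

print(flux(uniform))         # 0.100
print(flux(with_dead_zone))  # ~0.034 -- a single dead zone cuts the flux by about two thirds
```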

This understanding could increase membrane efficiency by 30% to 40%, according to the researchers, resulting in more water filtered with less energy -- a potential cost-saving update to current desalination processes.

"Reverse osmosis membranes are so widely used for cleaning water, but there's still a lot we don't know about them," Kumar said. "We couldn't really say how water moves through them, so all the improvements over the last 40 years have essentially been done in the dark."

Reverse osmosis membranes work by applying pressure on one side. The minerals stay there, while the water passes through. While more efficient than non-membrane desalination processes, this still takes an immense amount of energy, the researchers said, but improving the efficiency of the membranes could reduce that burden.

"Freshwater management is becoming a crucial challenge throughout the world," Gomez said. "Shortages, droughts -- with increasing severe weather patterns, it is expected this problem will become even more significant. It's critically important to have clean water available, especially in low resource areas."

The team continues to study the structure of the membranes, as well as the chemical reactions involved in the desalination process. They are also examining how to develop the best membranes for specific materials, such as sustainable yet tough membranes that can prevent bacterial growth.

"We're continuing to push our techniques with more high-performance materials with the goal of elucidating the crucial factors of efficient filtration," Gomez said.

Credit: 
Penn State

Spontaneous robot dances highlight a new kind of order in active matter

image: When a swarm of smarticles is made to interact in a confined space, they form stunningly symmetric dances whose choreography emerges spontaneously from the physics of low rattling.

Image: 
Thomas A. Berrueta

Predicting when and how collections of particles, robots, or animals become orderly remains a challenge across science and engineering.

In the 19th century, scientists and engineers developed the discipline of statistical mechanics, which predicts how groups of simple particles transition between order and disorder, as when a collection of randomly colliding atoms freezes to form a uniform crystal lattice.

More challenging to predict are the collective behaviors that can be achieved when the particles become more complicated, such that they can move under their own power. This type of system - observed in bird flocks, bacterial colonies and robot swarms - goes by the name "active matter".

As reported in the January 1, 2021 issue of the journal Science, a team of physicists and engineers have proposed a new principle by which active matter systems can spontaneously order, without need for higher level instructions or even programmed interaction among the agents. And they have demonstrated this principle in a variety of systems, including groups of periodically shape-changing robots called "smarticles" - smart, active particles.

The theory, developed by Dr. Pavel Chvykov at the Massachusetts Institute of Technology while a student of Prof. Jeremy England, who is now a researcher in the School of Physics at Georgia Institute of Technology, posits that certain types of active matter with sufficiently messy dynamics will spontaneously find what the researchers refer to as "low rattling" states.

"Rattling is when matter takes energy flowing into it and turns it into random motion," England said. "Rattling can be greater either when the motion is more violent, or more random. Conversely, low rattling is either very slight or highly organized -- or both. So, the idea is that if your matter and energy source allow for the possibility of a low rattling state, the system will randomly rearrange until it finds that state and then gets stuck there. If you supply energy through forces with a particular pattern, this means the selected state will discover a way for the matter to move that finely matches that pattern."

To develop their theory, England and Chvykov took inspiration from a phenomenon - dubbed thermophoresis - discovered by the Swiss physicist Charles Soret in the late 19th century. In Soret's experiments, he discovered that subjecting an initially uniform salt solution in a tube to a difference in temperature would spontaneously lead to an increase in salt concentration in the colder region -- which corresponds to an increase in order of the solution.

Chvykov and England developed numerous mathematical models to demonstrate the low rattling principle, but it wasn't until they connected with Daniel Goldman, Dunn Family Professor of Physics at the Georgia Institute of Technology, that they were able to test their predictions.

Said Goldman, "A few years back, I saw England give a seminar and thought that some of our smarticle robots might prove valuable to test this theory." Working with Chvykov, who visited Goldman's lab, Ph.D. students William Savoie and Akash Vardhan used three flapping smarticles enclosed in a ring to compare experiments to theory. The students observed that instead of displaying complicated dynamics and exploring the container completely, the robots would spontaneously self-organize into a few dances -- for example, one dance consists of three robots slapping each other's arms in sequence. These dances could persist for hundreds of flaps, but suddenly lose stability and be replaced by a dance of a different pattern.

After first demonstrating that these simple dances were indeed low rattling states, Chvykov worked with engineers at Northwestern University, Prof. Todd Murphey and Ph.D. student Thomas Berrueta, who developed more refined and better controlled smarticles. The improved smarticles allowed the researchers to test the limits of the theory, including how the types and number of dances varied for different arm flapping patterns, as well as how these dances could be controlled. "By controlling sequences of low rattling states, we were able to make the system reach configurations that do useful work," Berrueta said. The Northwestern University researchers say that these findings may have broad practical implications for microrobotic swarms, active matter, and metamaterials.

As England noted: "For robot swarms, it's about getting many adaptive and smart group behaviors that you can design to be realized in a single swarm, even though the individual robots are relatively cheap and computationally simple. For living cells and novel materials, it might be about understanding what the 'swarm' of atoms or proteins can get you, as far as new material or computational properties."

Credit: 
Georgia Institute of Technology

New proposal for how aerosols drive increased atmospheric convection in thunderstorm clouds

High in the clouds, atmospheric aerosols, including anthropogenic air pollutants, increase updraft speeds in storm clouds by making the surrounding air more humid, a new study finds. The results offer a new mechanism explaining the widely observed - but poorly understood - atmospheric phenomenon and provide a physical basis for predicting increasing thunderstorm intensity, particularly in the high-aerosol regions of the tropics. Observations worldwide have highlighted aerosols' impact on weather, including their ability to strengthen convection in deep convective clouds, like those that form during thunderstorms, resulting in larger and more severe storms. Previous studies have suggested two mechanisms by which aerosol concentrations could affect the intensity of convection - both involving the release of latent heat into the atmosphere as moisture within clouds condenses onto airborne particles (the "warm phase") or freezes onto them (the "cold phase"). However, the link between aerosols and increased convection remains unclear and represents a major obstacle to understanding current and future severe weather risks - a particularly salient topic as human activities have become a significant source of atmospheric aerosols. To address this, Tristan Abbott and Timothy Cronin use the System for Atmospheric Modeling (SAM), an atmospheric model that can simulate detailed cloud processes, to study cloud-aerosol interactions. While the results show that the high-resolution simulations could reproduce the observed link between aerosols and convection, Abbott and Cronin found that neither of the previously proposed mechanisms can fully explain this invigoration. The authors offer a third possibility: high aerosol concentrations increase environmental humidity by producing more clouds, which can mix more condensed water into the surrounding air. Because humid air favors stronger updrafts, atmospheric convection can intensify, producing invigorated thunderstorms.

Credit: 
American Association for the Advancement of Science (AAAS)

Model predicts global threat of sinking land will affect 635 million people worldwide

A new analysis suggests that, by 2040, 19% of the world's population - accounting for 21% of the global Gross Domestic Product - will be impacted by subsidence, the sinking of the ground's surface, a phenomenon often caused by human activities such as groundwater removal, and by natural causes as well. The results, reported in a Policy Forum, represent "a key first step toward formulating effective land-subsidence policies that are lacking in most countries worldwide," the authors say. Gerardo Herrera Garcia et al. performed a large-scale literature review that revealed that during the past century, land subsidence due to groundwater depletion occurred at 200 locations in 34 countries. During the next decades, factors including global population and economic growth, exacerbated by droughts, will probably increase land subsidence occurrence and related damages or impacts, they say. Policies that implement subsidence modeling in exposed areas, constant monitoring of high-risk areas, damage evaluation, and cost-effective countermeasures could help reduce the impacts of subsidence where it will hit hardest - namely, areas with increased population density, high groundwater demand, and irrigated areas suffering water stress. Towards informing such policies, the authors developed a model by combining spatial and statistical analyses that identified an area's subsidence susceptibility based on factors like flooding and groundwater depletion caused by human activities. Comparing their model to independent validation datasets revealed it was 94% capable of distinguishing between subsidence and non-subsidence areas. Notably, the model also revealed that most of the 635 million inhabitants in subsidence-susceptible areas are located in Asia, with a total exposed GDP of $9.78 trillion. While the model does not consider existing mitigation measures, potentially resulting in overestimates of subsidence exposure, their results still represent a step toward effective policies, Herrera et al. say.

Credit: 
American Association for the Advancement of Science (AAAS)

Microfabricated elastic diamonds improve material's electronic properties

Overcoming a key obstacle in achieving diamond-based electronic and optoelectronic devices, researchers have presented a new way to fabricate micrometer-sized diamonds that can elastically stretch. Elastic diamonds could pave the way for advanced electronics, including semiconductors and quantum information technologies. In addition to being the hardest materials in nature, diamonds have exceptional electronic and photonic properties, featuring both ultrahigh thermal and electric conductivity. Not only would diamond-based electronics dissipate heat more quickly, reducing the need for cooling, they could also handle high voltages and do so with greater efficiency than most other materials. Because of a diamond's rigid crystalline structure, however, practical use of the material in electronic devices has remained a challenge. Subjecting diamond to large amounts of strain, which should alter the material's electronic properties, is one way to potentially overcome these obstacles. However, precisely controlling the strain across the amounts of diamond needed for device applications has yet to be fully achieved. Here, Chaoqun Dang and colleagues present an approach for engineering diamond that exhibits uniform elastic strain. In a series of experiments, Dang et al. show how their microfabricated, micrometer-sized, single-crystalline diamond plates can elastically stretch - upwards of 10% - along several different crystallographic directions at room temperature. The plates recovered their length and shape following these experiments. What's more, the authors show that this highly controllable elasticity can fundamentally change the diamond's electronic properties, including a bandgap reduction of nearly 2 electron volts.

Credit: 
American Association for the Advancement of Science (AAAS)

Study points the way to boost immunotherapy against breast cancer, other solid tumors

image: UNC Lineberger Comprehensive Cancer Center's Jonathan S. Serody, MD, and colleagues report that adding a small molecule to a chimeric antigen receptor-T (CAR-T) cell therapy can help immune system T cells to effectively attack solid tumors, such as breast cancers. The boost helps recruit more immune cells into battle at the tumor site, according to the study published in the Journal of Experimental Medicine.

Image: 
UNC Lineberger Comprehensive Cancer Center

CHAPEL HILL, NC--Boosting immune system T cells to effectively attack solid tumors, such as breast cancers, can be done by adding a small molecule to a treatment procedure called chimeric antigen receptor-T (CAR-T) cell therapy, according to a study by researchers at the UNC Lineberger Comprehensive Cancer Center. The boost helps recruit more immune cells into battle at the tumor site. The findings are published in the Journal of Experimental Medicine.

CAR-T immunotherapy, in which T cells are modified in the laboratory to express chimeric antigen receptors, CARs, that in turn target surface proteins on cancer cells, has been most effective in the treatment of patients with B-cell leukemia or lymphoma. But this new research, conducted in mouse models, points to the potential for using CAR-T therapy effectively against solid tumors as well.

"We know that CAR T cells are safe for patients with solid tumors but so far they have not been able to cause significant tumor regression in the overwhelming majority of people treated," said Jonathan S. Serody, MD, the Elizabeth Thomas Professor of Medicine, Microbiology and Immunology and director of the Immunotherapy Program at UNC Lineberger. "Now we may have a new approach to make CAR T cells work in solid tumors, which we think could be a game-changer for therapies aimed at an appreciable number of cancers."

Serody is the paper's corresponding author and Nuo Xu, PhD, formerly a graduate student at UNC Lineberger and UNC School of Medicine, is the first author.

For CAR-T cell therapy to be effective, T cells infused back into patients have to be able to migrate to the site of a tumor. In treating patients with non-solid tumors, such as lymphomas, CAR T cells home in on bone marrow and other organs that make up the lymphatic system. But for solid tumors, such as breast cancer, that is usually not the case. Even if they do migrate to the tumor, they don't persist and expand well there due to the nature of the microenvironment surrounding such tumors, noted Serody.

So Serody and colleagues looked for ways to direct the lab-expanded cells toward the site of solid tumors. They focused on Th17 and Tc17 cells, which are known to have longer persistence in the micro-environment that surrounds a tumor, in part due to their better survival capabilities. To boost accumulation of Th17 and Tc17 cells near solid tumors, they turned to two small molecules that can activate an immune response: the stimulator of interferon genes (STING) agonists DMXAA and cGAMP.

DMXAA, which worked well in the investigators' mouse studies, has not provided benefit in human clinical trials as it does not activate human STING. The other STING agonist, cGAMP, however, does activate human STING and is known to boost the human immune system. It also works well in mice.

In Serody's experiments, mice injected with cGAMP exhibited enhanced proliferation of T cells and those cells migrated to the tumor site. The end result was a significant decrease in tumor growth and enhanced survival.

"We hope to be able to study cGAMP in humans fairly soon," concluded Serody. "We will look to see if we can produce improvements in the treatment of head and neck cancers first, and if that proves promising, move into other forms of cancer by using CAR T cells generated by one of our colleagues here at UNC."

UNC Lineberger is one of a select few academic centers in the United States with the scientific, technical and clinical capabilities to develop and deliver CAR-T immunotherapy to patients. The cancer center currently has nine CAR-T clinical trials open and is developing new trials to treat a number of solid tumors, including ovarian and head and neck cancer. It also offers patients commercially available CAR-T therapies.

Credit: 
UNC Lineberger Comprehensive Cancer Center

Desalination breakthrough could lead to cheaper water filtration

video: Producing clean water at a lower cost could be on the horizon after researchers from The University of Texas at Austin and Penn State solved a complex problem that has baffled scientists for decades, until now.

Image: 
The University of Texas at Austin, Penn State

Producing clean water at a lower cost could be on the horizon after researchers from The University of Texas at Austin and Penn State solved a complex problem that has baffled scientists for decades, until now.

Desalination membranes remove salt and other chemicals from water, a process critical to the health of society, cleaning billions of gallons of water for agriculture, energy production and drinking. The idea seems simple -- push salty water through and clean water comes out the other side -- but it contains complex intricacies that scientists are still trying to understand.

The research team, in partnership with DuPont Water Solutions, solved an important aspect of this mystery, opening the door to reduce costs of clean water production. The researchers determined desalination membranes are inconsistent in density and mass distribution, which can hold back their performance. Uniform density at the nanoscale is the key to increasing how much clean water these membranes can create.

"Reverse osmosis membranes are widely used for cleaning water, but there's still a lot we don't know about them," said Manish Kumar, an associate professor in the Department of Civil, Architectural and Environmental Engineering at UT Austin, who co-led the research. "We couldn't really say how water moves through them, so all the improvements over the past 40 years have essentially been done in the dark."

The findings were published today in Science.

The paper documents a 30%-40% increase in the efficiency of the membranes tested, meaning they can clean more water while using significantly less energy. That could lead to increased access to clean water and lower water bills for individual homes and large users alike.

Reverse osmosis membranes work by applying pressure to the salty feed solution on one side. The minerals stay there while the water passes through. Although more efficient than non-membrane desalination processes, it still takes a large amount of energy, the researchers said, and improving the efficiency of the membranes could reduce that burden.

"Fresh water management is becoming a crucial challenge throughout the world," said Enrique Gomez, a professor of chemical engineering at Penn State who co-led the research. "Shortages, droughts -- with increasing severe weather patterns, it is expected this problem will become even more significant. It's critically important to have clean water availability, especially in low-resource areas."

The National Science Foundation and DuPont, which makes numerous desalination products, funded the research. The seeds were planted when DuPont researchers found that thicker membranes were actually proving to be more permeable. This came as a surprise because the conventional knowledge was that thickness reduces how much water could flow through the membranes.

The team connected with Dow Water Solutions, which is now a part of DuPont, in 2015 at a "water summit" Kumar organized, and they were eager to solve this mystery. The research team, which also includes researchers from Iowa State University, developed 3D reconstructions of the nanoscale membrane structure using state-of-the-art electron microscopes at the Materials Characterization Lab of Penn State. They modeled the path water takes through these membranes to predict how efficiently water could be cleaned based on structure. Greg Foss of the Texas Advanced Computing Center helped visualize these simulations, and most of the calculations were performed on Stampede2, TACC's supercomputer.

Credit: 
University of Texas at Austin

Researchers measure, model desalination membranes to maximize flow, clean more water

image: This 3D model of a polymer desalination membrane shows water flow -- the silver channels, moving from top to bottom -- avoiding dense spots in the membrane and slowing flow.

Image: 
Image by the Ganapathysubramanian research group/Iowa State University and Gregory Foss/Texas Advanced Computing Center.

AMES, Iowa - Nature has figured out how to make great membranes.

Biological membranes let the right stuff into cells while keeping the wrong stuff out. And, as researchers noted in a paper just published by the journal Science, they are remarkable and ideal for their job.

But they're not necessarily ideal for high-volume, industrial jobs such as pushing saltwater through a membrane to remove salt and make fresh water for drinking, irrigating crops, watering livestock or creating energy.

Can we learn from those high-performing biological membranes? Can we apply nature's homogeneous design strategies to manufactured, polymer membranes? Can we quantify what makes some of those industrial membranes perform better than others?

Researchers from Iowa State University, Penn State University, the University of Texas at Austin, DuPont Water Solutions and Dow Chemical Co. - led by Enrique Gomez of Penn State and Manish Kumar of Texas - have used transmission electron microscopy and 3D computational modeling to look for answers.

Iowa State's Baskar Ganapathysubramanian, the Joseph C. and Elizabeth A. Anderlik Professor in Engineering from the department of mechanical engineering, and Biswajit Khara, a doctoral student in mechanical engineering, contributed their expertise in applied mathematics, high-performance computing and 3D modeling to the project.

The researchers found that creating a uniform membrane density down to the nanoscale of billionths of a meter is crucial for maximizing the performance of reverse-osmosis, water-filtration membranes. Their discovery has just been published online by the journal Science and will be the cover paper of the Jan. 1 print edition.

Working with Penn State's transmission electron microscope measurements of four different polymer membranes used for water desalination, the Iowa State engineers predicted water flow through 3D models of the membranes, allowing detailed comparative analysis of why some membranes performed better than others.

"The simulations were able to tease out that membranes that are more uniform - that have no 'hot spots' - have uniform flow and better performance," Ganapathysubramanian said. "The secret ingredient is less inhomogeneity."

Just take a look at the Science cover image the Iowa State researchers created with assistance from the Texas Advanced Computing Center, said Khara: Red above the membrane shows water under higher pressure and with higher concentrations of salt; the gold, granular, sponge-like structure in the middle shows denser and less-dense areas within the salt-stopping membrane; silver channels show how water flows through; and the blue at the bottom shows water under lower pressure and with lower concentrations of salt.

"You can see huge amounts of variation in the flow characteristics within the 3D membranes," Khara said.

Most telling are the silver lines showing water moving around dense spots in the membrane.

"We're showing how water concentration changes across the membrane." Ganapathysubramanian said of the models which required high-performance computing to solve. "This is beautiful. It has not been done before because such detailed 3D measurements were unavailable, and also because such simulations are non-trivial to perform."

Khara added, "The simulations themselves posed computtional challenges, as the diffusivity within an inhomogeneous membrane can differ by six orders of magnitude"

So, the paper concludes, the key to better desalination membranes is figuring out how to measure and control at very small scales the densities of manufactured membranes. Manufacturing engineers and materials scientists need to make the density uniform throughout the membrane, thus promoting water flow without sacrificing salt removal.

It's one more example of the computational work from Ganapathysubramanian's lab helping to solve a very fundamental yet practical problem.

"These simulations provided a lot of information for figuring out the key to making desalination membranes much more effective," said Ganapathysubramanian, whose work on the project was partly supported by two grants from the National Science Foundation.

Credit: 
Iowa State University