Tech

Analysis: Chile's transition to democracy slow, incomplete, fueled by social movements

A new article analyzes Chile's transition in 1990 from dictatorship to democracy, the nature of democracy between 1990 and 2019, and the appearance of several social movements geared to expanding this democracy. The article, by researchers at Carnegie Mellon University (CMU), appears in The Latin Americanist, a publication of the Southeastern Council of Latin American Studies.

"Our goal is to locate the October 2019 protest movement in the context of Chile's very slow and incomplete transition to democracy, as well as amid social movements that have consistently challenged the economic system and the democracy of elites that emerged after the end of the dictatorship in 1989," explains Silvia Borzutzky, teaching professor of political science and international relations at CMU's Heinz College, who cowrote the article.

The article presents a range of expert viewpoints on Chilean history, as well as the authors' analysis of how Chile's political and economic system and previous social movements culminated in what they call "a social explosion" in October 2019.

Chile's political system, in both its origins and its performance, lacked legitimacy and created the space for the emergence of several social movements, the authors argue. These included the Mapuche people's movement to maintain autonomy and ownership of ancestral lands; the feminist movement, which focused on advancing women's rights, reducing poverty and maternal mortality, and strengthening laws on gendered violence; three distinct student movements; and a movement that sought changes in pension systems.

About 15 years after the end of the dictatorship, a new generation of Chileans began to see the government's inability to address educational issues, pensions, public transportation, and indigenous and women's rights. They took to the streets to demand change, and their movements became an almost permanent fixture of Chile's political landscape, the authors argue.

Although then-President Michelle Bachelet tried in 2006 to expand participation by moving from an elitist democracy to a democracy by commissions, the effort's failure deepened dissatisfaction with politics, a perceived lack of representation, and persistent inequality, the authors suggest. Despite a stable economy, the political system fractured and remained largely incapable of meeting socioeconomic demands.

The authors cite these grievances and government-initiated violence to explain the massive protests of October 2019, known as the October 18 movement, but point out that other factors were also at play. By December 28, 2019, 27 people had died, nearly 2,500 had been injured, and 2,840 had been arrested, according to a United Nations investigation.

The article concludes by analyzing the demands made by the protesters and the government's responses to the October 18 movement. The authors also address the role of the COVID-19 pandemic, which delayed a referendum on a new constitution. Although the cycle of protests appears to have been tamed by the promise of a new constitution, the authors note that dissatisfaction could spur new protests at any time.

"The October 18 movement is both a continuation of and the culmination of many previous protests and actions," says Sarah Perry, a 2021 Master of Public Policy and Management graduate from CMU's Heinz College, who coauthored the article. "Because the country experienced a deficit of democratic values, these social movements were able to find their place, and to demand specific rights and benefits; they highlight the illegitimate nature of the political and economic systems in Chile."

Credit: 
Carnegie Mellon University

Investigating carbonate mineral chemical variations to improve oil recovery

Dr. Igor Ivanishin, a postdoctoral researcher in the Harold Vance Department of Petroleum Engineering at Texas A&M University, has firsthand experience with the frustrations of oil production. He spent nine years as a hydraulic fracturing engineer with operating and service companies in Russia. A few years ago, he came to Texas A&M to pursue his doctoral degree while delving into a recurring recovery problem in carbonate reservoirs: why don't they produce oil as predicted?

Ivanishin is investigating variations in the chemical composition of dolomite and calcite minerals to show why a one-size-fits-all approach to well stimulation in carbonate reservoirs doesn't always work. Because these formations occur worldwide, his research has attracted the attention of several major oil and gas companies that want to collaborate with him to improve well stimulation operations.

The crystal lattice of ideal dolomite has regularly alternating layers of calcium and magnesium. When dolomite forms naturally in sedimentary rock, extra calcium ions can substitute for magnesium ions. This substitution expands the crystal lattice and makes it less stable. A similar situation occurs in calcite, a mineral that contains no magnesium or other substituting ions in its pure form but can in reservoirs.

Such variations are typical in sedimentary rocks but are not yet considered in well stimulation software models. Current modeling methods assume both dolomite and calcite have an ideal chemical composition that does not vary spatially within the carbonate reservoir. Thus, reservoir rock is thought to react at the same rate everywhere when acids are injected to dissolve the rock and form the channels or wormholes for oil and other hydrocarbons to travel through.

"I found publications that reported the presence of impurities in carbonates, but the authors did not think about variation in the chemical structure of these minerals," said Ivanishin. "These are angstrom-level tiny things, so it's difficult to imagine that such a small-size variation in chemical composition may affect the stability of the mineral, but it does."

As a doctoral student, Ivanishin consulted with geologists, mineralogists and geochemists on the subject. He received, and personally collected, dolomite samples from around the world. Initial chemical composition analysis of the different samples helped him select dolomites with varying amounts of excess calcium. Reacting these samples with hydrochloric acid revealed that excess calcium increased the dissolution rate of dolomite by up to five times. He concluded that because the chemical composition of dolomite varies spatially, injected acids would dissolve the rock unevenly in the target zone and fail to travel farther into the reservoir, leaving some areas untouched.

For his postdoctoral research, Ivanishin is working with a large collection of calcites from Japan. He wants to determine if magnesium ions in calcite also change the dissolution rate of this mineral in acids. If calcite behaves the same way as dolomite, this should affect the design of stimulation treatments and other operations in carbonate formations, such as CO2 injection.

Ivanishin is currently working on creating computer simulations of these molecular variances and associated dissolution reactions so they can be easily shared and studied. His goal is to provide information to companies and consult with them on applications of this discovery in the field.
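
The contrast between those two modelling assumptions can be sketched in a few lines. The toy model below assumes a hypothetical linear scaling between excess calcium and local dissolution rate, capped at the roughly fivefold speed-up reported above; it is an illustration of the idea, not Ivanishin's simulation.

```python
import numpy as np

# Toy 1D model of acid consumption along an injection path, contrasting the
# usual assumption (uniform dissolution rate) with a rock whose rate varies
# spatially with excess calcium. The 5x factor follows the article; the
# linear scaling and every other parameter are illustrative assumptions.

n_cells = 100                        # discretised rock column along the path
rng = np.random.default_rng(42)

# Hypothetical excess-calcium fraction per cell (0 = ideal dolomite).
excess_ca = rng.uniform(0.0, 1.0, n_cells)

def acid_penetration(rate_multipliers, acid_budget=200.0):
    """Return how many cells the acid dissolves before it is spent.

    Each cell consumes acid in proportion to its local dissolution rate,
    so fast-reacting (calcium-rich) cells near the inlet use up acid that
    would otherwise have travelled deeper into the reservoir.
    """
    spent, reached = 0.0, 0
    for k in rate_multipliers:
        spent += k                   # acid consumed dissolving this cell
        if spent > acid_budget:
            break
        reached += 1
    return reached

uniform = np.ones(n_cells)           # ideal-composition assumption
variable = 1.0 + 4.0 * excess_ca     # up to 5x faster, per the article

print("cells reached, uniform rock :", acid_penetration(uniform))
print("cells reached, variable rock:", acid_penetration(variable))
```

In this toy run the calcium-rich cells near the inlet exhaust the acid budget early, so the treatment stops well short of where a uniform-rate model predicts it should reach - the same qualitative outcome Ivanishin describes.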

Though the investigation requires hard work and long hours, Ivanishin is glad the problem led him to College Station, Texas. Years ago, he talked with visiting international speakers at his former job about the carbonate recovery issue, including professors from Texas A&M. He decided to explore the university in person as a visiting student, then came back when he discovered it was the right place to be.

"I decided the next step in my career should be a Ph.D. from one of the best universities in the world," said Ivanishin. "The experience obtained here, talking with people from different companies, working with other engineers, exchanging ideas with experts from different fields and gathering information, is like a point of contact with the whole petroleum engineering world."

Credit: 
Texas A&M University

Balanced rocks set design ground motion values for New Zealand dam

image: Fragile geologic feature, Clyde Dam area, New Zealand.

Image: 
Mark Stirling

For the first time, researchers have used precariously balanced rocks to set the formal design earthquake motions for a major existing engineered structure--the Clyde Dam, the largest concrete dam in New Zealand.

Mark Stirling of the University of Otago and colleagues identified and assessed the ages of these gravity-defying rock formations located about 2 kilometers from the dam site, using these data to determine the peak ground accelerations that the rocks could withstand before toppling.

This in turn was used to set the Safety Evaluation Earthquake (SEE) spectrum for the dam: the peak earthquake ground motions expected with a return period of 10,000 years, which govern the safety assessment and seismic design of the structure.

As the researchers report in the Bulletin of the Seismological Society of America, the peak ground acceleration for the new SEE spectrum, developed from the rock data as well as an updated seismic hazard model for the region, is significantly reduced compared to their preliminary estimates developed in 2012.

However, the new design ground motion values are similar to those used--by chance--when the dam was built in the 1980s. "There is nothing that needs to be done in the way of dam strengthening," said Stirling. "However, the study shows all the relevant authorities that the dam is compliant with modern regulations."

The study also "serves as an important proof-of-concept for future applications of fragile geologic features (FGFs) in engineering design," Stirling and his colleagues write.

FGFs are especially useful in setting engineering design parameters in places where the period between relevant earthquakes is very long--10,000 years or more. In these cases, the geologic features can help test probabilistic seismic hazard estimates. While seismologists have explored the usefulness of these features for other engineering design projects, such as the canceled Yucca Mountain nuclear waste repository in Nevada and the Diablo Canyon power plant in California, the Clyde Dam is the first to use fragile features to set design ground motion.

The Clyde Dam is located in the Central Otago "Range and Basin" region of the southern part of New Zealand's South Island. On a broad plateau located southwest of the dam called Cairnmuir flat, outcrops of schist rock that stick up above the landscape are carved by erosion into potentially unstable configurations.

In a painstaking effort, Stirling and colleagues identified these precariously balanced rocks and took field measurements of their geometries to estimate their fragility. The researchers then analyzed the formations using radionuclide data, which estimate how long a rock surface has been exposed to the atmosphere. These data can show how long a rock has been balanced in a specific position.

"In terms of data collection, it was the FGF age estimation that was most challenging," said Stirling. "It required specialist input, hard physical work, and there were usually large uncertainties in interpreting the dates to say how long the FGFs had been fragile."

By combining these data with information on past earthquakes along the nearby Dunstan fault, Stirling and colleagues concluded that the rocks at Cairnmuir flat had been poised in their unstable positions since at least 24,000 years ago. This suggests that all of them have survived at least two Dunstan fault earthquakes.

The researchers then combined the fragility estimates for all the precariously balanced rocks in their study into a single distribution, based on peak ground acceleration, and determined the shaking level that would topple a randomly selected fragile rock with greater than 95% probability. This information was used to recommend a new SEE spectrum for the dam site.
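
The logic of that calculation can be illustrated with a short sketch. The fragility medians and dispersion below are invented placeholders (the study derives fragilities from field measurements of rock geometry), but the structure is the same: assign each surviving rock a fragility curve, average the curves into a distribution, and scan for the shaking level that crosses the 95% toppling threshold.

```python
import numpy as np
from scipy import stats

# Sketch of the fragility-distribution logic. Each surviving rock gets a
# lognormal fragility curve (toppling probability as a function of peak
# ground acceleration, PGA). Medians and dispersion here are invented for
# illustration; the study derives them from field geometry measurements.

rng = np.random.default_rng(0)
medians_g = rng.uniform(0.3, 0.8, size=25)   # hypothetical toppling medians (g)
beta = 0.3                                   # assumed lognormal dispersion

def toppling_prob(pga_g):
    """Mean probability that a randomly chosen rock topples at this PGA."""
    return stats.lognorm.cdf(pga_g, s=beta, scale=medians_g).mean()

# Find the smallest PGA at which a random rock topples with >95% probability.
# Because the rocks are still standing after ~24,000 years, shaking at or
# above this level can be ruled out for the site over that period.
for pga in np.arange(0.1, 2.0, 0.01):
    if toppling_prob(pga) > 0.95:
        print(f"PGA bound from the fragility distribution: ~{pga:.2f} g")
        break
```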

Preliminary probabilistic seismic hazard calculations for the site had suggested that "the FGFs in the area would be knocked down by these strong ground motions if they occurred--it's easy to roughly estimate the fragility of the features by eye in the field," Stirling explained. But since the features are still standing, it was only a matter of time and research, he added, before the Clyde Dam hazard estimates were revised.

Credit: 
Seismological Society of America

Inkjet printing shows promise as new strategy for making e-textiles, study finds

In a new study, North Carolina State University researchers demonstrated they could print layers of electrically conductive ink on polyester fabric to make an e-textile that could be used in the design of future wearable devices.

Since the printing method can be completed at room temperature and in normal atmospheric conditions, the researchers believe inkjet printing could offer a simpler and more effective method of manufacturing electronic textiles, also known as e-textiles. In addition, the researchers said the findings suggest that techniques common in the flexible electronics industry could be extended to textile manufacturing. They reported their findings in the journal ACS Applied Materials & Interfaces.

"Inkjet printing is a rapidly advancing new technology that's used in flexible electronics to make films used in cellphone displays and other devices," said the study's corresponding author Jesse S. Jur, professor of textile engineering, chemistry and science at NC State. "We think this printing method, which uses materials and processes that are common in both the electronics and textiles industries, also shows promise for making e-textiles for wearable devices."

In the study, the researchers described how they used a FUJIFILM Dimatix inkjet printer to create a durable and flexible e-textile material, the steps they took to produce it reliably, and its properties. Part of their challenge was finding the right composition of materials so the liquid ink would not seep through the porous surface of the textile and lose its ability to conduct electricity.

"Printing e-textiles has been a very big challenge for the e-textile industry," said the study's first author Inhwan Kim, a former graduate student at NC State. "We wanted to build a structure layer by layer, which has not been done on a textile layer with inkjet printing. It was a big struggle for us to find the right material composition."

They created the e-textile by printing layers of electrically conductive silver ink, sandwich-like, around layers of two liquid materials that acted as insulators. They printed those sandwich layers on top of a woven polyester fabric. After printing the layers of silver ink and insulating materials - urethane acrylate and poly(4-vinylphenol) - they examined the surface of the material under a microscope. They found that the chemical properties of the insulating materials, as well as of the textile yarns, were important for maintaining the conductivity of the liquid silver ink and preventing it from penetrating the porous fabric.

"We wanted a robust insulation layer in the middle, but we wanted to keep it as thin as possible to have the entire structure thin, and have the electric performance as high as possible," Kim said. "Also, if they are too bulky, people will not want to wear them."

The researchers evaluated the electrical performance of the e-textile after bending the material repeatedly. Over more than 100 bending cycles, the e-textile did not lose its electrical performance. In future work, they want to bring the material's electrical performance up to that of e-textiles created using methods that require special facilities and atmospheric conditions, as well as increase the material's breathability.

Eventually, they want to use the printing method to create an e-textile that could be used in wearable electronics such as biomedical devices that could track heart rate, or used as a battery to store power for electronic devices.

"We were able to coat the ink on the fabric in a multi-layer material that's both durable and flexible," Kim said. "The beauty of this is, we did everything with an inkjet printer - we didn't use any lamination or other methodologies."

Credit: 
North Carolina State University

How do social media influence ethnic polarization?

People who deactivated their Facebook profiles reported lower regard for other ethnic groups, and this effect was more pronounced among those living in more ethnically homogeneous areas, shows a new study of users in Bosnia and Herzegovina (BiH). The findings run counter to a commonly held view that social media usage exacerbates societal polarization.

The work, conducted by researchers at New York University's Center for Social Media and Politics (CSMaP), appears in the Proceedings of the National Academy of Sciences (PNAS).

"For all our attention to the online drivers of polarization, we should not forget about the importance of offline factors as well," observes Nejla Asimovic, a doctoral candidate in NYU's Department of Politics and the lead author of the paper.

While a majority of Americans see social media as having a negative effect on the way things are going in the U.S. today and as fomenting polarization, according to a recent survey by the Pew Research Center, the impact of social media on inter-ethnic attitudes has yet to be rigorously evaluated.

In the PNAS research, the paper's authors conducted an experiment in early July of 2019, coinciding with the 24th anniversary of the Srebrenica genocide. This period commemorating the 1995 atrocities--resulting in the deaths of over eight thousand Bosniak Muslims at the hands of Bosnian Serb forces--was chosen because of the heightened discourse around the past conflict during the studied days (July 7-July 14).

Participants were recruited through Facebook advertisements in BiH using both the Cyrillic and Latin alphabets. The more than 350 participants included those who identified themselves as Bosniaks (58.9 percent), as Serbs (15.7 percent), and as Croats (6.5 percent). Approximately 13 percent of respondents chose to identify as Bosnians and nearly 5 percent opted to not report their ethnic identification.

The subjects were randomly assigned to two groups: one whose Facebook accounts remained active during the studied period (the control group) and one whose accounts were deactivated during this time. Deactivation was confirmed through the monitoring of users' Facebook URLs; the control group was instructed to continue to use the platform as it normally would.

After the studied period, users filled out a questionnaire in which they were asked not only about attitudes toward those of other ethnic groups in the region (out-groups), but also about their knowledge of current events ("news knowledge") and about their well-being (e.g., feelings of loneliness, isolation, and joy).

Surprisingly, those in the group who deactivated their Facebook accounts reported more negative attitudes about ethnic out-groups than did those in the group who continued to use the platform. In response to these unexpected findings, the researchers turned to a question that had asked participants what they did in the time they were off Facebook.

"The most popular response was that people spent more time with friends and family," said NYU Professor Joshua A. Tucker, a co-author of the study. "This led us to suspect that perhaps our findings were being driven by people who were spending more time offline with people of their own ethnic group."

To test this intuition, the researchers decided to examine whether the effect of Facebook deactivation in driving worse out-group attitudes was more prevalent among people living in ethnically homogenous areas of the country. Notably, this is exactly what they found: these effects were largely concentrated among those who live in more ethnically homogeneous environments--and whose offline environments were therefore likely to be more ethnically homogeneous than their online environments. Moreover, these effects were not found among users living in parts of the country that were more ethnically mixed.
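
A minimal sketch of how such heterogeneity is typically tested, an interaction term between treatment and local homogeneity in an ordinary least squares regression, is shown below. The data are simulated and the variable names hypothetical; this is not the study's code or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate data following the pattern reported in the paper: deactivation
# worsens out-group attitudes mainly in ethnically homogeneous areas. The
# column names and effect sizes are hypothetical.

rng = np.random.default_rng(1)
n = 350
df = pd.DataFrame({
    "deactivated": rng.integers(0, 2, n),      # 1 = account deactivated
    "homogeneity": rng.uniform(0, 1, n),       # ethnic homogeneity of area
})
df["outgroup_attitude"] = (
    3.0 - 0.8 * df["deactivated"] * df["homogeneity"] + rng.normal(0, 0.5, n)
)

# The formula's `*` expands to both main effects plus their interaction;
# the interaction coefficient is the heterogeneity test.
model = smf.ols("outgroup_attitude ~ deactivated * homogeneity", data=df).fit()
print(model.summary().tables[1])
```

A negative, statistically significant interaction coefficient indicates that deactivation lowers out-group regard mainly where homogeneity is high, matching the pattern the authors report.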

"Our research suggests that social media experience can be particularly influential in shaping out-group attitudes where the experiences of offline contact is low, especially in contexts of limited media fragmentation and no language barrier between groups," notes Asimovic. "We should keep in mind that offline environments or states' rhetoric may be as divisive, if not more, than online environments that may still allow people to engage--directly or indirectly--with the out-group."

In addition, they found that Facebook deactivation led to a significant decrease in the levels of news knowledge, but an improvement in users' subjective well-being--consistent with an earlier study of U.S. Facebook users.

"Our findings suggest that simply deactivating from social media is not a panacea to ethnic polarization, especially if the offline environment provides little to no opportunities for positive intergroup contact," says Asimovic. "Given these results, future work should be mindful in making assumptions about social media's impact and consider, with it, contextual factors and opportunities for intergroup contacts."

Credit: 
New York University

Early migrations of Siberians to America tracked using bacterial population structures

International team used the stomach bacterium Helicobacter pylori as a biomarker for ancient human migrations

DNA sequences catalogued at University of Warwick in EnteroBase, a public genomes database, demonstrate that a migration of Siberians to the Americas occurred approximately 12,000 years ago

Project began in the 2000s, but new statistical techniques allowed researchers to reconstruct and date the migrations of Siberian Helicobacter pylori

Early migrations of humans from Siberia to the Americas around 12,000 years ago have been traced by an international team, including scientists at the University of Warwick, using the bacteria those migrants carried.

Analysing samples of the stomach bacterium Helicobacter pylori, which has shared a tight co-evolutionary relationship with humans for at least the past 100,000 years, with new statistical techniques provides evidence that humans colonised the Americas through a pre-Holocene migration of evolutionarily ancient northern Eurasians across the Bering land bridge.

The study entitled "Helicobacter pylori's historical journey through Siberia and the Americas" is published this week (14 June) in the prestigious international journal Proceedings of the National Academy of Sciences of the USA (PNAS) by a team of researchers led by Professor Yoshan Moodley at the University of Venda, South Africa.

The research used genetic information on H. pylori catalogued in EnteroBase at the University of Warwick to trace the evolutionary history of the bacterium. H. pylori is a stomach bacterium that infects approximately half of all people worldwide, and scientists have found that its genetic sequence varies with the region in which it is found.

Previous analyses had identified three populations of H. pylori from individuals in Eurasia and the Americas, and the current data demonstrate that H. pylori from Siberia define additional, previously unknown subpopulations within those groupings. The data also indicated that one of these bacterial populations, which includes H. pylori from indigenous Americans, was distributed across the breadth of Siberia, suggesting that this population may have travelled with humans to the Americas at some point.

However, classical statistical analyses of the sequences were partially inconsistent with each other. To reconstruct the most likely evolutionary history for H. pylori in Siberia, researchers compared the most likely evolutionary models and timings using a technique called approximate Bayesian computation (ABC). The results showed that a tiny population of H. pylori colonised the Americas in a single migration event approximately 12,000 years ago.
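
For readers unfamiliar with ABC, the sketch below shows the core idea in its simplest, rejection-sampling form: draw candidate migration times from a prior, simulate data under each, and keep the candidates whose simulated summary statistic lands near the observed value. The one-parameter mutation model and all numbers are invented for illustration; the study compares far richer demographic models.

```python
import numpy as np

# Toy rejection-ABC estimating a single migration time. Propose times from a
# prior, simulate genetic divergence under each proposal, and accept the
# proposals that reproduce the observed divergence within a tolerance.
# The mutation model and every number below are illustrative inventions.

rng = np.random.default_rng(7)
MUTATION_RATE = 1e-5           # divergence accumulated per year (invented)
observed_divergence = 0.12     # hypothetical observed summary statistic

# Prior: the split happened sometime in the last 30,000 years.
proposals = rng.uniform(0, 30_000, size=200_000)

# Simulated divergence grows with elapsed time, plus sampling noise.
sims = proposals * MUTATION_RATE + rng.normal(0, 0.01, proposals.size)

accepted = proposals[np.abs(sims - observed_divergence) < 0.005]
print(f"posterior mean migration time: {accepted.mean():,.0f} years ago")
```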

Professor Mark Achtman of Warwick Medical School at the University of Warwick, senior co-author on the paper, said: "This project began in the early 2000s, when nothing was known about the genetic diversity of Helicobacter pylori in central Asia. By 2007, hundreds of Siberian H. pylori strains had been cultivated and selected genes had been sequenced. But repeated attempts by multiple talented population geneticists failed to shed light on their evolutionary history.

"This study now uses the powerful approach of ABC statistics to reconstruct and date the migrations of Siberian H. pylori (and their human hosts) across Siberia and to the Americas."

All modern humans originally came from Africa. About 60,000 years ago, small groups of hunter-gatherers left Africa on foot and made their way into Eurasia, where they settled. These were the world's first human migrants. Astonishingly, by the end of the ice age some 50,000 years later, modern humans had already reached the American continent which, travelling over land, is almost as far from Africa as it is possible to get.

These ancient human migrations took place during the last glacial period, or ice age, which lasted from 115,000 to 11,700 years ago. At that time, most of northern Eurasia, also known as Siberia, would have been a frozen wasteland, and presumably inhospitable to long-term human settlement. So how then, did humans manage to migrate across this vast region and find their way to North America? This is one of the most important, and as yet unanswered, questions in human prehistory, because it would explain how humans were able to colonise the whole world from an African origin, in such a short space of time.

The team took the unusual approach of using the DNA of a human stomach bacterium named Helicobacter pylori as a biomarker for ancient human migrations. They successfully collected, sequenced and analysed bacterial strains from indigenous people across Siberia and the Americas. The bacterial DNA sequence database they generated suggested that, remarkably, some groups of humans, known as ancient northern Eurasians, did manage to reside in Siberia throughout the bitter ice age. Yet, other human groups who originally inhabited warmer latitudes in Asia, colonised Siberia after the end of the ice age, leading to the complex mix of human populations we see in that region today.

The team also used their bacterial data set to model human migration into the Americas. It is important to remember that during the ice age, much more water was frozen at the earth's poles, making the sea level at that time over 100 metres lower than the present-day sea level, thus exposing a land bridge between Eurasia and North America and allowing human migration. The team showed that one small group of ancient northern Eurasians managed to successfully cross this land bridge about 12,000 years ago, and this population subsequently expanded to give rise to the indigenous Americans we see today.

Credit: 
University of Warwick

Association between childhood consumption of ultra-processed food, weight in early adulthood

What The Study Did: Researchers examined the association between the amount of ultra-processed food consumed by children and their weight in early adulthood.

Authors: Kiara Chang, Ph.D., of Imperial College London, is the corresponding author.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

(doi:10.1001/jamapediatrics.2021.1573)

Editor's Note: The article includes funding/support disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, conflict of interest and financial disclosures, and funding and support.

Credit: 
JAMA Network

Kirigami-inspired stent offers new drug delivery method for tubular organs

Diseases that affect tubular structures in the body, such as the gastrointestinal (GI) system, vasculature and airway, present a unique challenge for delivering local treatments. Vertically oriented organs, such as the esophagus, and labyrinthine structures, such as the intestine, are difficult to coat with therapeutics, and in many cases, patients are instead prescribed systemic drugs that can have immunosuppressive effects. To improve drug delivery for diseases that affect tubular organs, like eosinophilic esophagitis and inflammatory bowel disease, a multidisciplinary team from Brigham and Women's Hospital, Massachusetts General Hospital and Massachusetts Institute of Technology (MIT) designed a stretchable stent based on the principles of kirigami that is capable of supporting rapid deposition of drug depots. The research is described in Nature Materials.

"We know that injected drugs like steroids can help relieve certain GI conditions, but the challenge is delivering them in a segment of a tubular organ multiple centimeters in length," said corresponding author Giovanni Traverso, MB, BChir, PhD, a gastroenterologist and biomedical engineer in the Brigham's Division of Gastroenterology and the Department of Mechanical Engineering at MIT. "One of the strategies we came up with was a dynamic stent, which can be stretched to change shape and deliver drugs circumferentially and longitudinally to cover the tube."

To design the drug-depositing stent system, the team looked to the principles of kirigami, a Japanese form of paper art similar to origami that includes cutting paper. The researchers previously demonstrated that the buckling properties of kirigami-based designs can be used to engineer footwear outsoles that generate friction to prevent slips and falls. The kirigami stent has a snakeskin-like, cylindrical design that expands to engage pop-out needles, which are controlled by air pressure applied to a soft actuator. This allows for the circumferential delivery of therapeutics into the GI tract, as well as the vasculature and airways. The stent is removed shortly after the delivery of the therapeutic and is not implanted in the body. It can be manufactured in various sizes, and drug delivery can be controlled by varying the thickness of the kirigami shell, needle length and applied pressure.

After refining the mechanics of the kirigami stent, the researchers coated the design with budesonide-loaded polymeric micro-particles to support extended drug delivery, which was then tested in the esophagi of pigs. Budesonide is a drug commonly used to treat a range of gastrointestinal diseases. The kirigami needles were left in their popped-out configuration for two minutes before the stent was removed. When the researchers examined the animals at different times (one, three and seven days after the drug's delivery), they found concentrations of the therapeutic in the animal tissue at all time points, indicating that the delivery system can promote the sustained administration of therapeutics.

"Our simple approach allows us to develop a drug-releasing system that can be applied to various length-scales and be matched with the size of any target tubular organ," said first author Sahab Babaee, PhD, a research affiliate in the Division of Gastroenterology at the Brigham and an MIT research scientist.

The researchers will continue to refine the drug delivery system in animal models and work toward developing it for use in humans. They hope that the system can also be deployed in structures like the trachea and iliac artery, thereby improving the targeted, sustained delivery of therapeutics for a range of diseases.

"The vision here is to think about the long-term release of the drug, so that one day a patient could receive local delivery of a treatment and have therapy for weeks, if not months or even years," Traverso said. "Removing the need to routinely take a prescribed medication, like a steroid or other drug, can really transform the patient experience."

Credit: 
Brigham and Women's Hospital

Stents inspired by paper-cutting art can deliver drugs to the GI tract

image: The device has two key elements -- a soft, stretchy tube made of silicone-based rubber, and a plastic coating etched with needles that pop up when the tube is stretched.

Image: 
MIT

CAMBRIDGE, MA -- Inspired by kirigami, the Japanese art of folding and cutting paper to create three-dimensional structures, MIT engineers and their collaborators have designed a new type of stent that could be used to deliver drugs to the gastrointestinal tract, respiratory tract, or other tubular organs in the body.

The stents are coated in a smooth layer of plastic etched with small "needles" that pop up when the tube is stretched, allowing the needles to penetrate tissue and deliver a payload of drug-containing microparticles. Those drugs are then released over an extended period of time after the stent is removed.

This kind of localized drug delivery could make it easier to treat inflammatory diseases affecting the GI tract such as inflammatory bowel disease or eosinophilic esophagitis, says Giovanni Traverso, an MIT assistant professor of mechanical engineering, a gastroenterologist at Brigham and Women's Hospital, and the senior author of the study.

"This technology could be applied in essentially any tubular organ," Traverso says. "Having the ability to deliver drugs locally, on an infrequent basis, really maximizes the likelihood of helping to resolve patients' conditions and could be transformative in how we think about patient care by enabling local, prolonged drug delivery following a single treatment."

Sahab Babaee, an MIT research scientist, is the lead author of the paper, which appears today in Nature Materials.

Stretchable stents

Inflammatory diseases of the GI tract, such as IBD, are often treated with drugs that dampen the body's immune response. These drugs are usually injected, so they can have side effects elsewhere in the body. Traverso and his colleagues wanted to come up with a way to deliver such drugs directly to the affected tissues, reducing the likelihood of side effects.

Stents could offer a way to deliver drugs to a targeted portion of the digestive tract, but inserting any kind of stent into the GI tract can be tricky because digested food is continuously moving through the tract. To make this possibility more feasible, the MIT team came up with the idea of creating a stent that would be inserted temporarily, lodge firmly into the tissue to deliver its payload, and then be easily removed.

The stent they designed has two key elements -- a soft, stretchy tube made of silicone-based rubber, and a plastic coating etched with needles that pop up when the tube is stretched. The design was inspired by kirigami, a technique that Traverso's lab has previously used to design a nonslip coating for shoe soles. Others have used it to create bandages that stick more securely to knees and other joints.

"The novelty of our approach is that we used tools and concepts from mechanics, combined with bioinspiration from scaly-skinned animals, to develop a new class of drug-releasing systems with the capacity to deposit drug depots directly into luminal walls of tubular organs for extended release," Babaee says. "The kirigami stents were engineered to provide a reversible shape transformation: from flat, to 3D, buckled-out needles for tissue engagement, and then to the original flat shape for easy and safe removal."

In this study, the MIT team coated the plastic needles with microparticles that can carry drugs. After the stent is inserted endoscopically, the endoscope is used to inflate a balloon inside the tube, causing the tube to elongate. As the tube stretches, the pulling motion causes the needles in the plastic to pop up and release their cargo.

"It's a dynamic system where you have a flat surface, and you can create these little needles that pop up and drive into the tissue to do the drug delivery," Traverso says.

For this study, the researchers created kirigami needles of several different sizes and shapes. By varying those features, as well as the thickness of the plastic sheet, the researchers can control how deeply the needles penetrate into the tissue. "The advantage of our system is that it can be applied to various length scales to be matched with the size of the target tubular compartments of the gastrointestinal tract or any tubular organs," Babaee says.

GI drug delivery

The researchers tested the stents by endoscopically inserting them into the esophagus of pigs. Once the stent was in place, the researchers inflated the balloon inside the stent, allowing the needles to pop up. The needles, which penetrated about half a millimeter into the tissue, were coated with microparticles containing a drug called budesonide, a steroid that is used to treat IBD and eosinophilic esophagitis.

Once the drug-containing particles were deposited in the tissue, the researchers deflated the balloon, flattening out the needles so the stent could be endoscopically removed. This process took only a couple of minutes, and the microparticles then stayed in the tissue and gradually released budesonide for about one week.

Depending on the composition of the particles, they could be tuned to release drugs over an even longer period of time, Traverso says. This could make it easier to keep patients on the correct drug schedule, because they would no longer need to take the drug themselves, but would periodically receive their medicine via temporary insertion of the stent. It would also avoid the side effects that can occur with systemic drug administration.

The researchers also showed that they could deliver the stents into blood vessels and the respiratory tract. They are now working on delivering other types of drugs and on scaling up the manufacturing process, with the goal of eventually testing the stents in patients.

Credit: 
Massachusetts Institute of Technology

Two-decade analysis of African neuroscience research prompts calls for greater support

A team of neuroscientists are calling for greater support of neuroscience research in Africa following a long-term analysis of research outputs across the continent.

The findings detail important information about funding and international collaboration, comparing activity on the continent with that in the US, the UK and parts of Europe. It's hoped that the study will provide useful data to help shape and grow science in Africa.

Africa has the world's greatest human genetic diversity, which carries important implications for understanding human diseases, including neurological disorders.

Co-lead senior author Tom Baden, Professor of Neuroscience in the School of Life Sciences and the Sussex Neuroscience research group at the University of Sussex, said: "One beautiful thing about science is that there is no such thing as a truly local problem. But that also means that there should be no such thing as a local solution - research and scientific communication by their very nature must be a global endeavour.

"And yet, currently the vast majority of research across most disciplines is carried out by a relatively small number of countries, located mostly in the global north. This is a huge waste of human potential."

The team, made up of experts from the University of Sussex, the Francis Crick Institute and institutions from across Africa, analysed all of the continent's neuroscience outputs over two decades, thoroughly curating local and international collaborations, research citations, visibility and funding.

Lead author Mahmoud Bukar Maina, a Research Fellow in the School of Life Sciences and the Sussex Neuroscience research group at the University of Sussex and visiting scientist at Yobe State University, Nigeria, said: "Even though early progress in neuroscience began in Egypt, Africa's research in this area has not kept pace with developments in the field around the world. There are a number of reasons behind this and, for the first time, our work has provided a clear picture of why - covering both strengths and weaknesses of neuroscience research in Africa and comparing this to other continents.

"We hope it will provide useful data to guide governments, funders and other stakeholders in helping to shape science in Africa, and combat the 'brain drain' from the region."

Co-lead senior author Lucia Prieto-Godino, a Group Leader at the Francis Crick Institute, said: "One of the reasons why this work is so important is that the first step to solving any problem is understanding it. Here we analyse key features and the evolution of neuroscience publications across all 54 African countries, and put them in a global context. This highlights strengths and weaknesses, and informs which aspects will be key in the future to support the growth and global integration of neuroscience research in the continent."

The study, published in Nature Communications, clearly details the African countries with the highest research outputs and reveals that the majority of research funding comes from external sources such as the USA and UK.

The researchers argue that local funding is vital in order to establish a sustainable African neuroscience research environment, suggesting greater government backing as well as support from the philanthropic sector.

Professor Baden added: "One pervasive problem highlighted in our research was the marked absence of domestic funding. In most African countries, international funding far predominates. This is doubly problematic.

"Firstly, it takes away the crucial funding stability that African researchers would need to meaningfully embark on large-scale and long-term research projects, and secondly, it means that the international, non-African funders essentially end up deciding what research is performed across the continent. Such as system would generate profound outrage across places like Europe - how then can it be acceptable for Africa?"

Credit: 
University of Sussex

Scientists expose the cold heart of landfalling hurricanes

video: The video shows the temperature difference of air inside a hurricane, relative to the air temperature in the surroundings. At the time of landfall, the warm heart encompasses the entire height of the hurricane, but over time, the warm core shrinks as the cold core grows upwards. These findings were reported by Professor Pinaki Chakraborty and Dr. Lin Li in a study in Physical Review Fluids.

Image: 
OIST

Hurricanes that make landfall typically decay but sometimes transition into extratropical cyclones and re-intensify, causing widespread damage to inland communities

The presence of a cold core is currently used to identify this transition, but a new study has now found that a cold core naturally forms in all landfalling hurricanes

The cold core was detected when scientists ran simulations of landfalling hurricanes that accounted for moisture stored within the cyclone

Over time, the scientists saw a cold core growing from the bottom of the hurricane, replacing the warm core

The research could help forecasters make more accurate predictions on whether communities farther inland will be impacted by these extreme weather events

Hurricanes are powerful weather events born in the open sea. Fueled by moisture from the warm ocean, hurricanes can intensify in strength, move vast distances across the water, and ultimately unleash their destruction upon land. But what happens to hurricanes after they've made landfall remains an open question.

Now, a recent study in Physical Review Fluids has used simulations to explore the fate of landfalling hurricanes. The scientists found that after landfall, the warm, dynamic heart of a hurricane is replaced by a growing cold core - an unexpected finding that could help forecasters predict the level of extreme weather that communities farther inland may face.

"Generally, if a hurricane hits land, it weakens and dies," said Professor Pinaki Chakraborty, senior author and head of the Fluid Mechanics Unit at the Okinawa Institute of Science and Technology Graduate University (OIST). "But sometimes, a hurricane can intensify again deep inland, creating a lot of destruction, like flooding, in communities far away from the coast. So, predicting the course that a hurricane will take is crucial."

These re-intensification events occur when hurricanes, also known as tropical cyclones or typhoons in other global regions, transition into extratropical cyclones: storms that occur outside the Earth's tropics. Unlike tropical cyclones that harness their strength from ocean moisture, extratropical cyclones gain their energy due to unstable conditions in the surrounding atmosphere. This instability comes in the form of weather fronts - boundaries that separate warmer, lighter air from colder, denser air.

"Weather fronts are always unstable, but the release of energy is typically very slow. When a hurricane comes, it can disturb the front and trigger a faster release of energy that allows the storm to intensify again," said first author Dr. Lin Li, a former PhD student in Prof. Chakraborty's unit.

However, predicting if this transition will occur is challenging for weather forecasters as hurricanes must interact with this front in a specific and complex way. Currently, forecasters use one key characteristic to objectively identify this transition: the presence of a cold core within a landfalling hurricane, caused by an inward rush of cold air from the weather front.

Yet when Prof. Chakraborty and Dr. Li simulated what happens to hurricanes after they hit land, they found that a cold core was present in all landfalling hurricanes, growing upwards from the bottom of the hurricanes as they decayed, despite a stable atmosphere with no weather fronts.

"This appears to be a natural consequence of when a hurricane makes landfall and starts to decay," said Dr. Li.

Previous theoretical models of landfalling hurricanes missed the growing cold core as they didn't account for the moisture stored within landfalling hurricanes, explained the researchers.

Prof. Chakraborty said, "Once hurricanes move over land and lose their moisture supply, models typically viewed them as just a spinning, dry vortex of air, which like swirling tea in a cup, rubs over the surface of land and slows down due to friction."

However, the store of moisture within landfalling hurricanes means that thermodynamics still plays a critical role in how they decay.

In hurricanes over the warm ocean, the air that enters the hurricane is nearly saturated with moisture. As this air rises, it expands and cools, which lowers the amount of water vapor each "parcel" of air can hold. The water vapor within each air parcel therefore condenses, releasing heat. As a result, these air parcels cool more slowly than the surrounding air outside the hurricane, generating a warm core.

But once a hurricane hits land, the air entering the hurricane contains less moisture. As these air parcels rise, they must travel higher before they reach a temperature cool enough for the water vapor to condense, delaying the release of heat. This means that at the bottom of the hurricane, where all the air parcels are moving upwards, it is comparatively cooler than the surrounding atmosphere, where air parcels move randomly in all directions, resulting in a cold core.
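
A back-of-envelope calculation with Espy's approximation for the lifting condensation level, roughly 125 meters of ascent for each degree Celsius of dewpoint depression, illustrates the effect; the temperatures below are illustrative, not values from the study's simulations.

```python
# Back-of-envelope illustration of why drier inflow delays condensation:
# by Espy's approximation, a parcel must rise about 125 m for every 1 degC
# of dewpoint depression before its water vapor condenses. The temperatures
# are illustrative and not taken from the paper's simulations.

def lcl_height_m(temp_c, dewpoint_c):
    """Approximate lifting condensation level (m) via Espy's equation."""
    return 125.0 * (temp_c - dewpoint_c)

# Over warm ocean: near-saturated inflow, small dewpoint depression.
print("over ocean:", lcl_height_m(28.0, 26.0), "m of ascent before condensation")

# After landfall: drier inflow, larger dewpoint depression. The parcel climbs
# much higher before releasing latent heat, so the layer below stays cooler
# than its surroundings -- the seed of the cold core.
print("over land :", lcl_height_m(28.0, 16.0), "m of ascent before condensation")
```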

"As the hurricane keeps decaying, it eats up more and more of the moisture stored within the hurricane, so the air parcels must rise even higher before condensation occurs," said Dr. Li. "So over time, the cold core grows and the warm core shrinks."

The researchers hope that better understanding of cold cores could help forecasters more accurately distinguish between decaying hurricanes and ones transitioning into extratropical cyclones.

"It's no longer as simple as hurricanes having a warm core and extratropical cyclones having a cold core," said Prof. Chakraborty. "But in decaying hurricanes, the cold core we see is restricted to the lower half of the cyclone, whereas in an extratropical cyclone, the cold core spans the whole hurricane - that's the signature that forecasters need to look for."

Credit: 
Okinawa Institute of Science and Technology (OIST) Graduate University

New research shines light on perovskite solar cell performance

The potential of a class of materials called perovskites to enable solar cells to better absorb sunlight for energy production is widely known. However, this potential has yet to be fully realised, particularly under real-world operating conditions.

New research published today in the prestigious journal Nature Energy has revealed defects in a popular perovskite light absorber that impede solar cell performance. The researchers found that a change in the nature and density of these 'intragrain planar defects' correlated with a change in solar cell performance.

The discovery by an international team of researchers, led by Monash University and Wuhan University of Technology, could lead to improved solar cell technology and provide another step towards reducing the use of fossil fuels for energy.

Perovskite light absorbers have the potential to improve the efficiency of established silicon solar cells by adding an additional layer that can absorb colours, or parts of the energy spectrum, of sunlight which current silicon solar cells cannot.

The highest possible efficiency of silicon solar cells is around 32 per cent, meaning only about 32 per cent of the energy available in sunlight can be captured by silicon solar cells.

Placing such a perovskite solar cell on top of a silicon solar cell, known as a tandem solar cell, can effectively boost the overall performance of the stack up to roughly 42 per cent.

Since small changes to the perovskite composition can tune the absorption spectrum of perovskite solar cells relatively easily, it is possible to create a perovskite solar cell that absorbs the higher energy light but lets the lower energy light pass through.

The research team used the imaging and diffraction protocol developed at the Monash Centre for Electron Microscopy (MCEM) to study the crystal structure of a range of perovskite solar cell materials in their pristine state.

Lead corresponding author, Professor Joanne Etheridge, Director of the MCEM and Professor in the Department of Materials Science and Engineering, said disruptions in the periodic crystal structure can have a strong influence on the material's electronic properties.

"Being able to map the local crystal structure of a thin film of perovskite light absorber and correlate this with the overall solar cell device performance provides exciting new insights into how device performance can be improved," Professor Etheridge said.

Lead author, Dr Wei Li from the Wuhan University of Technology said: "To make a good solar cell, a material must be able to transform sunlight into electricity efficiently and do so outdoors for many decades.

"Producing electricity from light involves absorbing photons to generate excited electrons, separating these electrons from the holes they left behind before they recombine, and finally extracting the separated electrons and holes in an external circuit.

"How these charge carriers behave within a crystalline semiconductor, and subsequently affect the overall performance of a solar cell, strongly depends on the crystallographic properties of the material."

The research team was able to control the presence of certain types of crystal defects - intragrain planar defects - by tuning the chemical composition of the perovskite films.

Planar defects are imperfections in the arrangement of atoms that occur on certain crystal planes.

These imperfections break the otherwise continuously repeating arrangement of atoms in a crystal lattice. Intragrain planar defects are a special type of disruption to the arrangement of atoms in the perovskite material.

The type and density of these defects in MA₁₋ₓFAₓPbI₃ (a perovskite used in solar cells) were changed by tuning the ratio of the small methylammonium (MA) molecules to the large formamidinium (FA) molecules.

The compositions that contained no intragrain planar defects had the best solar cell performance. This research suggests that such crystal defects can have an important influence on perovskite solar cells and may be a factor limiting their current performance.

Joint lead author, Dr Mathias Rothmann, conducted part of this work during his PhD at Monash University and is now continuing work on perovskite solar cells at Oxford University. Dr Rothmann said this information opens new avenues for improving perovskite solar cell performance.

"For example, a blacksmith introduces defects in hot steel by hammering it, locking the defects in by quenching the steel in water, making the steel harder but less malleable as a result. In conventional silicon solar cells, however, defects are often associated with shorter charge carrier diffusion lengths and lifetimes, resulting in lower power conversion efficiencies," he said.

"We found a similar reduction in performance with the presence of these intragrain planar defects in these perovskite solar cells. We hope that our work can make a contribution towards a fossil fuel-free future based on the abundant availability of sunlight."

Credit: 
Monash University

Trees, plants and soil could help cities cut their carbon footprints -- when used smartly

Cities and nations around the globe are shooting for carbon neutrality, with some experts already talking about the need to ultimately reach carbon negativity. Carbon footprint declarations are used in construction to ease product selection for low carbon building, but these standards don't yet exist for green elements like soil, bushes and plants. A new study led by Aalto University is the first to map out how green infrastructure can be a resource for cities on the path to carbon neutrality.

The study, done in collaboration with the Natural Resources Institute Finland (Luke) and the University of Helsinki, charted out the lifecycle phases of plants, soils and mulches to determine the basic considerations needed to create standards for products commonly used in green urban spaces.

'Green infrastructure is a building block of cities, yet its products haven't yet been systematically assessed for their carbon storage potential. We're now starting to better understand the great importance of these nature-based solutions. Standards for these commonly used products would help us not only better plan our cities, but also help us reach carbon neutrality,' says Matti Kuittinen, an adjunct professor at Aalto University.

In their study, the team identified the existing carbon footprint standards, widely used in the construction industry, that would need development if applied to green infrastructure. To do so, they compared the flows of carbon in soils, mulches and plants over their lifespans. The team then tried to translate these carbon flows into the standardized reporting format used for conventional building products.

'One of the main challenges in assessing the carbon storage potential of plants is that the product you buy changes over time. If you install 50 bricks in a building and remove them in a decade, you still have 50 bricks. If you plant 20 seedlings, in ten years' time you might have 30 large bushes thanks to growth and spread,' explains Kuittinen.
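
That moving target is the crux of the standardization problem. As a toy illustration, the sketch below contrasts a product with fixed stored carbon against planted stock whose storage grows over the assessment period; the logistic growth curve and survival rate are invented placeholders, not values from the study.

```python
import numpy as np

# Toy contrast behind the bricks-versus-seedlings example: a conventional
# product's stored carbon is fixed at installation, while planted stock
# changes over the assessment period. All parameters are invented.

years = np.arange(0, 31)

brick_carbon = np.full(years.size, 10.0)   # kg C, constant over the life cycle

def shrub_carbon(t, k_max=3.0, rate=0.25, t_mid=8.0):
    """Logistic growth of carbon stored per established shrub (kg C)."""
    return k_max / (1.0 + np.exp(-rate * (t - t_mid)))

n_planted = 20
establishment = 0.9                        # assumed planting losses
shrub_stock = n_planted * establishment * shrub_carbon(years)

for t in (0, 10, 30):
    print(f"year {t:2d}: bricks {brick_carbon[t]:5.1f} kg C, "
          f"shrubs {shrub_stock[t]:5.1f} kg C")
```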

The recommendations made in the study provide a concrete basis for developing global and regional -- for example, European Union -- standards for green infrastructure. The aim is to ensure that claims of carbon storage hold true, and eventually to give landscape designers a tool to help plan new areas or refurbish existing urban spaces.

The recommendations are particularly relevant for countries and regions like the Nordics, where nature has been traditionally integrated into urban landscapes. However, they can also help other areas meet their carbon targets.

'Cities need to take all kinds of actions to reach carbon neutrality. The benefit of green infrastructure is that once we know its carbon footprint, it doesn't require new, expensive technology; it's a simple, wide-reaching solution that can make real impact. This is an area that needs real attention from decision-makers in the European Union and elsewhere,' says Kuittinen.

Researchers at Aalto University, together with consortium partners of the Co-Carbon project, are currently starting field tests to determine the exact carbon sequestration potential of plants at various stages of growth. While the carbon storage potential of trees is relatively well known, the study is set to be the first to focus on plants and bushes, elements commonly used in urban landscaping. At Luke, researchers are developing a tool to model the changes in carbon storage of plants and soil at the regional level due to land-use changes. Such a tool could help planners target and maintain existing carbon storage in plants and soil.

The open-access study is published in the International Journal of Life Cycle Assessment.

Credit: 
Aalto University

Aquaponics treatment system inspired by sewage plants grows tastier crops and keeps fish healthy

A current challenge for sustainable aquaculture is how to increase the quantity of farmed fish while reducing the waste products that accumulate as harmful fish sludge. New research investigates how this fish waste can be treated for use in aquaponics systems by removing excess carbon while preserving the mineral nutrients that plants need to grow.

In this study in Frontiers in Plant Science, researchers from the Department of Marine Sciences at the University of Gothenburg, Sweden, demonstrate a novel and effective way to convert this fish sludge into plant fertilizer, thereby improving the nutrients available to plants in hydroponic cultivation. The lead author, Mr Victor Lobanov, explains:

"Fish sludge is a waste product made up of uneaten food and fish feces and is normally broken down by bacteria in the water. In addition to physically harming fish gills, excess carbon in the solids leads to excessive bacterial growth - diminishing oxygen in the water and hampering the ability of the fish to breathe. We wanted to find out whether this waste could be used to fertilize plants in aquaponics systems by removing the excessive carbon, yet preserving the minerals needed for growing crops."

Fish waste as fertilizer

The researchers investigated a potential solution inspired by sewage and wastewater treatment plants found around the world, called enhanced biological phosphorus removal (EBPR). The researchers adapted this process so that the risk of bacterial build-up in the water was reduced, while the minerals from the fish waste remained soluble in the water and therefore biologically available for plants to take up.
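The principle, though not the biology, can be illustrated with a simple mass balance: the treatment removes most of the organic carbon while releasing the mineral nutrients into solution. The sketch below is a hypothetical illustration; the sludge composition and removal fractions are invented, not measured values from the study.

```python
# Hypothetical mass balance for a sludge treatment step: remove most of the
# organic carbon, keep minerals dissolved and plant-available.
# All compositions and fractions are invented for illustration.

sludge_g = {"C": 100.0, "N": 12.0, "P": 3.0, "K": 4.0}  # assumed composition

def treat(sludge: dict, carbon_removal: float = 0.85,
          mineral_solubilisation: float = 0.9) -> dict:
    """Return the dissolved fraction left after treatment."""
    dissolved = {}
    for element, grams in sludge.items():
        if element == "C":
            # Most carbon is removed, curbing bacterial growth downstream.
            dissolved[element] = grams * (1 - carbon_removal)
        else:
            # Minerals are largely released into solution as liquid fertilizer.
            dissolved[element] = grams * mineral_solubilisation
    return dissolved

print(treat(sludge_g))  # {'C': 15.0, 'N': 10.8, 'P': 2.7, 'K': 3.6}
```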

They found that the solids treatment system they developed delivered nutrients from the fish waste to the aquaponic system, in the form of a liquid fertilizer, just as efficiently as a commercial nutrient solution. Although the fertilizer did not entirely meet the plants' needs, since some nutrients such as manganese were missing, the researchers hope to optimize the system in future studies:

"Hopefully we can scale the system more efficiently in the future, not just for lettuce as used in this study but as well as for other plants, with the right number of fish corresponding to the size of the system. By further optimizing the breakdown of fish solids by the solid treatment system, we can also achieve a faster treatment rate and make the whole process more efficient," explains Mr Lobanov.

Commercial fertilizer solutions often contain very high levels of nitrogen, which stimulate crops to swell and absorb large amounts of water, giving the appearance of improved growth while often decreasing the mineral content of the plant. Although the fertilizer created by the solids treatment system contained lower levels of nitrogen than commercially available chemical fertilizers, the plants were not nutrient deficient. This suggests that the high nitrogen levels commonly used are in excess of what plants need. The authors hope that this finding will stimulate further research into the connections between plant nutrients, health and taste:

"Our work shows that this type of cultivation is not only more sustainable, but it is also capable of providing nutrients in a form that is easily accessible to plants. Farmers can take this system and optimize it for their specific crops and production volumes, potentially even supplementing with additional nutrients if required."

Credit: 
Frontiers

Making a meal of DNA in the seafloor

DNA is an abundant and nutritious food source for microbes

The diet of microbes is vast: they are able to use many different molecules as nutrients, including biomolecules such as the proteins and lipids of dead and decaying organisms. This includes so-called extracellular DNA, molecules that are not, or are no longer, contained in intact cells. "From the bacteria's perspective, DNA is particularly nutritious," says Kenneth Wasmund, a microbiologist at the Centre for Microbiology and Environmental Systems Science (CMESS) at the University of Vienna and lead author of the study. "It's essentially a fertilizer. After all, it is a chain of millions of pieces of sugar and phosphorus- and nitrogen-containing bases." Extracellular DNA is common in the environment because when any organism dies, its contents, including its DNA, are released into the surroundings. The microbes that degrade such abundant biomolecules are critical for global biogeochemical cycles, as they recycle organic material settling from ocean waters and thereby influence how much carbon ultimately remains in the ocean floor. Yet not all microbes are capable of using DNA as a nutrient.

Marine sediments are a massive habitat for undescribed microbes

The muddy sediments of the sea floor are a massive global habitat for these ecologically important microorganisms; after all, our oceans cover more than 70 percent of the Earth's surface. Thousands of microbial species live here, most of which are still largely unknown. "Our study identifies some of these microbial players and reveals their lifestyles. At the same time, it tells us something about what happens to the vast amounts of DNA that are constantly released into the environment but do not accumulate anywhere and, accordingly, are obviously somehow being recycled," Kenneth Wasmund explains. Previous research has shown that microorganisms grown in the laboratory can use DNA as an energy source. "Our research has now focused on microbes that actually live and actively function in the seafloor, while using DNA as a food source," he adds.

Deciphering bacteria that use DNA for food by functional microbiome analyses

To this end, colleagues from the University of Calgary in Canada collected samples from the seafloor of Baffin Bay, a marginal sea of the Atlantic Ocean between Greenland and Canada. To identify and characterise DNA-foraging microbes in these samples, the research team used an array of experimental, analytical, and bioinformatic methods. "In this collaboration of all four divisions at CMESS, we made full use of the excellent research infrastructure and unleashed the full expertise for functional microbiome analyses that is present at our Centre," says Alexander Loy, head of the research group at the University of Vienna.

In laboratory incubations, the researchers fed purified DNA isotopically labelled with heavy carbon atoms (13C) to the sediment bacteria. Using stable isotope probing, including a specific isotope imaging technique, they were then able to track the heavy carbon and thus see which bacteria degraded the labelled DNA. In addition, the scientists reconstructed the genetic information present in the cells, i.e. the genomes, of the DNA-eating microorganisms to learn about their functional potential and distribution in the world's oceans.
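The logic of that labelling step can be sketched in a few lines. The snippet below is a simplification, not the authors' analysis pipeline: it flags taxa whose DNA carries substantially more 13C than the natural background, the signature of having built biomass from the labelled DNA. The measurements and the cut-off are invented.

```python
# Simplified stable-isotope-probing logic: taxa enriched in 13C above the
# natural background must have incorporated the labelled substrate.
# Measurements and the enrichment cut-off are invented for illustration.

NATURAL_13C = 0.011  # natural 13C atom fraction, about 1.1 %

def atom_fraction_excess(measured: float, control: float = NATURAL_13C) -> float:
    """13C atom fraction excess of a taxon's DNA over the unlabelled control."""
    return measured - control

# Hypothetical per-taxon 13C atom fractions after incubation with 13C-DNA
measurements = {"Izemoplasma": 0.32, "taxon_B": 0.012, "taxon_C": 0.19}

CUTOFF = 0.05  # assumed threshold for calling a taxon a DNA consumer
consumers = {taxon: round(atom_fraction_excess(frac), 3)
             for taxon, frac in measurements.items()
             if atom_fraction_excess(frac) > CUTOFF}
print(consumers)  # {'Izemoplasma': 0.309, 'taxon_C': 0.179}
```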

Novel DNA-eating bacteria in the seafloor

The metagenomic analysis showed that the bacteria were equipped with DNA-degrading enzymes that enable them to chop up DNA into small pieces, helping them take it up and consume it. One bacterial species stood out, as it had a particularly sophisticated set of tools for degrading DNA. Its appetite for DNA, also called nucleic acid, is now reflected in its name: the research team named it Izemoplasma acidinucleici.

Credit: 
University of Vienna