Earth

Climate change can destabilize the global soil carbon reservoir, new study finds

image: A new study from scientists at WHOI and other institutions shows that climate change can destabilize the global soil carbon reservoir. (Narayani River in the Himalayas, a tributary to the Ganges River.)

Image: 
© Valier Galy/Woods Hole Oceanographic Institution

The vast reservoir of carbon that is stored in soils probably is more sensitive to destabilization from climate change than has previously been assumed, according to a new study by researchers at WHOI and other institutions.

The study found that the biospheric carbon turnover within river basins is vulnerable to future temperature and precipitation perturbations from a changing climate.

Although many earlier, and fairly localized, studies have hinted at soil organic carbon sensitivity to climate change, the new research sampled 36 rivers from around the globe and provides evidence of sensitivity at a global scale.

"The study results indicate that at the large ecosystem scale of river basins, soil carbon is sensitive to climate variability," said WHOI researcher Timothy Eglinton, co-lead author of the paper in the Proceedings of the National Academy of Sciences of the United States of America. "This means that changing climate, and particularly increasing temperature and an invigorated hydrological cycle, may have a positive feedback in terms of returning carbon to the atmosphere from previously stabilized pools of carbon in soils."

The public is generally aware that climate change can potentially destabilize and release permafrost carbon into the atmosphere and exacerbate global warming. But the study shows that this is true for the entire soil carbon reservoir, said WHOI researcher Valier Galy, the other co-lead author of the study.

The soil carbon reservoir helps keep the amount of carbon dioxide in the atmosphere in check. Terrestrial vegetation and soils store roughly three times as much carbon as the atmosphere holds, and the terrestrial biosphere takes up more than a third of the anthropogenic carbon emitted to the atmosphere.

To determine the sensitivity of terrestrial carbon to destabilization from climate change, researchers measured the radiocarbon age of some specific organic compounds from the mouths of a diverse set of rivers. Those rivers--including the Amazon, Ganges, Yangtze, Congo, Danube, and Mississippi--account for a large fraction of the global discharge of water, sediments and carbon from rivers to the oceans.

Terrestrial carbon, however, is not so simple to isolate and measure. That's because carbon in rivers comes from a variety of sources, including rocks, vegetation, and organic contaminants such as domestic sewage or petroleum, all of which differ widely in age. To determine what's happening within the rivers' watersheds, and to measure radiocarbon from the terrestrial biosphere, the researchers focused on two groups of compounds: plant leaf waxes, which protect the leaf surface, and lignin, the woody "scaffolding" of land plants.

These measurements revealed a relationship between the age of the terrestrial carbon in the rivers and the latitude of the river basins. That latitudinal relationship led the researchers to infer that climate must be a key control on the age of the carbon exported from the terrestrial biosphere to these rivers, with temperature and precipitation as the primary controls.
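The release doesn't give the measured values, but converting a measured radiocarbon content into an age follows a standard convention; a minimal sketch in Python (the F14C value below is purely illustrative):

```python
import math

def conventional_radiocarbon_age(f14c: float) -> float:
    """Convert a measured fraction-modern value (F14C) into a conventional
    radiocarbon age in years before present, using the Libby mean life of 8033 years."""
    return -8033.0 * math.log(f14c)

# Example: a compound retaining 80% of modern 14C activity (illustrative number)
print(round(conventional_radiocarbon_age(0.80)))  # ~1793 years BP
```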

"Why this study is powerful is because this large number of rivers, the wide coverage, and the wide range of catchment properties give a very clear picture of what's happening at the global scale," said Galy. "You could imagine that by going after lots of rivers, we would have ended up with a very complicated story. However, as we kept adding new river systems to the study, the story was fairly consistent."

"In many respects, Earth scientists see rivers as being a source signal that is sent to sedimentary records that we can interpret," said Eglinton. "By going to sedimentary records, we have the opportunity to look at how the terrestrial biosphere has responded to climate variability in the past. In addition, by monitoring rivers in the present day, we can also use them as sentinels in order to assess how these watersheds may be changing."

Credit: 
Woods Hole Oceanographic Institution

Sea-level rise in 20th century was fastest in 2,000 years along much of East Coast

image: Sea-level rise leads to increased flooding at the Edwin B. Forsythe National Wildlife Refuge in New Jersey. The photos show approximately the same view in October (left) and September 2016. Both photos show the same round hill in the upper left corner.

Image: 
Jennifer S. Walker, Rutgers University

The rate of sea-level rise in the 20th century along much of the U.S. Atlantic coast was the fastest in 2,000 years, and southern New Jersey had the fastest rates, according to a Rutgers-led study.

Global sea-level rise from melting ice and warming oceans from 1900 to 2000 proceeded at more than twice the average rate for the years 0 to 1800 - the most significant contributor to the change, according to the study in the journal Nature Communications.

For the first time, the study examined the phenomena that contributed to sea-level change over 2,000 years at six sites along the coast (in Connecticut, New York City, New Jersey and North Carolina) using a sea-level budget, which enhances understanding of the processes driving sea-level change. Those processes are global, regional (including geological factors such as land subsidence) and local, such as groundwater withdrawal.

"Having a thorough understanding of sea-level change at sites over the long-term is imperative for regional and local planning and responding to future sea-level rise," said lead author Jennifer S. Walker, a postdoctoral associate in the Department of Earth and Planetary Sciences in the School of Arts and Sciences at Rutgers University-New Brunswick. "By learning how different processes vary over time and contribute to sea-level change, we can more accurately estimate future contributions at specific sites."

Sea-level rise stemming from climate change threatens to permanently inundate low-lying islands, cities and lands. It also heightens their vulnerability to flooding and damage from coastal and other storms.

Most sea-level budget studies are global and limited to the 20th and 21st centuries. The Rutgers-led researchers instead estimated sea-level budgets spanning 2,000 years, with the goal of better understanding how the processes driving sea level have changed and could shape future change. The sea-level budget method could also be applied to other sites around the world.

Using a statistical model, scientists developed sea-level budgets for six sites, dividing sea-level records into global, regional and local components. They found that regional land subsidence - sinking of the land since the Laurentide ice sheet retreated thousands of years ago - dominates each site's budget over the last 2,000 years. Other regional factors, such as ocean dynamics, and site-specific local processes, such as groundwater withdrawal that helps cause land to sink, contribute much less to each budget and vary over time and by location.

The total rate of sea-level rise for each of the six sites in the 20th century (ranging from 2.6 to 3.6 millimeters per year, or about 1 to 1.4 inches per decade) was the fastest in 2,000 years. Southern New Jersey had the fastest rates over the 2,000-year period: 1.6 millimeters a year (about 0.63 inches per decade) at Edwin Forsythe National Wildlife Refuge, Leeds Point, in Atlantic County and 1.5 millimeters a year (about 0.6 inches per decade) at Cape May Court House, Cape May County. Other sites included East River Marsh in Guilford, Connecticut; Pelham Bay, The Bronx, New York City; Cheesequake State Park in Old Bridge, New Jersey; and Roanoke Island in North Carolina.
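For reference, the unit conversions quoted above can be reproduced in a couple of lines of Python:

```python
MM_PER_INCH = 25.4

def mm_per_year_to_inches_per_decade(rate_mm_yr: float) -> float:
    """Convert a sea-level-rise rate from millimeters per year to inches per decade."""
    return rate_mm_yr * 10 / MM_PER_INCH

for rate in (2.6, 3.6, 1.6, 1.5):  # rates quoted in the study
    print(f"{rate} mm/yr = {mm_per_year_to_inches_per_decade(rate):.2f} in/decade")
# 2.6 -> 1.02, 3.6 -> 1.42, 1.6 -> 0.63, 1.5 -> 0.59 inches per decade
```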

Credit: 
Rutgers University

Why commercialization of carbon capture and sequestration has failed and how it can work

There are 12 essential attributes that explain why commercial carbon capture and sequestration projects succeed or fail in the U.S., University of California San Diego researchers say in a recent study published in Environmental Research Letters.

Carbon capture and sequestration (CCS) has become increasingly important in addressing climate change. The Intergovernmental Panel on Climate Change (IPCC) relies heavily on the technology in its pathways for reaching zero carbon at low cost. Additionally, it is among the few low-carbon technologies in President Joseph R. Biden's proposed $400 billion clean energy plan that earn bipartisan support.

In the last two decades, private industry and government have invested tens of billions of dollars to capture CO2 from dozens of industrial and power plant sources. Despite the extensive support, these projects have largely failed. In fact, 80 percent of projects that seek to commercialize carbon capture and sequestration technology have ended in failure.

"Instead of relying on case studies, we decided that we needed to develop new methods to systematically explain the variation in project outcome of why do so many projects fail," said lead author Ahmed Y. Abdulla, research fellow with UC San Diego's Deep Decarbonization Initiative and assistant professor of mechanical and aerospace engineering at Carleton University. "Knowing which features of CCS projects have been most responsible for past successes and failures allows developers to not only avoid past mistakes, but also identify clusters of existing, near-term CCS projects that are more likely to succeed."

He added, "By considering the largest sample of U.S. CCS projects ever studied, and with extensive support from people who managed these projects in the past, we essentially created a checklist of attributes that matter and gauged the extent to which each does."

Credibility of incentives and revenues is key

The researchers found that the credibility of revenues and incentives--functions of policy and politics--are among the most important attributes, along with capital cost and technological readiness, which have been studied extensively in the past.

"Policy design is essential to help commercialize the industry because CCS projects require a huge amount of capital up front," the authors, comprised of an international team of researchers, note.

The authors point to existing credible policies that act as incentives, such as the 2018 expansion of the 45Q tax credit. It provides companies with a guaranteed revenue stream if they sequester CO2 in deep geologic repositories.

The only major incentive companies have had thus far to recoup their investments in carbon capture is by selling the CO2 to oil and gas companies, who then inject it into oil fields to enhance the rate of extraction--a process referred to as enhanced oil recovery.

The 45Q tax credit also incentivizes enhanced oil recovery, but at a lower price per CO2 unit, compared to dedicated geologic CO2 storage.

Beyond selling to oil and gas companies, CO2 is not exactly a valuable commodity, so few viable business cases exist to sustain a CCS industry on the scale that is necessary or envisioned to stabilize the climate.

"If designed explicitly to address credibility, public policy could have a huge impact on the success of projects," said David Victor, co-lead of the Deep Decarbonization Initiative and professor of industrial innovation at UC San Diego's School of Global Policy and Strategy.

Results with expert advice from project managers with real-world experience

While technological readiness has been studied extensively and is essential to reducing the cost and risk of CCS, the researchers looked beyond the engineering and engineering economics to determine why CCS continues to be such a risky investment. Over the course of two years, the researchers analyzed publicly available records of 39 U.S. projects and sought expertise from CCS project managers with extensive, real-world experience.

They identified 12 possible determinants of project outcomes, which are technological readiness, credibility of incentives, financial credibility, cost, regulatory challenges, burden of CO2 removal, industrial stakeholder opposition, public opposition, population proximity, employment impact, plant location, and the host state's appetite for fossil infrastructure development.

To evaluate the relative influence of the 12 factors in explaining project outcomes, the researchers built two statistical models and complemented their empirical analysis with a model derived through expert assessment.
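The paper's actual statistical models aren't reproduced in this release, but the general approach - relating binary project outcomes to scored attributes - can be sketched along the following lines (the attribute names, scores and outcomes below are hypothetical placeholders, not the study's 39 projects):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical 0-1 scores for three of the twelve attributes named in the study;
# rows are imaginary projects, NOT the real projects analyzed by the authors.
X = np.array([
    [0.9, 0.8, 0.7],   # credible incentives, financial credibility, tech readiness
    [0.2, 0.3, 0.9],
    [0.8, 0.6, 0.4],
    [0.1, 0.2, 0.5],
])
y = np.array([1, 0, 1, 0])  # 1 = project completed, 0 = failed (illustrative)

model = LogisticRegression().fit(X, y)
# Coefficient magnitudes give a crude sense of each attribute's relative influence.
print(dict(zip(["incentive_credibility", "financial_credibility", "tech_readiness"],
               model.coef_[0].round(2))))
```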

The experts only underscored the importance of credibility of revenues and incentives; the vast majority of successful projects arranged in advance to sell their captured CO2 for enhanced oil recovery. They secured unconditional incentives upfront, boosting perceptions that they were resting on secure financial footing.

The authors conclude that the models in the study--especially when augmented with the structured elicitation of expert judgment--can likely improve representations of CCS deployment across energy systems.

"Assessments like ours empower both developers and policymakers," the authors write. "With data to identify near-term CCS projects that are more likely to succeed, these projects will become the seeds from which a new CCS industry sprouts."

Credit: 
University of California - San Diego

UMass Amherst researchers develop ultra-sensitive flow microsensors

image: Jinglei Ping is an assistant professor of mechanical and industrial engineering at UMass Amherst.

Image: 
UMass Amherst

A team of scientists at the University of Massachusetts Amherst has developed the thinnest and most sensitive flow sensor to date, which could have significant implications for medical research and applications, according to new research published recently in Nature Communications.

The research was led by Jinglei Ping, assistant professor of mechanical and industrial engineering, along with a trio of mechanical engineering Ph.D. students: Xiaoyu Zhang, who fabricated the sensor and made the measurements, Eric Chia and Xiao Fan. The findings pave the way for future research on all-electronic, in-vivo flow monitoring of ultra-low-flow phenomena that have yet to be studied in metabolic processes, retinal hemorheology and neuroscience.

Flow sensors, also known as flowmeters, are devices used to measure the speed of liquid or gas flows. The speed of biofluidic flow is a key physiological parameter, but existing flow sensors are either bulky or lack precision and stability. The new flow sensor developed by the UMass Amherst team is based on graphene, a single layer of carbon atoms arranged in a honeycomb lattice, which draws charge from a continuous aqueous flow. This phenomenon provides an effective flow-sensing strategy that is self-powered and delivers key performance metrics hundreds of times higher than those of other electrical approaches. The graphene flow sensor can detect flow rates as low as a micrometer per second -- less than four millimeters per hour -- and holds the potential to distinguish minimal changes in blood flow in capillary vessels. The performance of the graphene flow sensor has remained stable for periods exceeding half a year.
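The quoted detection limit is a straightforward unit conversion; for reference:

```python
def um_per_s_to_mm_per_h(speed_um_s: float) -> float:
    """Convert a flow speed from micrometers per second to millimeters per hour."""
    return speed_um_s * 1e-3 * 3600  # micrometers -> mm, seconds -> hours

print(um_per_s_to_mm_per_h(1.0))  # 3.6 mm/h, i.e. "less than four millimeters per hour"
```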

Ping says the device his team created is the first one to be self-powered and high-performance, and it holds the potential to be implanted for long-term biofluidic flow monitoring. The most straightforward application, he added, may be in healthcare. To implant a micro flow monitor like the one his team developed in a small blood vessel is much simpler and safer than existing flowmeters, which are not suitable for low-flow measurement and need to be installed in a larger blood vessel. Ping added that scientists and doctors may find it useful for their research and clinical applications, such as monitoring the blood flow velocity in deep-brain vessels to understand the functioning of neurons that control the flow of blood.

Graphene is the key material in the development of the sensor, Ping said. The unique combination of graphene's intrinsic properties -- ultra-high sensitivity, ultra-low electrical noise, minimal contact electrification with aqueous solutions, outstanding chemical and mechanical stability, and immunity to biofouling -- works together to give the flow sensor its high performance.

Next steps for Ping and his team include integrating the flow sensor into a self-sustained flow monitoring device and exploring the application of the device in healthcare.

Credit: 
University of Massachusetts Amherst

Penguin hemoglobin evolved to meet oxygen demands of diving

image: Nebraska researchers Jay Storz (left) and Anthony Signore with penguins at Omaha's Henry Doorly Zoo. Storz, Signore and their colleagues resurrected two ancient versions of hemoglobin, demonstrating how the blood of penguins evolved to help them better hold their breath while hunting for seafood.

Image: 
Craig Chandler, University of Nebraska-Lincoln

Call it the evolutionary march of the penguins.

More than 50 million years ago, the lovable tuxedoed birds began leaving their avian relatives at the shoreline by waddling to the water's edge and taking a dive in the pursuit of seafood.

Webbed feet, flipper-like wings and unique feathers all helped penguins adapt to their underwater excursions. But new research from the University of Nebraska-Lincoln has shown that the evolution of diving is also in their blood, which optimized its capture and release of oxygen to ensure that penguins wouldn't waste their breath while holding it.

Relative to land-dwelling birds, penguin blood is known to contain more hemoglobin: the protein that picks up oxygen from the lungs and transports it through the bloodstream before dropping it off at various tissues. That abundance could partly explain the underwater proficiency of, say, the emperor penguin, which dives deeper than any bird and has been documented holding its breath for more than 30 minutes while preying on krill, fish and squid.

Still, the particulars of their hemoglobin -- and how much it actually evolved to help penguins become fish-gobbling torpedoes that spend up to half of their lives underwater -- remained open questions. So Nebraska biologists Jay Storz and Anthony Signore, who often study the hemoglobin of birds that survive miles above sea level, decided to investigate the birds most adept at diving beneath it.

"There just wasn't a lot of comparative work on blood-oxygen transport as it relates to diving physiology in penguins and their non-diving relatives," said Signore, a postdoctoral researcher in Storz's lab.

Answering those questions meant sketching in the genetic blueprints of two ancient hemoglobins. One belonged to the common ancestor of all penguin species, which began branching from that ancestor about 20 million years ago. The other, dating back roughly 60 million years, resided in the common ancestor of penguins and their closest non-diving relatives -- albatrosses, shearwaters and other flying seabirds. The thinking was simple: Because one hemoglobin originated before the emergence of diving in the lineage, and the other after, any major differences between the two would implicate them as important to the evolution of diving in penguins.

Actually comparing the two was less simple. To start, the researchers literally resurrected both proteins by relying on models that factored in the gene sequences of modern hemoglobins to estimate the sequences of their two ancient counterparts. Signore spliced those resulting sequences into E. coli bacteria, which churned out the two ancient proteins. The researchers then ran experiments to evaluate the performance of each.

They found that the hemoglobin from the common ancestor of penguins captured oxygen more readily than did the version present in the blood of the older, non-diving ancestor. That stronger affinity for oxygen would mean less chance of leaving oxygen behind in the lungs, an especially vital issue for semi-aquatic birds needing to make the most of a single breath while hunting or traveling underwater.

Unfortunately, the very strength of that affinity can present difficulties when hemoglobin arrives at tissues starved for the oxygen it's carrying.

"Having a greater hemoglobin-oxygen affinity sort of acts like a stronger magnet to pull more oxygen from the lungs," Signore said. "It's great in that context. But then you're at a loss when it's time to let go."

Any breath-holding benefits gained by picking up extra oxygen, in other words, can be undone if the hemoglobin struggles to relax its iron-clad grip and release its prized cargo. The probability that it will let go is dictated in part by the acidity and carbon dioxide in the blood. Higher levels of either make hemoglobins more likely to loosen up.

As Storz and Signore expected, the hemoglobin of the recent penguin ancestor was more sensitive to its surrounding pH, with its biochemical grip on oxygen loosening more in response to elevated acidity. And that, Signore said, made the hemoglobin more biochemically attuned to the exertion and oxygen needs of the tissues it served.

"It really is a beautiful system, because tissues that are working hard are becoming acidic," he said. "They need more oxygen, and hemoglobin's oxygen affinity is able to shift in response to that acidity to provide more oxygen.

"If pH drops by, say, 0.2 units, the oxygen affinity of penguin hemoglobin is going to decrease by more than would the hemoglobin of their non-diving relatives."

Together, the findings indicate that as penguins took to the seas, their hemoglobin evolved to maximize both the pick-up and drop-off of available oxygen -- especially when it was last inhaled five, or 10, or even 20 minutes earlier. They also illustrate the value of resurrecting proteins that last existed 20, or 40, or even 60 million years ago.

"These results demonstrate how the experimental analysis of ancestral proteins can reveal the mechanisms of biochemical adaptation," Storz said, "and also shed light on how organismal physiology evolved in response to new environmental challenges."

Credit: 
University of Nebraska-Lincoln

Lung cancer resistance: the key is glucose

image: Histological staining of a lung adenocarcinoma, which is made of tumor cells as well as cells of the immune microenvironment including tumor-associated neutrophils.

Image: 
Caroline Contat (EPFL).

Cancers are not only made of tumor cells. In fact, as they grow, they develop an entire cellular ecosystem within and around them. This "tumor microenvironment" is made up of multiple cell types, including cells of the immune system, like T lymphocytes and neutrophils.

The tumor microenvironment has predictably drawn a lot of interest from cancer researchers, who are constantly searching for potential therapeutic targets. When it comes to the immune cells, most research focuses on T lymphocytes, which have become primary targets of cancer immunotherapy - a cancer therapy that turns the patient's own immune system against the tumor.

But there is another type of immune cell in the tumor microenvironment whose importance in cancer development has been overlooked: neutrophils, which form part of the body's immediate or "innate" immune response to microbes. The question, currently debated among scientists, is whether neutrophils help or inhibit the tumor's growth.

Now, a team of researchers led by Etienne Meylan at EPFL's School of Life Sciences has discovered that the metabolism of neutrophils determines their tumor-supportive behavior in lung cancer development. The study is published in Cancer Research, a journal of the American Association for Cancer Research.

What intrigued the scientists was that cell metabolism in cancer becomes deregulated. Being neutrophil specialists, they considered the possibility that when these cells reside within the tumor microenvironment, their metabolism may also change, and that could affect how they contribute to the cancer's growth.

Focusing on glucose metabolism in a genetically-engineered mouse model of lung adenocarcinoma, the scientists isolated tumor-associated neutrophils (TANs) and compared them to neutrophils from healthy lungs.

What they found was surprising: the TANs take up and metabolize glucose much more efficiently than neutrophils from healthy lungs. The researchers also found that TANs express a higher amount of a protein called Glut1, which sits on the cell's surface and enables increased glucose uptake and use.

"To understand the importance of Glut1 in neutrophils during lung tumor development in vivo, we used a sophisticated system to remove Glut1 specifically from neutrophils," says Pierre-Benoit Ancey, the study's first author. "Using this approach, we identified that Glut1 is essential to prolong neutrophil lifespan in tumors; in the absence of Glut1, we found younger TANs in the microenvironment."

Using X-ray microtomography to monitor adenocarcinomas, the researchers found that removing Glut1 from TANs led to lower tumor growth rate but also increased the efficacy of radiotherapy, a common treatment for lung cancer. In other words, the ability of TANs to metabolize glucose efficiently seems to bestow the tumor with the ability to resist treatment - at least in lung cancer.

The scientists think that, because Glut1 loss diminishes the lifespan of TANs, their "age" determines whether they play a pro- or anti-tumor role. "Usually, we don't know how to target neutrophils, because they are so important in innate immunity," says Etienne Meylan. "Our study shows that their altered metabolism in cancer could be a new Achilles heel to consider in future treatment strategies. Undoubtedly, we are only beginning to learn about these fascinating cells in cancer."

Credit: 
Ecole Polytechnique Fédérale de Lausanne

Does 'harsh parenting' lead to smaller brains?

image: MRI images of smaller brain structures in youth who have experienced harsh parenting practices.

Image: 
Sabrina Suffren

Repeatedly getting angry, hitting, shaking or yelling at children is linked with smaller brain structures in adolescence, according to a new study published in Development and Psychopathology. It was conducted by Sabrina Suffren, PhD, at Université de Montréal and the CHU Sainte-Justine Research Centre in partnership with researchers from Stanford University.

The harsh parenting practices covered by the study are common and even considered socially acceptable by most people in Canada and around the world.

"The implications go beyond changes in the brain. I think what's important is for parents and society to understand that the frequent use of harsh parenting practices can harm a child's development," said Suffren, the study's lead author. "We're talking about their social and emotional development, as well as their brain development."

Emotions and brain anatomy

Serious child abuse (such as sexual, physical and emotional abuse), neglect and even institutionalization have been linked to anxiety and depression later in life.

Previous studies have already shown that children who have experienced severe abuse have a smaller prefrontal cortex and amygdala, two structures that play a key role in emotional regulation and in the emergence of anxiety and depression.

In this study, researchers observed that the same brain regions were smaller in adolescents who had repeatedly been subjected to harsh parenting practices in childhood, even though the children did not experience more serious acts of abuse.

"These findings are both significant and new. It's the first time that harsh parenting practices that fall short of serious abuse have been linked to decreased brain structure size, similar to what we see in victims of serious acts of abuse," said Suffren, who completed the work as part of her doctoral thesis at UdeM's Department of Psychology, under the supervision of Professors Françoise Maheu and Franco Lepore.

She added that a study published in 2019 "showed that harsh parenting practices could cause changes in brain function among children, but now we know that they also affect the very structure of children's brains."

Children monitored since birth at CHU Sainte-Justine

One of this study's strengths is that it used data from children who had been monitored since birth at CHU Sainte-Justine in the early 2000s by Université de Montréal's Research Unit on Children's Psychosocial Maladjustment (GRIP) and the Quebec Statistical Institute. The monitoring was organized and carried out by GRIP members Dr. Jean Séguin, Dr. Michel Boivin and Dr. Richard Tremblay.

As part of this monitoring, parenting practices and child anxiety levels were evaluated annually while the children were between the ages of 2 and 9. This data was then used to divide the children into groups based on their exposure (low or high) to persistently harsh parenting practices.

"Keep in mind that these children were constantly subjected to harsh parenting practices between the ages of 2 and 9. This means that differences in their brains are linked to repetitive exposure to harsh parenting practices during childhood," said Suffren who worked with her colleagues to assess the children's anxiety levels and perform anatomical MRIs on them between the ages of 12 and 16.

This study is the first to try to identify the links between harsh parenting practices, children's anxiety and the anatomy of their brains.

Credit: 
University of Montreal

Virtues of modeling many faults: New method illuminates shape of Alaskan quake

image: Summary of study area and result. Upper panels summarise the regional map of the study area, showing the plate boundary (dashed line), seafloor fracture zones (solid lines), the epicentre (star) of the 2018 Gulf of Alaska earthquake and the aftershocks (dots). The lower-left panel shows an enlarged map of our result. Blue lines are our estimate of the faults, with the fault movements indicated as arrows. The lower-right panel shows the spatiotemporal distribution of the slip migration, projected along the north-south direction. The dashed rectangles highlight the rupture events recognised by this study.

Image: 
University of Tsukuba

Tsukuba, Japan - An earthquake is generally viewed as a rupture along a fault that spreads outward from its point of origin in a uniform, predictable pattern. Of course, given the complexity of the environments where these ruptures typically occur, the reality is often much more complicated.

In a new study published in Scientific Reports, a research team led by the University of Tsukuba developed a new method to model the details of complex earthquake rupture processes affecting systems of multiple faults. They then applied this method to the magnitude 7.9 earthquake that struck the Gulf of Alaska near Kodiak Island on January 23, 2018.

As study co-author Professor Yuji Yagi explains, "Our method uses a flexible finite-fault inversion framework with improved smoothness constraints. This approach allows us to analyze seismic P waves and estimate the focal mechanisms and rupture evolution of geometrically complex earthquakes involving rupture of multiple fault segments."

Based on the distribution of aftershocks within one week of the main shock of the Gulf of Alaska earthquake, this method was applied to represent slip along a horizontal plane at a depth of 33.6 km.

The main rupture stage of the earthquake, which lasted for 27 seconds, affected fault segments oriented both north-south and east-west.

"Our results confirm previous reports that this earthquake ruptured a conjugate fault system in a multi-shock sequence," says study first author Shinji Yamashita. "Our model further suggests that this rupture tended to occur along weak zones in the sea floor: fracture zones that extend east-west, as well as plate-bending faults that run parallel to north-south-oriented magnetic lineaments."

These features caused discontinuities in the fault geometry that led to irregular rupture behavior. "Our findings show that irregular rupture stagnation 20 kilometers north of the earthquake's epicenter may have been promoted by a fault step across the seafloor fracture zone," explains co-author Assistant Professor Ryo Okuwaki. "They also indicate a causal link between rupture evolution and pre-existing bathymetric features in the Gulf of Alaska."

This method represents a promising step forward in modeling earthquake rupture processes in complex fault systems based only on seismic body waves, which may improve modeling of seismic wave propagation and mapping of complex fault networks in tectonically active areas.

Credit: 
University of Tsukuba

Direct reprogramming of oral epithelial cells into mesenchymal-like cells

image: The expression levels of Cd31, a marker for endothelial cells, and Cd90, a marker for mesenchymal stem cells, in the endothelial- and mesenchymal stem cell-like cells differentiated from progenitor-stem-like cells were similar to those in the endothelial and mesenchymal stem cells.

Image: 
© Scientific Reports

Point

Epithelial cell rests of Malassez (ERM) derived from the periodontal ligament were transformed into progenitor stem-like cells by stimulation with epigenetic agents.

Subsequently, the progenitor stem-like cells were directly differentiated into endothelial, mesenchymal stem, and osteogenic cells that constitute the periodontal ligament.

Background

Stem cells derived from the dental pulp or periodontal ligament have been used for regenerative dentistry. Although it is relatively easy to collect dental pulp stem cells, it is difficult to obtain adequate numbers of good-quality cells, and a stable supply is required for their application in regenerative dentistry. In the present study, we generated progenitor stem-like cells from ERM cells in the periodontal ligament using epigenetic modifications, without gene transfer. The progenitor stem-like cells were then differentiated into endothelial, mesenchymal stem, and osteogenic cells--which constitute the periodontal ligament--using a direct reprogramming method that induces the differentiation of progenitor stem cells into the target cells.

Methods and Results

The isolated ERM cells were induced to differentiate into progenitor stem-like cells following stimulation with the epigenetic agents 5-azacytidine and valproic acid for 1 week. The progenitor stem-like cells expressed the stem cell marker proteins NANOG and OCT3/4, which are also observed in iPS cells (Figure 1).

The progenitor stem-like cells were differentiated into endothelial, mesenchymal stem, and osteogenic cells. The expression of the specific marker for each cell type was confirmed (Figures 2 and 3).

Future prospects

The findings of the present study may contribute to the development of new periodontal regenerative therapy. Epigenetic agents have been successfully applied in various human diseases, including cancers. Nonetheless, further investigations are needed to confirm the findings of this study.

Credit: 
Health Sciences University of Hokkaido

Researchers' algorithm designs soft robots that sense

image: MIT researchers have developed a deep learning neural network to aid the design of soft-bodied robots, such as these iterations of a robotic elephant.

Image: 
Courtesy of Alexander Amini, Andrew Spielberg, Daniela Rus, Wojciech Matusik, Lillian Chin, et al.

There are some tasks that traditional robots -- the rigid and metallic kind -- simply aren't cut out for. Soft-bodied robots, on the other hand, may be able to interact with people more safely or slip into tight spaces with ease. But for robots to reliably complete their programmed duties, they need to know the whereabouts of all their body parts. That's a tall task for a soft robot that can deform in a virtually infinite number of ways.

MIT researchers have developed an algorithm to help engineers design soft robots that collect more useful information about their surroundings. The deep-learning algorithm suggests an optimized placement of sensors within the robot's body, allowing it to better interact with its environment and complete assigned tasks. The advance is a step toward the automation of robot design. "The system not only learns a given task, but also how to best design the robot to solve that task," says Alexander Amini. "Sensor placement is a very difficult problem to solve. So, having this solution is extremely exciting."

The research will be presented during April's IEEE International Conference on Soft Robotics and will be published in the journal IEEE Robotics and Automation Letters. Co-lead authors are Amini and Andrew Spielberg, both PhD students in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). Other co-authors include MIT PhD student Lillian Chin and professors Wojciech Matusik and Daniela Rus.

Creating soft robots that complete real-world tasks has been a long-running challenge in robotics. Their rigid counterparts have a built-in advantage: a limited range of motion. Rigid robots' finite array of joints and limbs usually makes for manageable calculations by the algorithms that control mapping and motion planning. Soft robots are not so tractable.

Soft-bodied robots are flexible and pliant -- they generally feel more like a bouncy ball than a bowling ball. "The main problem with soft robots is that they are infinitely dimensional," says Spielberg. "Any point on a soft-bodied robot can, in theory, deform in any way possible." That makes it tough to design a soft robot that can map the location of its body parts. Past efforts have used an external camera to chart the robot's position and feed that information back into the robot's control program. But the researchers wanted to create a soft robot untethered from external aid.

"You can't put an infinite number of sensors on the robot itself," says Spielberg. "So, the question is: How many sensors do you have, and where do you put those sensors in order to get the most bang for your buck?" The team turned to deep learning for an answer.

The researchers developed a novel neural network architecture that both optimizes sensor placement and learns to efficiently complete tasks. First, the researchers divided the robot's body into regions called "particles." Each particle's rate of strain was provided as an input to the neural network. Through a process of trial and error, the network "learns" the most efficient sequence of movements to complete tasks, like gripping objects of different sizes. At the same time, the network keeps track of which particles are used most often, and it culls the lesser-used particles from the set of inputs for the network's subsequent trials.
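The architecture itself isn't published in this release, but the culling step described above can be sketched schematically as follows (the scoring rule and numbers are hypothetical, not the authors' exact criterion):

```python
import numpy as np

def cull_particles(importance: np.ndarray, keep_fraction: float = 0.8) -> np.ndarray:
    """Keep the indices of the most 'important' particle inputs and drop the rest.

    `importance` is any per-particle score accumulated over training trials
    (the paper tracks how often each particle is used; this scoring rule is a
    placeholder, not the authors' exact criterion).
    """
    n_keep = max(1, int(len(importance) * keep_fraction))
    return np.argsort(importance)[::-1][:n_keep]

# Toy example: 10 particles with usage scores accumulated over simulated trials.
scores = np.array([0.9, 0.1, 0.4, 0.8, 0.05, 0.7, 0.2, 0.6, 0.3, 0.95])
active = cull_particles(scores, keep_fraction=0.5)
print(sorted(active.tolist()))  # surviving particle indices -> candidate sensor sites
```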

By optimizing the most important particles, the network also suggests where sensors should be placed on the robot to ensure efficient performance. For example, in a simulated robot with a grasping hand, the algorithm might suggest that sensors be concentrated in and around the fingers, where precisely controlled interactions with the environment are vital to the robot's ability to manipulate objects. While that may seem obvious, it turns out the algorithm vastly outperformed humans' intuition on where to site the sensors.

The researchers pitted their algorithm against a series of expert predictions. For three different soft robot layouts, the team asked roboticists to manually select where sensors should be placed to enable the efficient completion of tasks like grasping various objects. Then they ran simulations comparing the human-sensorized robots to the algorithm-sensorized robots. And the results weren't close. "Our model vastly outperformed humans for each task, even though I looked at some of the robot bodies and felt very confident on where the sensors should go," says Amini. "It turns out there are a lot more subtleties in this problem than we initially expected."

Spielberg says their work could help to automate the process of robot design. In addition to developing algorithms to control a robot's movements, "we also need to think about how we're going to sensorize these robots, and how that will interplay with other components of that system," he says. And better sensor placement could have industrial applications, especially where robots are used for fine tasks like gripping. "That's something where you need a very robust, well-optimized sense of touch," says Spielberg. "So, there's potential for immediate impact."

"Automating the design of sensorized soft robots is an important step toward rapidly creating intelligent tools that help people with physical tasks," says Rus. "The sensors are an important aspect of the process, as they enable the soft robot to "see" and understand the world and its relationship with the world."

Credit: 
Massachusetts Institute of Technology

Diamond color centers for nonlinear photonics

image: The nonlinear emission spectrum from a diamond crystal with NV centers (NV diamond) excited with an IR laser (1350 nm). Both SHG and THG are simultaneously generated, at 675 nm and 450 nm, respectively. The inset photograph was taken during the nonlinear emission (SHG and THG) from the NV diamond.

Image: 
University of Tsukuba

Tsukuba, Japan - Researchers from the Department of Applied Physics at the University of Tsukuba demonstrated second-order nonlinear optical effects in diamond by taking advantage of internal color-center defects that break the inversion symmetry of the diamond crystal. This research may lead to faster internet communications and all-optical computers, and could even open a route to next-generation quantum sensing technologies.

Current fiber-optic technology uses light pulses to transfer broad-bandwidth data that let you check your email, watch videos, and do everything else on the Internet. The main drawback is that light pulses hardly interact with each other, so the information must be converted into electrical signals before your computer can handle it. An "all-optical" system with light-based logic processing would be much faster and more efficient. This would require new, easy-to-fabricate nonlinear optical materials that can mediate the combination or splitting of photons.

Now, a team of researchers at the University of Tsukuba has shown that synthetic diamonds can exhibit a second-order nonlinear response. Previously, scientists thought that the inversion-symmetric nature of the diamond crystal lattice could only support weaker, odd-order nonlinear optical effects, which depend on the electric field amplitude raised to the power of three, five, and so on. But the team showed that diamonds can support second-order nonlinear optical effects when color centers--so-called nitrogen-vacancy (NV) centers--are introduced. In these cases, two adjacent carbon atoms in the diamond's rigid lattice are replaced with a nitrogen atom and a vacancy. This breaks the inversion symmetry and permits even-order nonlinear processes to occur, which include more useful outcomes that scale as the electric field squared. "Our work allows us to produce powerful second-order nonlinear optical effects, such as second harmonic generation and the electro-optic effect, in bulk diamonds," senior author Professor Muneaki Hase says.

The team used chemical-vapor-deposited single-crystal diamonds (from Element Six), with extra nitrogen ions implanted to encourage the formation of NV centers. The emission spectrum they observed when the diamonds were excited with 1350-nm light showed clear second- and third-harmonic peaks (Figure 1). These peaks represent the merging of two or three photons, respectively, into a single photon of higher energy. "In addition to new photonic devices, second-order nonlinear optical effects from NV centers in diamonds might be used as the basis of quantum sensing of electromagnetic fields," first author Dr. Aizitiaili Abulikemu says. Because diamonds are already used in industrial applications, they have the advantage of being relatively easy to adapt to optical uses.
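The harmonic wavelengths quoted above follow directly from the photon-merging picture; a quick check in Python:

```python
PLANCK_EV_NM = 1239.84          # photon energy (eV) x wavelength (nm)

pump_nm = 1350.0
shg_nm = pump_nm / 2            # second harmonic: two pump photons merge
thg_nm = pump_nm / 3            # third harmonic: three pump photons merge
print(shg_nm, thg_nm)           # 675.0 nm and 450.0 nm, as observed

# Equivalent photon energies: the pump (~0.92 eV) doubles to ~1.84 eV at the SHG.
print(round(PLANCK_EV_NM / pump_nm, 2), round(PLANCK_EV_NM / shg_nm, 2))
```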

Credit: 
University of Tsukuba

Night owls with gestational diabetes may face higher risk of pregnancy complications

WASHINGTON--Among women who develop diabetes during pregnancy, night owls have a higher risk of complications for mother and baby than early birds do, according to a study whose results will be presented at ENDO 2021, the Endocrine Society's annual meeting.

Compared with other pregnant women with gestational diabetes, those with a preference for evening activity had three times the chance of developing preeclampsia, which is pregnancy-induced high blood pressure, and four times the rate of their newborns being treated in a neonatal intensive care unit, the study investigators reported.

These findings suggest a new potential health risk from disturbances in the body's 24-hour internal clock, specifically the sleep-wake cycle, said lead investigator Cristina Figueiredo Sampaio Facanha, M.D., an endocrinologist with the diabetes center of the Federal University of Ceara (Universidade Federal do Ceara) in Fortaleza, Brazil.

"Circadian rhythm disturbances could add an additional risk of poor pregnancy outcomes in women with gestational diabetes," said Facanha, who heads the center's department of diabetes in pregnancy.

Gestational diabetes, a form of diabetes during pregnancy, can on its own raise the mother's risk of premature delivery and preeclampsia, as well as the baby's risk of growing too large in the womb or having breathing problems after birth. This type of diabetes affects four to eight of every 100 pregnant women in the U.S. and 10 of every 100 pregnant women in Brazil, according to estimates from the Hormone Health Network and the International Diabetes Federation.

Past research shows associations between adverse health effects and chronotype, an individual's propensity for when to sleep and when to feel peak energy. An evening chronotype describes when someone feels more active in the evening and goes to bed very late, whereas early risers have a morning chronotype. People with the evening chronotype tend to have more depression, anxiety, night eating and unhealthy lifestyle than do those with a morning chronotype, Facanha noted.

"Hormones, blood pressure and glucose, or blood sugar, metabolism follow circadian rhythms that synchronize with a master clock in the brain," Facanha said. "When the circadian rhythm for the sleep-wake cycle is thrown off, it not only can create sleep problems but also can interfere with glucose metabolism, affecting pregnancy health."

The researchers wanted to learn whether chronotype influences pregnancy complications in women with gestational diabetes and their newborns. They studied 305 women with this type of diabetes during the second and third trimesters of pregnancy. The women completed questionnaires about their chronotype preferences, sleep quality, daytime sleepiness and symptoms of depression.

Nearly half of study participants--151 women--had a morning chronotype, consistent with recent research findings that pregnancy induces an earlier chronotype, Facanha said. Another 21 women had the evening chronotype. The remaining 133 participants had no strong chronotype and were classified as having an intermediate type.

Compared with women who had other chronotypes, women with an evening preference reported significantly greater symptoms of depression both before and after pregnancy as well as worse sleep quality, insomnia and daytime sleepiness, the data showed. Even after the researchers controlled for depression symptoms and sleep variables in their statistical analyses, evening chronotype remained an independent risk factor for preeclampsia, Facanha said.

She suggested that women with gestational diabetes should be screened to determine their chronotype, using a simple questionnaire, as part of routine prenatal care. "It might be helpful in the prediction of complications in pregnancy," Facanha said.

"Women may be able to reduce their evening preference," Facanha said. "A change in habits and increased exposure to morning natural light, exercise and a reduction in blue screen light is an accessible form of treatment that can potentially improve health measures in pregnancy."

Credit: 
The Endocrine Society

Sleep disturbances may contribute to weight gain in menopause

WASHINGTON--Addressing sleep symptoms during menopause may reduce susceptibility to weight gain, according to a small study presented virtually at ENDO 2021, the Endocrine Society's annual meeting.

"Our findings suggest that not only estrogen withdrawal but also sleep disturbances during menopause may contribute to changes in a woman's body that could predispose midlife women to weight gain," said lead researcher Leilah Grant, Ph.D., of Brigham and Women's Hospital in Boston, Mass. "Helping women sleep better during menopause may therefore reduce the chances a woman will gain weight, which in turn will lower her risk of diabetes and other related diseases."

Rates of obesity increase in women around the age of menopause. Menopause-related weight gain is often thought to be caused by the withdrawal of the female hormone estrogen. Estrogen is unlikely to be the only contributing factor, however, since all women stop producing estrogen in menopause while only about half of women gain weight, Grant said. Another common symptom, also affecting around half of women during menopause, is sleep disturbance, which has independently been linked to changes in metabolism that might increase the risk of weight gain.

To better understand the role of sleep disturbances and hormonal changes in menopausal weight gain, the researchers studied 21 healthy pre-menopausal women. They used an experimental model simulating the sleep disturbance experienced in menopause to examine the effects of poor sleep on the body's use of fat.

Participants had two nights of uninterrupted sleep followed by three nights of interrupted sleep, where they were woken by an alarm every 15 minutes for 2 minutes each time. The researchers then restudied a subset of nine participants in the same sleep interruption protocol after they were given a drug called leuprolide, which temporarily suppressed estrogen to levels similar to menopause.

Compared to a normal night of sleep, after three nights of disturbed sleep there was a significant reduction in the rate at which the women's bodies used fat. A similar reduction in fat utilization was also seen when estrogen was suppressed, even during normal sleep. The combination of low estrogen and sleep disturbance also reduced fat utilization, but the effect was not larger than that of either exposure on its own.

"In addition to estrogen withdrawal, sleep disturbances decrease fat utilization," Grant said. "This may increase the likelihood of fat storage and subsequent weight gain during menopause."

Credit: 
The Endocrine Society

'Hunger hormone' ghrelin affects monetary decision making

WASHINGTON--Higher levels of the stomach-derived hormone ghrelin, which stimulates appetite, predict a greater preference for smaller immediate monetary rewards over larger delayed financial rewards, a new study finds. The study results will be presented at ENDO 2021, the Endocrine Society's annual meeting.

This research presents novel evidence in humans that ghrelin, the so-called "hunger hormone," affects monetary decision making, said co-investigator Franziska Plessow, Ph.D., assistant professor of medicine at Massachusetts General Hospital and Harvard Medical School, Boston. She said recent research findings in rodents suggested that ghrelin may play a part in impulsive choices and behaviors.

"Our results indicate that ghrelin might play a broader role than previously acknowledged in human reward-related behavior and decision making, such as monetary choices," Plessow said. "This will hopefully inspire future research into its role in food-independent human perception and behavior."

Ghrelin signals the brain for the need to eat and may modulate brain pathways that control reward processing. Levels of ghrelin fluctuate throughout the day, depending on food intake and individual metabolism.

This study included 84 female participants ages 10 to 22 years: 50 with a low-weight eating disorder, such as anorexia nervosa, and 34 healthy control participants. Plessow's research team tested blood levels of total ghrelin before and after a standardized meal that was the same for all participants, who had fasted beforehand. After the meal, participants took a test of hypothetical financial decisions, called the delay discounting task. They were asked to make a series of choices to indicate their preference for a smaller immediate monetary reward or a larger delayed amount of money, for instance, $20 today or $80 in 14 days.
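The delay discounting task is commonly summarized with a hyperbolic discounting model; as a rough illustration of how the $20-now versus $80-in-14-days choice maps onto a discounting rate (a textbook model, not necessarily the study's own analysis):

```python
def discounted_value(amount: float, delay_days: float, k: float) -> float:
    """Hyperbolic discounting: subjective value of a delayed reward."""
    return amount / (1.0 + k * delay_days)

def prefers_immediate(immediate: float, delayed: float, delay_days: float, k: float) -> bool:
    """True if the smaller immediate reward subjectively outweighs the delayed one."""
    return immediate >= discounted_value(delayed, delay_days, k)

# For the $20-now vs. $80-in-14-days choice, indifference occurs at
# k = (80/20 - 1) / 14, roughly 0.21 per day; steeper discounters take the $20.
for k in (0.05, 0.3):   # illustrative low vs. high discounting rates
    print(k, prefers_immediate(20, 80, 14, k))   # False, then True
```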

Healthy girls and young women with higher ghrelin levels were more likely to choose the immediate but smaller monetary reward rather than waiting for a larger amount of money, the researchers reported. This preference indicates more impulsive choices, Plessow said.

The relationship between ghrelin level and monetary choices was absent in age-matched participants with a low-weight eating disorder. People with this eating disorder are known to have ghrelin resistance, and Plessow said their finding might be another indicator of a disconnect between ghrelin signaling and behavior in this population.

The study received funding from the National Institutes of Health and a Charles A. King Trust Research Fellowship Award to Plessow. Naila Shiraliyeva, M.D., a research fellow at Massachusetts General Hospital, will present the study findings at the meeting.

Credit: 
The Endocrine Society

Real "doodles of light" in real-time mark leap for holograms at home

video: Handwriting on a tablet is converted into 3D images in real-time using a standard desktop PC.

Image: 
Tokyo Metropolitan University

Tokyo, Japan - Researchers from Tokyo Metropolitan University have devised and implemented a simplified algorithm for turning freely drawn lines into holograms on a standard desktop CPU, dramatically cutting the computational cost and power consumption compared with algorithms that require dedicated hardware. The method is fast enough to convert handwriting into holographic lines in real time, and it produces crisp, clear images that meet industry standards. Potential applications include hand-written remote instructions superimposed on landscapes and workbenches.

Flying cars, robots, spaceships...whatever sci-fi future you can imagine, there is always a common feature: holograms. But holography isn't just about aesthetics. Its potential applications include important enhancements to vital, practical tasks, like remote instructions for surgical procedures, electronic assembly on circuit boards, or directions projected on landscapes for navigation. Making holograms available in a wide range of settings is vital to bringing this technology out of the lab and into our daily lives.

One of the major drawbacks of this state-of-the-art technology is the computational load of hologram generation. The kind of quality we've come to expect from our 2D displays is prohibitive in 3D, requiring supercomputing levels of number crunching to achieve. There is also the issue of power consumption. More widely available hardware like the GPUs in gaming rigs might be able to overcome some of these issues with raw power, but the amount of electricity they use is a major impediment to mobile applications. Despite improvements to available hardware, the solution is not something we can expect from brute force.

A key solution is to limit the kind of images that are projected. Now, a team led by Assistant Professor Takashi Nishitsuji has proposed and implemented a solution with unprecedented performance. They chose to exclusively draw lines in 3D space. Though this may sound drastic at first, the number of things you can do with lines is still impressive. In a particularly elegant implementation, they connected a tablet to a PC and conventional hologram-generation hardware, i.e. a laser and a spatial light modulator. Their algorithm is fast enough that handwriting on the tablet could be converted to images in the air in real time. The PC they used was a standard desktop with no GPU, significantly expanding where the system might be implemented. Though the images were slightly inferior in quality to those from other, more computationally intensive methods, the sharpness of the writing comfortably met industry standards.
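The release doesn't spell out the algorithm, but the baseline it improves on is the standard point-source computer-generated hologram, in which each point sampled along a drawn line contributes a spherical-wave (zone-plate) phase pattern. A rough sketch of that baseline with illustrative parameters - not the team's accelerated method:

```python
import numpy as np

# Generic point-source CGH: superpose spherical-wave phases from points sampled
# along a drawn line, then keep the phase for a phase-only spatial light modulator.
# This is the textbook baseline, NOT the Tokyo Metropolitan University algorithm;
# wavelength, pitch, resolution and geometry are all illustrative.
wavelength = 532e-9          # laser wavelength, m
pitch = 8e-6                 # SLM pixel pitch, m
N = 512                      # hologram resolution (N x N pixels)
z = 0.2                      # reconstruction distance, m

ys, xs = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
X = (xs - N / 2) * pitch     # physical pixel coordinates on the SLM plane
Y = (ys - N / 2) * pitch

# Sample a short line segment drawn in the image plane (coordinates in meters).
t = np.linspace(0.0, 1.0, 64)
line_x = -0.5e-3 + t * 1.0e-3
line_y = np.zeros_like(t)

field = np.zeros((N, N), dtype=complex)
for px, py in zip(line_x, line_y):
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + z ** 2)
    field += np.exp(1j * 2 * np.pi * r / wavelength)   # spherical wave per sampled point

phase_hologram = np.angle(field)   # phase-only pattern to send to the SLM
```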

All this means that holograms might soon be arriving in our homes or workplaces. The team is especially focused on implementations in heads-up displays (HUDs) in helmets and cars, where navigation instructions might be displayed on the landscape instead of voice instructions or distracting screens. The light computational load of the algorithm significantly expands the horizons for this promising technology; that sci-fi "future" might not be the future for much longer.

Credit: 
Tokyo Metropolitan University