
Engaging in family meals starts with healthy family communication

audio: Keeley J. Pratt, PhD, talks about the first study specifically looking at family meal practices among adult patients enrolled in weight-management or weight-loss surgery programs.

Image: 
Journal of Nutrition Education and Behavior

Philadelphia, June 8, 2020 - Engaging in family meals may be a matter of improving communication and support at home. A new study in the Journal of Nutrition Education and Behavior, published by Elsevier, links less family discouragement and better family communication with a higher likelihood of eating evening family meals and family breakfasts together, away from the television.

Researchers studied 259 parents who were also patients at accredited weight-management and bariatric-surgery facilities at The Ohio State University or Wake Forest University. They found that parents who had better family communication and lower discouragement about trying to improve their eating habits were more likely to participate in family meals.

"It's important to note all family members in the home have influence," lead study author Keeley J. Pratt, PhD, The Ohio State University, Columbus, OH, USA, said of the findings that any family member can influence the adoption and maintenance of healthy patterns and behaviors in the home. "Even if someone doesn't have the most power to influence the family (like children), they are all influencing each other."

Previous research has shown parental obesity is typically the strongest risk factor for children to have an obese weight status over time. The study's authors also found parents who perceived their child to be overweight or obese were more than four times as likely to talk to them about the child's weight, also called "weight talk."

While open communication with children about health is beneficial, "it's important to ensure communication directly about children's weight is not harmful in their development of a healthy body image and behaviors. That includes older children and adolescents who are at greater risk of developing eating disorders and disordered eating behaviors," Professor Pratt said.

There was no significant difference between male and female children in this study, other than that families with female children were more likely to eat dinner together without a television five to seven times a week. Families with younger children, regardless of gender, were more likely to eat family dinners and breakfasts together, and parents of older children were more likely to talk about their own weight with the child.

This was the first study specifically looking at family meal practices among adult patients enrolled in weight-management or weight-loss surgery programs.

"Understanding these associations will provide essential evidence needed to design future family-based interventions for these patients to help in their behavior change and weight loss, prevent the onset of obesity in children, and enhance positive family meal practices and healthy communication about weight," Professor Pratt said.

Credit: 
Elsevier

Disjunct distribution across the equator

image: Holotype specimen

Image: 
Pensoft publishing

The riffle beetle family Elmidae is well studied, and 57 species in 17 genera have been reported from Japan so far.
The genus Podonychus Jäch & Kodada, 1997 (Elminae, Macronychini) was hitherto known only from Siberut Island, Indonesia, and had been regarded as monotypic. The genus is unique in having 6-segmented antennae, the smallest number of antennomeres within the family Elmidae. Unexpectedly, in 2018, specimens of this genus were collected by the NPO Kitakyushu Gyobu in Kyushu, Japan. After repeated field investigations and closer examination, it became clear that the specimens represent a new species, closely related to P. sagittarius Jäch & Kodada, 1997. In the present paper, this new species, including its endophallic structures and larva, is described as Podonychus gyobu sp. nov. The name "gyobu" is in honor of NPO Kitakyushu Gyobu.
The genus Podonychus is currently known only from Indonesia and Kyushu, Japan, representing a disjunct distribution across the equator. The reason for this disjunct distribution is unknown at present, but additional species of the genus are expected to be discovered in East and Southeast Asia.

Credit: 
Ehime University

What do electric vehicle drivers think of the charging network they use?

With electric vehicles making their way into the mainstream, building out the nationwide network of charging stations to keep them going will be increasingly important.

A new study from the Georgia Institute of Technology School of Public Policy harnesses machine learning techniques to provide the best insight yet into the attitudes of electric vehicle (EV) drivers about the existing charger network. The findings could help policymakers focus their efforts.

In the paper, published in the June 2020 issue of the journal Nature Sustainability, a team led by Assistant Professor Omar Isaac Asensio describes training a machine learning algorithm to analyze unstructured consumer data from 12,270 electric vehicle charging stations across the U.S.

The study demonstrates how machine learning tools can be used to quickly analyze streaming data for policy evaluation in near-real time. Streaming data refers to data that comes in a continuous feed, such as user reviews from an app. The study also revealed surprising findings about how EV drivers feel about charging stations.

For instance, the conventional wisdom that drivers prefer private stations to public ones appears to be wrong. The study also finds potential problems with charging stations in larger cities, presaging challenges yet to come in creating a robust charging system that meets all drivers' needs.

"Based on evidence from consumer data, we argue that it is not enough to just invest money into increasing the quantity of stations, it is also important to invest in the quality of the charging experience," Asensio wrote.

Perceived Lack of Charging Stations a Barrier to Adoption

Electric vehicles are considered a crucial part of the solution to climate change: transportation is now the leading contributor of climate-warming emissions. But one major barrier to broader adoption of electric vehicles is the perceived lack of charging stations, and the attendant "range anxiety" that makes many drivers nervous about buying an EV.

While that infrastructure has grown considerably in recent years, the work hasn't taken into account what consumers actually want, Asensio said.

"In the early years of EV infrastructure development, most policies were geared to using incentives to increase the quantity of charging stations," Asensio said. "We haven't had enough focus on building out reliable infrastructure that can give confidence to users."

This study helps rectify that shortcoming by offering evidence-based, national analysis of actual consumer sentiment, as opposed to indirect travel surveys or simulated data used in many analyses.

Asensio directed the study with a team of five students in public policy, engineering, and computing. Two were from Georgia Tech: Catharina Hollauer, a recent graduate of the H. Milton Stewart School of Industrial and Systems Engineering, and Sooji Ha, a dual Ph.D. student in the School of Civil and Environmental Engineering and the School of Computational Science and Engineering.

The other three were participants in the 2018 Georgia Tech Civic Data Science Fellows program, which draws talented students from around the country to the Georgia Tech campus for a summer of research and learning. They are Kevin Alvarez of North Carolina State University, Arielle Dror of Smith College, and Emerson Wenzel of Tufts University.

EV Charging Sore Spots Revealed

Asensio's team used deep learning text classification algorithms to analyze data from a popular smartphone app for EV users. The analysis would have taken most of a year using conventional methods, but the team's approach cut the task down to minutes while classifying sentiment with accuracy similar to that of human experts.
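The study's exact deep learning model isn't reproduced here, but the basic task, turning free-text station reviews into sentiment labels, can be sketched with a much simpler bag-of-words Naive Bayes classifier (a deliberately minimal stand-in for the paper's approach; the sample reviews are invented):

```python
import math
from collections import Counter

def train_nb(docs):
    """Train a bag-of-words Naive Bayes sentiment classifier.

    docs: list of (text, label) pairs with label in {"pos", "neg"}.
    Returns per-class word counts, class document counts, and the vocabulary.
    """
    counts = {"pos": Counter(), "neg": Counter()}
    class_totals = Counter()
    for text, label in docs:
        class_totals[label] += 1
        counts[label].update(text.lower().split())
    vocab = set(counts["pos"]) | set(counts["neg"])
    return counts, class_totals, vocab

def classify(text, counts, class_totals, vocab):
    """Return the most likely sentiment label, using add-one smoothing."""
    n_docs = sum(class_totals.values())
    best_label, best_score = None, float("-inf")
    for label in ("pos", "neg"):
        score = math.log(class_totals[label] / n_docs)  # class prior
        total = sum(counts[label].values())
        for word in text.lower().split():
            if word in vocab:
                score += math.log((counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented examples, loosely in the style of charging-station reviews.
reviews = [
    ("fast charger worked great convenient location", "pos"),
    ("free charging and easy access", "pos"),
    ("charger broken again no signage", "neg"),
    ("station blocked and out of service", "neg"),
]
model = train_nb(reviews)
print(classify("charger out of service", *model))  # prints "neg"
```

A production system would replace the word counts with a trained deep learning model, but the pipeline shape (ingest streaming reviews, score sentiment, aggregate by station) is the same.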

The study found that workplace and mixed-use residential stations get low ratings, with frequent complaints about lack of accessibility and signage. Fee-based charging stations tend to get more poor reviews than free charging stations. But it is stations in dense urban centers that really draw complaints, according to the study.

When researchers controlled for location and other characteristics, stations in dense urban areas showed a 12-15% increase in negative sentiment compared to nonurban locations.

This could indicate a broad range of service quality issues in the largest EV markets, including things like malfunctioning equipment and an insufficient number of chargers, Asensio said.

The highest rated stations are often located at hotels, restaurants, and convenience stores, a finding that may support incentive-based management practices in which chargers are installed to draw customers. Stations at public parks and recreation facilities, RV parks, and visitor centers also do well, according to the study.

But, contrary to theories predicting that private stations should provide more efficient services, the study found no statistically significant difference in user preferences when it comes to public versus private chargers.

That finding could be an inducement to invest in public charging infrastructure to meet future growth, Asensio said. Such a network was cited in a study by the National Research Council as key to helping overcome barriers to EV adoption.

Improving Policy Evaluation Beyond EVs

Overall, Asensio said the study points to the need to prioritize consumer data when considering how to build out infrastructure, especially when it comes to requirements for charging stations in new buildings.

But EV policy is not the only way the study's deep learning techniques can be used to analyze this kind of material. They could be adapted to a broad range of energy and transportation issues, allowing researchers to deliver rapid analysis with just minutes of computation, compared to time lags measured sometimes in months or years using more traditional methods.

"The follow-on potential for energy policy is to move toward automated forms of infrastructure management powered by machine learning, particularly for critical linkages between energy and transportation systems and smart cities," Asensio said.

Credit: 
Georgia Institute of Technology

Patterns in permafrost soils could help climate change models

image: Lead author Michael O'Connor sampling permafrost soils on the North Slope of Alaska.

Image: 
Bayani Cardenas/The University of Texas at Austin.

The Arctic covers about 20% of the planet. But almost everything hydrologists know about the carbon-rich soils blanketing its permafrost comes from very few measurements taken just feet from Alaska's Dalton Highway.

The small sample size is a problem, particularly for scientists studying the role of Arctic hydrology on climate change. Permafrost soils hold vast amounts of carbon, which could turn into greenhouse gasses. But the lack of data makes it difficult to predict what will happen to water and carbon as the permafrost melts due to warming temperatures.

New research led by scientists at The University of Texas at Austin may help solve that problem.

The scientists spent the past four summers measuring permafrost soils across a 5,000-square-mile swath of Alaska's North Slope, an area about the size of Connecticut. While working to build up a much-needed soil dataset, their measurements revealed an important pattern: the hydrologic properties of different permafrost soil types are very consistent and can be predicted from the surrounding landscape.

"There is a vast swath of land that is eminently predictable," said Michael O'Connor, who led the research while earning his doctoral degree from the UT Jackson School of Geosciences. "Our paper shows that over an enormous study area, these very simple patterns in these properties hold true."

The study was published in the journal Geophysical Research Letters. Co-authors include researchers from the Jackson School, UT's Cockrell School of Engineering, Utah State University and the University of Michigan.

The researchers examined nearly 300 soil samples from different types of terrain. They found that soil types and their thickness are closely associated with the landscape, with the researchers classifying the landscapes into five categories based on the dominant vegetation and whether the environment was on a hill slope or near the bottom of a river valley.

They also found that each of the three soil types they identified had distinct properties that affect how easily the soil can transfer heat and water - which determine how carbon dioxide and methane, another powerful greenhouse gas, are released.

The findings will allow scientists to look to the landscape to understand how carbon and greenhouse gasses are moving through the soil below.

While the study does not make predictions about carbon release, co-author Bayani Cardenas, a professor in the Jackson School's Department of Geological Sciences, said that it provides a research framework.

"Our data fills a knowledge gap that has been around for 30 years," Cardenas said. "The community studying permafrost and climate change will appreciate its inherent value."

Permafrost locks away about as much carbon as is already in the atmosphere. However, until this study, climate modelers lacked direct permafrost soil information, with the research record limited to about a dozen samples taken along the Dalton Highway and engineering reports that studied permafrost for road and pipeline construction.

Improving the data available to climate scientists was the primary motivation behind the permafrost collection campaign, said O'Connor. The North Slope of Alaska is almost pure wilderness. The research team relied on a helicopter to get around and an 18-inch breadknife to slice blocks of soil from the earth.

"We were in some places that probably no human had set foot on," Cardenas said.

Finding a pattern between the landscape and the permafrost soil patterns did not come as a surprise. Plant ecologists working in the region had mentioned it anecdotally. But the newly published data is something the entire research community can draw on.

Cathy Wilson, a hydrologist and climate modeler at Los Alamos National Laboratory who also conducts permafrost research in Alaska, said that the study is a big step for climate models, and that she is looking forward to applying study techniques in her own work.

"This allows us to really start to scale-up this valuable information on soil properties to at least the North Slope, the foothills of mountain ranges, and beyond," she said.

Credit: 
University of Texas at Austin

You are what you eat is as important for fish as it is for people

image: Zooplankton from Strait of Georgia

Image: 
Brian Hunt/ UBC

There is truth in the saying "you are what you eat"; even more so if you are a salmon or herring swimming off the British Columbia coast, a recent UBC study discovered.

Juvenile salmon and herring will feed on any organisms in the ocean that are in a suitable size range; generally something the size of a marble or smaller. In other words, zooplankton. While researchers know that the amount of, and availability of, zooplankton prey is important to the young fish, little was known about the importance of its nutritional quality.

A new study by scientists at the University of British Columbia, together with scientists from Fisheries and Oceans Canada, measured the biochemical properties of zooplankton in the Strait of Georgia, including essential fatty acids, specifically to answer questions around nutritional value of different zooplankton species, and its spatial and seasonal variability in British Columbia.

"We showed that seasonal changes in plankton food web pathways drive variability in plankton fatty acid composition," said David Costalago, lead author of the study, and, at the time of the study, a MITACS-Pacific Salmon Foundation postdoctoral fellow with the Pelagic Ecosystems Lab at UBC's Institute for the Oceans and Fisheries. "This seasonal shift conferred a higher nutritional value to zooplankton in the summer, indicating better quality prey for juvenile salmon and herring during this period. Essentially, juvenile salmon and herring out-migrating from the rivers into the Strait of Georgia late in the season may encounter better food than the bulk of migrants coming out to the ocean in the earlier May-June period."

In addition to seasonal and spatial differences in zooplankton prey quality, this study found that there are huge differences in nutritional value among zooplankton species. From a juvenile salmon perspective, not all zooplankton are equal.

"Differences in the quality of zooplankton as prey are really important for juvenile salmon. They depend on nutritious food for healthy development," said Brian Hunt, UBC Hakai Professor in Oceanography at the Institute for the Oceans and Fisheries. "The differences we found in the fatty acid composition of zooplankton species in this study tell us that climate driven changes in the composition of zooplankton communities may be an important factor in salmon declines."

The growth and survival of fish are conditioned by the nutritional quality of their food, and the fish that grow quickly during early life stages are more likely to reproduce. This study identified a need for better monitoring of nutrition in the ocean, and further research to understand the nutritional pathways that connect the base of the food-web to the rest of the ecosystem.

"By determining the availability of high-quality prey for these commercially important groups of fish we can improve estimates of herring and salmon productivity and help inform fisheries management of these species in the region," said Costalago. "Routine surveying of the plankton throughout the Strait of Georgia, using the tools our study described, may be useful in assessing ecosystem health and guiding management of economically and ecologically important marine populations."

Credit: 
University of British Columbia

Early childhood intervention programs may reap benefits across generations

Youth programs designed to prevent drug use and delinquency and support healthy development can reap lasting benefits not only for participants, but also for their kids, according to a decades-long study published June 10 in the Journal of the American Medical Association (JAMA) Pediatrics.

"This is the first published study to show that a broadly implemented, early childhood prevention program can have positive effects on the next generation," said lead author Karl Hill, director of the Problem Behavior and Positive Youth Development Program at the University of Colorado Boulder. "Previous studies have shown that childhood interventions can demonstrate benefits well into adulthood. These results show that benefits may extend into the next generation as well."

For the study, Hill and collaborators at the University of Washington assessed children whose parents had participated in a program called Raising Healthy Children (RHC) from first through sixth grades in the 1980s.

Set in public elementary schools serving high-crime neighborhoods in Seattle, the program was among the first to test the idea that problem behaviors could be prevented with specialized training for teachers, parents and young children.

"Teachers were taught how to better manage their classrooms, parents were taught to better manage their families, and kids were taught how to better manage their emotions and decision making," said Hill, who got involved in the research, known as the Seattle Social Development Project, in the 1990s while a professor at UW.

Previous studies have shown that by age 18 those who had gone through the program demonstrated better academic achievement than non-participants and were less likely to engage in violence, substance use or unsafe sex. By their 30s, they had gone farther in school, tended to be better off financially, and scored better on mental health assessments.

School programs that pay it forward

"We started thinking, if they are growing up to be healthier adults, maybe they are also better parents and maybe we can measure that impact on their kids," said Hill.

Beginning in 2002, the researchers started following the first-born children of program participants via questionnaires for their teachers and parents and, beginning at age 6, annual interviews with the children.

A total of 182 kids were studied for the new paper, including 72 whose parents had gone through the program and 110 whose parents had not.

Those whose parents had participated in RHC had fewer developmental delays in the first five years of life, fewer behavior problems, fewer symptoms of attention deficit hyperactivity disorder (ADHD) and better cognitive, academic and emotional maturity in the classroom. They were also significantly less likely to report using drugs or alcohol as a teenager.

"We already know that if you can prevent kids from getting involved in the criminal justice system, engaging in underage drinking and drug use, and experiencing depression and anxiety, you can save governments and families a lot of money," said co-author Jen Bailey, assistant director of the Social Development Research Group at University of Washington. "Our results suggest these programs, by delivering cross-generational effects, may be working even better than we thought."

Hill, a psychology and neuroscience professor with the Institute of Behavioral Science, notes that children whose parents had gone through the program in the 80s also showed less "oppositional defiance" and "externalizing behaviors"--two common precursors to serious violence later in life. This suggests such interventions could play a role in stemming the tide of school violence.

The researchers caution that the study was a non-randomized controlled trial set in only one region of the country and needs to be replicated before broad conclusions can be drawn.

But amid a pandemic, when youth depression and anxiety are on the rise but budgets are being slashed and lawmakers may be tempted to give prevention lower priority, Hill hopes the findings send a message.

"By investing in kids now and continuing to invest in them, we could be making generations to come more resilient for when the next national emergency comes around."

Credit: 
University of Colorado at Boulder

New study: Chemists at the University of Halle are able to induce uniform chirality

Chirality is a fundamental property of many organic molecules; it means that a chemical compound can exist not in just one form, but in two mirror-image forms. Chemists at Martin Luther University Halle-Wittenberg have now found a way to spontaneously induce chirality in crystalline, liquid-crystalline and liquid substances, without requiring any external influence. The findings could be significant for the development of new active substances and for materials science. The study was recently published in Chemical Science, an international journal published by the Royal Society of Chemistry.

Chirality is found in almost all molecules occurring in nature. "Molecules are spatial arrangements of interconnected atoms. Many molecules, however, have not only one form, but at least two," explains Professor Carsten Tschierske, a chemist at MLU. When these forms are mirror images of each other it is called chirality.

Both mirror-image forms are produced in equal numbers during normal chemical reactions in the laboratory. "However, things occur differently in nature: carbohydrates, amino acids and nucleic acids only have one dominant form," explains Tschierske. And with good reason: for example, nucleic acids carry information about our DNA. Even the slightest changes to our genetic material can lead to serious diseases. "If each nucleic acid had two forms, the structure of our DNA would be chaotic because there would be too many possible variations. Life as we know it would be impossible," states Tschierske.

The exact process that once created the uniform chirality in these molecules is still unknown. Furthermore, it was long assumed that mixtures of mirror-image molecules can only separate spontaneously in crystalline materials. However, in a study published in "Nature Chemistry" in 2014, Tschierske's team was able to show that this phenomenon of chiral cleavage can also be observed in liquids. "This is significant because the origins of life are found in liquid aqueous systems," explains the chemist.

In this new study, his team went one step further. The researchers found a way not only to generate chirality in liquids, but also to specifically transfer it to liquid-crystalline and crystalline materials without incurring any losses. To do this, the scientists used benzil, a molecule that is normally achiral (in other words, superimposable on its mirror image) but can be twisted in such a way as to make it chiral. "We already knew that benzil could crystallize in a uniform chiral shape," says Tschierske. By modifying this molecule, the researchers were able to spontaneously generate molecules with uniform chirality even in a liquid state - and to maintain this state during conversions. "These findings contribute to our understanding of the formation of uniform biochirality. At the same time, our approach can also be used to synthesize chiral molecules and materials - without requiring expensive chiral precursors," explains Tschierske.

The study conducted in Halle contributes to our understanding of how uniform biochirality might have developed millions of years ago. At the same time, it provides new insights into how chirality can be spontaneously generated. There is a broad range of applications: for example, chiral substances can be used as active ingredients in medicine. The research findings could also be used in a wide variety of materials, for example in optical information processing.

Credit: 
Martin-Luther-Universität Halle-Wittenberg

Novel computer-assisted chemical synthesis method cuts research time and cost

image: An illustration displaying the novel computer-assisted chemical synthesis method.

Image: 
WPI-ICReDD

Hokkaido University scientists have succeeded in synthesizing an α,α-difluoroglycine derivative, a type of α-amino acid, based on a reaction path predicted by quantum chemical calculations. This novel method, combining experimental and computational chemistry, could transform the development of new chemical reactions.

In addition to being the basic constituents of peptides and proteins in our body, α-amino acids are essential for our daily life. They are used for nutritional supplements, food additives and many other products. While natural α-amino acids can be cheaply synthesized through conventional fermentation processes, non-natural α-amino acids, which could add new properties to peptides and proteins, are generally made through chemical syntheses.

In the current study published in Chemical Science, a group of scientists at the university's Institute for Chemical Reaction Design and Discovery (WPI-ICReDD) focused on the chemical synthesis of α,α-difluoroglycine, a non-natural fluorinated α-amino acid whose fluorine atoms could enhance metabolic stability and biological activity. However, an effective synthesis method for this amino acid had been elusive.

Designing new reactions based on conventional organic synthesis requires much trial and error in experiments, along with the insights of expert chemists. Consequently, developing an innovative reaction has required a huge amount of time and money.

To overcome this problem, WPI-ICReDD, the university's new research hub, adopted its core technology, the Artificial Force Induced Reaction (AFIR) method. AFIR is a computational method that applies virtual intermolecular or intramolecular forces to perform systematic searches for chemical reaction pathways. The group applied the AFIR method to conduct a so-called retrosynthetic analysis, which uses quantum chemical calculations to find the decomposition paths of a desired product and then proposes synthetic routes by reversing them.
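The quantum chemical pathway search itself is far beyond a short example, but the reverse-direction logic of retrosynthetic analysis can be illustrated with a toy graph search over a hand-written decomposition table. The difluoroglycine entry below mirrors the three-component combination reported in the study; the CF2Br2 precursor step is a hypothetical extra level added purely for illustration:

```python
from collections import deque

# Toy "decomposition table": product -> list of precursor sets.
# In the real AFIR workflow these edges come from quantum chemical
# pathway searches, not from a hand-written table.
DECOMPOSITIONS = {
    "difluoroglycine": [("amine", "difluorocarbene", "CO2")],
    "difluorocarbene": [("CF2Br2",)],  # hypothetical precursor step
}
STARTING_MATERIALS = {"amine", "CO2", "CF2Br2"}

def retrosynthesize(target):
    """Breadth-first search from the target back to starting materials.

    Returns a list of (product, precursors) decomposition steps that
    reduces the target entirely to known starting materials. Raises
    KeyError if a compound has no known decomposition.
    """
    steps = []
    queue = deque([target])
    while queue:
        compound = queue.popleft()
        if compound in STARTING_MATERIALS:
            continue
        precursors = DECOMPOSITIONS[compound][0]  # take first known path
        steps.append((compound, precursors))
        queue.extend(precursors)
    return steps

for product, parts in retrosynthesize("difluoroglycine"):
    print(product, "<-", " + ".join(parts))
```

Reading the printed steps bottom-up gives a candidate forward synthesis; in the actual study, each candidate step is additionally scored by the calculated yield before being handed to experimental chemists.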

They searched for decomposition pathways of α,α-difluoroglycine and selected a set of three simple compounds: amine, difluorocarbene, and carbon dioxide. Their calculations predicted that these three compounds could produce the target compound in 99.99% yield.

After fine-tuning various reaction conditions, the experimental chemists successfully synthesized an α,α-difluoroglycine derivative in 80% yield. "We first failed to obtain the target product, but we were still confident that the synthesis would succeed because the computational prediction was so concrete. It encouraged us to go on," says Tsuyoshi Mita of the group. "It took only two months to achieve the synthesis, which is significantly faster than a typical development process. It saves significant research time because a computer predicts the feasibility of synthesizing target compounds and their chemical yields."

"The AFIR method is instrumental in conducting next-generation organic syntheses," says Satoshi Maeda, who devised the method. "We expect our method will be applied to effectively producing fine chemicals, functional materials, and discovering new drugs." The group is making a database of pathways found through the AFIR method and hopes to use it with information scientists to accelerate the development of novel chemical reactions.

Credit: 
Hokkaido University

Scientists analyze spatio-temporal differentiation of spring phenology in China from 1979 to 2018

image: Three spatial patterns of first bloom date (FBD) in China and their temporal dynamics

Image: 
©Science China Press

Spatial and temporal differentiation are important features in the study of ecosystem phenology. Plant phenology examines the life-cycle phases of plants driven by environmental factors, and studying their long-term patterns and dynamics is significant for revealing how vegetation in different regions of China responds to global change. By partitioning data elements into groups and considering them at an abstract level, clustering is one of the most widely used methods to study the spatio-temporal differentiation of phenology. However, differentiation identified using spatial clustering alone cannot capture time-varying behavior in phenology, and vice versa. A recent study used co-clustering analysis to explore the spatio-temporal differentiation of spring phenology in China.

The paper, recently published in SCIENCE CHINA Earth Sciences as "Spatio-temporal differentiation of spring phenology in China driven by temperatures and photoperiod from 1979 to 2018", was written by Professor Cheng Changxiu, Professor Song Changqing and Dr. Wu Xiaojing of Beijing Normal University. The researchers analyzed a long-term dataset of first bloom dates (FBD) in China using the Bregman block average co-clustering algorithm with I-divergence (BBAC_I) and the Extended Spring Indices (SIx), and the results revealed the spatio-temporal differentiation of spring phenology in China over the past 40 years (1979~2018).
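The BBAC_I algorithm groups rows and columns simultaneously so that each (row-cluster, column-cluster) block is well summarized by a single value. A minimal sketch of that idea, using the squared-error member of the Bregman family instead of I-divergence and an invented site-by-year matrix, might look like this:

```python
def block_means(M, rows, cols, kr, kc):
    """Mean of each (row-cluster, column-cluster) block of matrix M."""
    sums = [[0.0] * kc for _ in range(kr)]
    cnts = [[0] * kc for _ in range(kr)]
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            sums[r][c] += M[i][j]
            cnts[r][c] += 1
    return [[sums[r][c] / cnts[r][c] if cnts[r][c] else 0.0
             for c in range(kc)] for r in range(kr)]

def cocluster(M, kr, kc, iters=10):
    """Alternating block-average co-clustering (squared-error variant)."""
    n, m = len(M), len(M[0])
    rows = [i * kr // n for i in range(n)]  # simple contiguous init
    cols = [j * kc // m for j in range(m)]
    for _ in range(iters):
        mu = block_means(M, rows, cols, kr, kc)
        # Reassign each row to the row-cluster with the lowest squared error.
        for i in range(n):
            rows[i] = min(range(kr), key=lambda r: sum(
                (M[i][j] - mu[r][cols[j]]) ** 2 for j in range(m)))
        mu = block_means(M, rows, cols, kr, kc)
        # Reassign each column likewise.
        for j in range(m):
            cols[j] = min(range(kc), key=lambda c: sum(
                (M[i][j] - mu[rows[i]][c]) ** 2 for i in range(n)))
    return rows, cols

# Invented 4x4 "site x year" matrix: sites 0 and 2 share one regime,
# sites 1 and 3 another; the first two years differ from the last two.
M = [[1, 1, 9, 9],
     [5, 5, 2, 2],
     [1, 1, 9, 8],
     [5, 5, 2, 2]]
rows, cols = cocluster(M, kr=2, kc=2)
print(rows, cols)  # prints [0, 1, 0, 1] [0, 0, 1, 1]
```

The resulting row clusters play the role of the paper's spatial patterns and the column clusters that of its temporal stages; the actual BBAC_I uses I-divergence as its loss, which this squared-error sketch does not reproduce.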

The analysis first identified three spatial patterns of FBD in China: Spatial patterns 1~3 represent three typical patterns of FBD as spring moves from the south to the north of China. Their temporal dynamics over the 40 years (1979~2018) can be divided into three periods (Figure 1): (1) Early stage (1979~1996): spatial patterns of FBD in China varied between Spatial pattern 1 and Spatial pattern 2, with areas in Jiangxi, northern Xinjiang and middle Inner Mongolia showing a fluctuating increase in spring onset dates; (2) Middle stage (1996~2012): spatial patterns of FBD in China varied between Spatial pattern 2 and Spatial pattern 3, with areas in Fujian, Hunan and eastern Heilongjiang exhibiting a fluctuating increase in spring onset dates; (3) Late stage (2013~2018): spatial patterns of FBD in China stayed at Spatial pattern 3.

The research also identified 15 temporal patterns of spring phenology over the study period and their spatial delineation in China (Figure 2). These temporal patterns fall into five categories according to the changing state of FBD: stable; first stable, then fluctuating; first fluctuating, then stable; frequently fluctuating; and drastically fluctuating. According to the spatial delineation, most areas of China belong to the stable state (green region), while northern Guizhou, Hunan and southern Hubei (blue region) were first stable (1979-1997) and then fluctuating, and areas of eastern Sichuan, southeastern Hunan and northern Jiangxi (orange region) were first fluctuating (1979-1997) and then stable.

The results of this research reveal the spatio-temporal differentiation of spring phenology in China over the past 40 years by displaying the spatial patterns of FBD and their temporal dynamics, together with the temporal patterns of FBD and their spatial delineation. The findings also offer guidance for the design of observational sites in the Chinese Phenological Network.

Credit: 
Science China Press

NASA calculates soaking rainfall in Tropical Depression Cristobal

image: The GPM's core satellite passed over Cristobal on June 8 at 7:46 a.m. EDT (1146 UTC). GPM found heaviest rainfall (orange) north of center falling at rates of 1 inch (25 mm) per hour over northern Louisiana, southern Arkansas and northern Mississippi. Light rain appears around the entire system (light blue), falling at less than 0.2 inches (less than 5 millimeters) per hour.

Image: 
NASA/NRL

When Tropical Storm Cristobal made landfall in southern Louisiana yesterday, June 7, it dropped a lot of rain, and it continues to do so as it weakens and moves inland. NASA's GPM satellite provided a look at the rainfall rates in the storm, now a depression.

The Global Precipitation Measurement mission or GPM core satellite passed over Cristobal on June 8 at 7:46 a.m. EDT (1146 UTC). GPM found the heaviest rainfall north of the center, falling at rates of 1 inch (25 mm) per hour over northern Louisiana, southern Arkansas and northern Mississippi. Light rain appeared around the entire system, falling at less than 0.2 inches (5 millimeters) per hour.
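The inch/millimeter rate pairs quoted above follow from the definition 1 inch = 25.4 mm; a quick sanity check:

```python
MM_PER_IN = 25.4

def in_per_hr(rate_mm_per_hr):
    """Convert a rainfall rate from mm/hour to inches/hour."""
    return rate_mm_per_hr / MM_PER_IN

print(round(in_per_hr(25), 2))   # heaviest rain: ~1 inch per hour
print(round(in_per_hr(5), 2))    # light rain: ~0.2 inches per hour
```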

Large Rainfall Totals Expected

The heavy rainfall rates that GPM observed are expected to add up to large amounts of rain on the ground, and the National Hurricane Center provided an estimate in the latest forecast on June 8, 2020.

NHC said, "Cristobal is expected to produce storm total rainfall accumulations of 5 to 10 inches across portions of the central to eastern Gulf Coast into the Lower Mississippi Valley, with isolated amounts to 15 inches.  Rainfall totals of 2 to 4 inches with local amounts to 6 inches are expected across portions of the mid-to-Upper Mississippi Valley and Northern Plains near and in advance of Cristobal. This rainfall has led to flash flooding and forecast widespread river flooding across portions of the central Gulf Coast into the Lower Mississippi Valley.  Smaller streams across southeast Louisiana and southern Mississippi have begun to rise and are forecast to crest mid-week. New and renewed significant river flooding is possible across the mid and upper Mississippi Valley."

Gusty winds and isolated tornadoes are possible today with the depression.  Gusty winds could also occur Tuesday night and Wednesday over portions of the Midwest and western Great Lakes as Cristobal becomes an extratropical low. Isolated tornadoes are possible today and tonight across Mississippi, Alabama, southeastern Louisiana, eastern Arkansas, western Tennessee, and southeastern Missouri. Ocean swells generated by Cristobal are still affecting portions of the northern and eastern Gulf coast and are likely causing life-threatening surf and rip current conditions.

Cristobal's Status on Monday, June 8, 2020

At 8 a.m. EDT (1200 UTC), the center of Tropical Depression Cristobal was located near latitude 31.8 degrees north and longitude 91.6 degrees west. That puts the center of circulation about 50 miles (80 km) south-southeast of Monroe, Louisiana. The depression is moving toward the north-northwest near 10 mph (17 km/h) and this motion should continue today. Maximum sustained winds were 35 mph (55 km/h) with higher gusts. The estimated minimum central pressure based on surface observations is 994 millibars.

Cristobal's Forecast Path

NHC forecasters expect Cristobal to weaken through Tuesday, although it is then expected to strengthen somewhat as it becomes an extratropical low pressure area Tuesday night [June 9] and Wednesday [June 10].

NHC expects a turn toward the north tonight, followed by a faster north-northeast motion Tuesday and Wednesday. On the forecast track, the center of Cristobal should move through northeastern Louisiana today, through Arkansas and eastern Missouri tonight and Tuesday, and reach Wisconsin and the western Great Lakes by Wednesday.

Tropical cyclones/hurricanes are the most powerful weather events on Earth. NASA's expertise in space and scientific exploration contributes to essential services provided to the American people by other federal agencies, such as hurricane weather forecasting.

GPM is a joint mission between NASA and the Japan Aerospace Exploration Agency, JAXA. The Suomi NPP satellite is a joint mission with NASA and NOAA.

For updated forecasts, visit: http://www.nhc.noaa.gov

Credit: 
NASA/Goddard Space Flight Center

Army researchers enhance communications for multi-agent teaming

Army researchers are collaborating to enhance multi-agent teaming capabilities for the Soldier that will lead to improved situational awareness and communication capabilities on the battlefield.

Scientists from the U.S. Army Combat Capabilities Development Command's Army Research Laboratory have improved distributed algorithms for multi-agent coordination and provided a framework for information collection in limited-bandwidth scenarios, enhancing battlefield situational awareness and communication capabilities for the Soldier.

The team presented the research virtually at the 45th International Conference on Acoustics, Speech, and Signal Processing.

The lab's Drs. Jemin George and James Hare are working at the forefront of this research.

George's research, conducted in collaboration with North Carolina State University and the laboratory's postdoctoral fellow Dr. Anjaly Parayil, is applied to collaborative target tracking and directed communication. It aims to mature distributed algorithms for the autonomous coordination of intelligent multi-agent systems, providing better situational awareness and communication capabilities to Soldiers.

Presented in two separate papers, this research looks at the problem of autonomous multi-agent coordination for directed communication and persistent surveillance.

The first paper, A Model-Free Approach to Distributed Transmit Beamforming, looks at how multi-agent systems, acting as distributed antenna array elements, coordinate their position and transmitted signal phase so that the broadcasted signal coherently adds up in the desired direction while canceling in others.

"This multi-agent approach allows for establishing a directed communication channel without having access to a physically connected antenna array," George said. "Such communication capability is a severely needed covert communication technology for our Soldiers."
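The coherent-addition idea behind distributed transmit beamforming can be sketched numerically. This is a textbook one-dimensional phased-array model, not the team's model-free closed-loop algorithm; the linear geometry, half-wavelength spacing and steering math are illustrative assumptions.

```python
import numpy as np

def array_power(positions, steer_deg, probe_deg, wavelength=1.0):
    """Far-field power of N isotropic emitters on a line, each
    pre-phased so its signal adds coherently toward steer_deg."""
    k = 2 * np.pi / wavelength
    # residual phase of each emitter as seen from the probe direction
    phase = k * positions * (np.sin(np.deg2rad(probe_deg))
                             - np.sin(np.deg2rad(steer_deg)))
    return abs(np.exp(1j * phase).sum()) ** 2

agents = np.arange(8) * 0.5            # 8 agents, half-wavelength spacing
print(array_power(agents, 30, 30))     # on-axis: coherent gain N^2 = 64
print(array_power(agents, 30, -40))    # off-axis: signals largely cancel
```

The N-squared on-axis gain versus near-cancellation elsewhere is what makes the channel both power-efficient and hard to intercept off-axis.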

The second paper, Distributed Tracking and Circumnavigation Using Bearing Measurements, looks at the problem of tracking a maneuvering target using multiple unmanned aerial vehicles.

"Studies have shown that overhead surveillance provides minimal awareness in a dense urban terrain, and 75% of air missions fail to drop ordnance in dense urban areas due to fleeting targets," George said. "We hope to change this by developing techniques that would allow multiple UAVs to coordinate their efforts such that they collectively provide persistent eyes on a maneuvering target."

Though there are numerous research efforts on distributed beamforming, he said, almost all of them involve a physically connected antenna array and assume a well-known model for how the environment affects the transmitted signal.

"There is very little prior work on a multi-agent approach to the problem," George said. "Our multi-agent approach is a closed-loop solution to the problem that doesn't require knowing the channel model, which is a model for how the environment influences the transmitted signal. Rather, the closed-loop approach relies on the measurement feedback. Similarly, our distributed tracking work does not make any restricting assumptions regarding the target motion."

According to George, although these are distributed techniques, they do not require constant communication among the participating agents as typical distributed algorithms do. That makes them especially useful on a contested, communication-constrained battlefield.

"Our efforts in distributed beamforming and directed communication directly support the modernization of Army network technologies," George said. "This covert/directed communication capability could be a game changer in a contested environment. Our efforts in distributed tracking can be leveraged to support various technology challenges in Long Range Precision Fires."

To be able to precisely strike a moving target downrange, he said, Soldiers need eyes on the target, and that is exactly what this current effort offers. The distributed tracking effort also has the potential to provide early indicators and warnings as well as better situational awareness to the decision maker.

Hare's related research, sponsored by an Office of the Secretary of Defense's Laboratory University Collaboration Initiative led by the lab's Dr. Lance Kaplan in collaboration with the Massachusetts Institute of Technology, focuses on a distributed hypothesis testing algorithm that accounts for limited training data and minimizes the number of messages communicated among a network of agents, and is applicable to scenarios with high communication costs.

According to Hare, transmitting messages can cost a large amount of energy, and if the agents are battery limited, the amount of energy consumed must be limited to extend the lifetime of the agent. Another aspect is when there is not enough bandwidth to transmit the entire message. This results in the agents having to transmit multiple messages, which is costly over time.

"This is a very theoretical effort that is investigating fundamentally new ideas to incorporate model uncertainty in distributed hypothesis testing," Hare said.

This particular research effort developed an algorithm that allows a network of social agents to identify the fundamental true state of the world based on a stream of local/private observations and repeated social interactions in a bandlimited environment.

"Each agent collectively debates on which of their predefined models, i.e., hypotheses, best matches the statistical distribution of observations privately observed for applications of situational awareness," Hare said. "However, unlike all the other methods, we assume that the agents do not know the exact statistical distribution of their predefined models and must estimate them based on a small set of training data, resulting in statistical models that are highly uncertain."

The overall goal of the network, he said, is to collectively determine which hypothesis best matches the true state of the world in a distributed manner.

Additionally, Hare and the team proposed a social learning rule that allows the agents to collectively evaluate each hypothesis, while minimizing the number of messages communicated during each time step, due to a bandwidth-limited environment with high communication cost.

"This proposed solution significantly reduces the overall network's communication cost, while allowing the agents to measure their uncertainty in the hypothesis that is consistent with the true state of the world," Hare said.
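A minimal sketch of distributed hypothesis testing via non-Bayesian social learning conveys the flavor of such algorithms. This is a standard log-linear belief-mixing rule from the literature, not the team's bandwidth-limited, uncertainty-aware method; the coin-flip observation model, network weights and step count are all assumptions.

```python
import numpy as np

def social_learning(streams, lik, W, steps):
    """Each agent mixes its neighbors' log-beliefs (weights W) and adds
    its own private log-likelihood; beliefs concentrate on the
    hypothesis matching the true observation distribution."""
    n_agents, n_hyp = len(streams), lik.shape[1]
    log_b = np.zeros((n_agents, n_hyp))           # uniform initial beliefs
    for t in range(steps):
        obs = np.array([s[t] for s in streams])   # one observation per agent
        log_b = W @ log_b + np.log(lik[obs])      # social mixing + private data
        log_b -= log_b.max(axis=1, keepdims=True) # numeric stability
    b = np.exp(log_b)
    return b / b.sum(axis=1, keepdims=True)

# Two hypotheses about a coin: h0 says P(heads)=0.7 (true), h1 says 0.3.
lik = np.array([[0.3, 0.7],    # P(tails | h0), P(tails | h1)
                [0.7, 0.3]])   # P(heads | h0), P(heads | h1)
rng = np.random.default_rng(0)
streams = (rng.random((3, 200)) < 0.7).astype(int)  # 3 agents, 200 flips each
W = np.full((3, 3), 1 / 3)                          # fully connected network
beliefs = social_learning(streams, lik, W, 200)
print(beliefs[:, 0])   # every agent's belief in the true hypothesis h0
```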

According to Hare and Kaplan, this research supports the Army Modernization Priorities as the researchers move forward in developing uncertainty-aware artificial intelligence/machine learning algorithms that support Soldier/agent collaboration in applications that involve situation awareness.

"There are examples of improperly confident AI systems providing disastrous recommendations when operating in conditions different from how they were trained, e.g., autonomous car accidents such as the Uber incident in Arizona," Kaplan said. "The Army problem amplifies the issue, and it will be critical for Soldiers to understand the limitations of AI recommendations."

This research effort is fundamental in that it is the first study that the researchers are aware of to consider epistemic uncertainty for distributed hypothesis testing.

"We do hope to eventually develop a new framework of hypothesis testing for general autonomous recognition with a team of distribution agents," Hare said. "This research is so important because the future war effort will consist of teams of autonomous agents identifying the situation at hand to better inform Soldiers."

The next step in this research effort is to generalize the problem for various machine learning techniques, understand the tradeoffs of model calibration and data fusion for decision making, and identify applications that meet the Army modernization priorities.

"With finite resources, errors due to limited information are inevitable," Hare said. "It is important for the decision maker to understand these errors to decide whether to take a kinetic action, collect more observations, or improve AI models in light of tight timelines to take action. The technology of sensor networks will benefit from this effort since the agents, i.e., sensors, possess a limited power supply and cannot waste their energy communicating frequently. Incorporating algorithms that minimize communication cost can significantly extend the overall network's lifetime."

At the Army's corporate laboratory, the researchers' primary objective is to operationalize science for transformational overmatch.

"The research efforts we discussed here are our humble attempts to push the current state of the art to enable disruptive technologies for transformational overmatch," George said. "However, the current efforts are still in their infancy, meaning significant experimental and test and evaluation efforts are needed to mature these technologies so that they can be integrated into existing as well as future multi-agent platforms."

Credit: 
U.S. Army Research Laboratory

New algorithm helps select patients for urgent surgery or chemotherapy during pandemic

A new approach to better select breast cancer patients in need of urgent surgery or chemotherapy during the COVID-19 pandemic has been developed by researchers at The Royal Marsden and the Breast Cancer Now Research Centre at The Institute of Cancer Research, London, in collaboration with colleagues in the UK, Germany and US.

The innovative algorithm, using data from multiple international trials, can identify postmenopausal patients with primary ER+HER2- breast cancer (c.70% of cases) who have less endocrine-sensitive tumours and who should be prioritised for early surgery or neoadjuvant chemotherapy.

The COVID-19 pandemic has created an international need to prioritise cancer surgery and chemotherapy for the most urgent patients, to protect staff and vulnerable patients. While patients diagnosed with triple-negative and HER2-positive breast cancer have still been going forward for urgent surgery or chemotherapy, for a large group of patients deferring these treatments and prescribing neoadjuvant endocrine therapy (NeoET), i.e. treatment that reduces stimulation of the disease by oestrogen without surgical removal of the breast tumour, has been identified as the best course of treatment.

Development of the new treatment algorithm was led in the UK by researchers working in the Ralph Lauren Centre for Breast Cancer Research at The Royal Marsden and the Breast Cancer Now Toby Robins Research Centre at The Institute of Cancer Research (ICR).

Professor Mitch Dowsett, Head of the Ralph Lauren Centre for Breast Cancer Research at The Royal Marsden and Professor of Biochemical Endocrinology at the ICR, led the collaboration published in NPJ Breast Cancer this week. The work highlighted that while 85% of patients in whom treatment by surgery is deferred would be safe to remain on NeoET treatment for up to six months, 15% can be identified who are resistant to this treatment and risk disease spread.

Professor Dowsett said: "NeoET can block the tumour from growing successfully for many women but for one in six who are resistant there is a risk the tumour will continue to grow and spread elsewhere.

"By accessing unpublished results from clinical trials involving thousands of patients, with colleagues here and abroad we have developed a new way of directing patients' treatment in this global crisis. Using the data on oestrogen receptor, progesterone receptor and proliferation from the tumour of newly diagnosed patients, our simple new calculator can be used by fellow clinicians worldwide to immediately identify the best course of treatment for about 80% of their patients.

"Then, by drawing upon our earlier research, we can help the other 20% by measuring Ki67 (a protein that measures the number of cells dividing in the tumour) a few weeks after starting their NeoET. Overall, we can identify the 15% of the women who are most at risk of relapsing on just NeoET treatment and should be prioritised for surgery or neoadjuvant chemotherapy.

"The speed and openness of this collaboration to help our patients as rapidly as possible has been unparalleled in my 30 years' experience."

Consultant Breast Surgeon at The Royal Marsden, Peter Barry said: "It is important we treat as many patients who require urgent treatment/surgery as safely as possible during the COVID-19 pandemic. This innovative algorithm will help clinicians offer the best treatment for their patients during these unprecedented times. I have already identified patients who would have been deferred to receive NeoET, that may well have been at risk of progression within the six months."

Baroness Delyth Morgan, Chief Executive at Breast Cancer Now, said: "It's fantastic that this approach could help guide the best possible treatment for thousands of NHS breast cancer patients during the pandemic, and could also now help inform best practice globally.

"This landmark guidance could now help to identify women that must be prioritised for surgery or chemotherapy urgently, and those that could safely be given hormone therapy to delay further treatment during the pandemic. It is a real testament not only to UK science but to the rapid collaboration of researchers globally to help ensure breast cancer patients can get the best possible care while minimising the risks to them at this time."

Credit: 
The Royal Marsden NHS Foundation Trust

Drug researcher develops 'fat burning' molecule

image: Webster Santos (center) and the mitochondrial uncoupler team. Photo courtesy of Webster Santos.

Image: 
Virginia Tech's Webster Santos

Obesity affects more than 40 percent of adults in the United States and 13 percent of the global population. With obesity comes a variety of other interconnected diseases including cardiovascular disease, diabetes, and fatty liver disease, which makes the disease one of the most difficult - and most crucial - to treat.

"Obesity is the biggest health problem in the United States. But, it is hard for people to lose weight and keep it off; being on a diet can be so difficult. So, a pharmacological approach, or a drug, could help out and would be beneficial for all of society," said Webster Santos, professor of chemistry and the Cliff and Agnes Lilly Faculty Fellow of Drug Discovery in the College of Science at Virginia Tech.

Santos and his colleagues have recently identified a small-molecule mitochondrial uncoupler, named BAM15, that decreases the body fat mass of mice without affecting food intake and muscle mass or increasing body temperature. Additionally, the molecule decreases insulin resistance and has beneficial effects on oxidative stress and inflammation.

The findings, published in Nature Communications on May 14, 2020, hold promise for future treatment and prevention of obesity, diabetes, and especially nonalcoholic steatohepatitis (NASH), a type of fatty liver disease that is characterized by inflammation and fat accumulation in the liver. In the next few years, the condition is expected to become the leading cause of liver transplants in the United States.

The mitochondria are commonly referred to as the powerhouses of the cell. The organelle generates ATP, a molecule that serves as the energy currency of the cell, which powers body movement and other biological processes that help our body to function properly.

In order to make ATP, nutrients need to be burned and a proton motive force (PMF) needs to be established within the mitochondria. The PMF is generated from a proton gradient, where there is a higher concentration of protons outside of the inner membrane and a lower concentration of protons in the matrix, or the space within the inner membrane. The cell creates ATP whenever protons pass through an enzyme called ATP synthase, which is embedded in the membrane. Hence, nutrient oxidation, or nutrient burning, is coupled to ATP synthesis.
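As a rough illustration of the gradient's size, the proton motive force can be estimated from textbook values; the numbers below are illustrative assumptions, not measurements from this study.

```python
R, F = 8.314, 96485.0   # gas constant (J/mol/K), Faraday constant (C/mol)
T = 310.0               # body temperature, K

# 2.303*R*T/F converts one pH unit into millivolts (~61.5 mV at 37 C)
mv_per_pH = 2.303 * R * T / F * 1000

# Magnitudes of typical mitochondrial values: membrane potential ~150 mV,
# matrix ~0.8 pH units more alkaline than the intermembrane space.
pmf = 150 + mv_per_pH * 0.8   # proton motive force magnitude, ~200 mV
print(round(mv_per_pH, 1), round(pmf))
```

Uncouplers dissipate part of this roughly 200 mV driving force, which is what obliges the cell to burn extra fuel to restore it.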

"So anything that decreases the PMF has the potential to increase respiration. Mitochondrial uncouplers are small molecules that go to the mitochondria to help the cells respire more. Effectively, they change metabolism in the cell so that we burn more calories without doing any exercise," said Santos, an affiliated member of the Fralin Life Sciences Institute and the Virginia Tech Center for Drug Discovery.

Mitochondrial uncouplers transport protons into the matrix by bypassing ATP synthase, which throws off the PMF. To reestablish the gradient, protons must be exported out of the mitochondrial matrix. As a result, the cell begins to burn fuel at higher than necessary levels.

Knowing that these molecules can change a cell's metabolism, researchers wanted to be sure that the drug was reaching its desired targets and that it was, above all, safe. Through a series of mouse studies, the researchers found that BAM15 is neither toxic, even at high doses, nor does it affect the satiety center in the brain, which tells our body if we are hungry or full.

In the past, many anti-fat drugs would tell your body to stop eating. But as a result, patients would rebound and eat more. In the BAM15 mouse studies, animals ate the same amount as the control group - and they still lost fat mass.

Another side effect of previous mitochondrial uncouplers was increased body temperature. Using a rectal probe, researchers measured the body temperature of mice who were fed BAM15. They found no change in body temperature.

But one issue arises concerning the half-life of BAM15. The half-life, the time it takes for the amount of drug in the body to fall by half, is relatively short in the mouse model. For oral dosing in humans, a much longer half-life is needed.
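The half-life relationship is simple exponential decay; a short sketch shows why a short half-life is a problem for oral dosing (the example half-lives are hypothetical, not BAM15's measured values).

```python
def fraction_remaining(t_hours, half_life_hours):
    """Fraction of a drug dose left after t hours of first-order elimination."""
    return 0.5 ** (t_hours / half_life_hours)

# After 12 hours, a 2-hour half-life leaves ~1.6% of the dose,
# while a 10-hour half-life still leaves ~44%.
print(round(fraction_remaining(12, 2), 3))
print(round(fraction_remaining(12, 10), 3))
```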

Even though BAM15 shows serious potential in mouse models, the drug won't necessarily be successful in humans - at least not this exact molecule.

"We are essentially looking for roughly the same type of molecule, but it needs to stay in the body for longer to have an effect. We are tweaking the chemical structure of the compound. So far, we have made several hundred molecules related to this," said Santos.

The penultimate goal of the Santos lab is to transition the anti-fat treatment from animal models to a treatment for NASH in humans. The lab has used their better compounds in animal models of NASH, which have been proven to be effective as anti-NASH compounds in mice.

Working alongside Santos is Kyle Hoehn, an assistant professor of pharmacology from the University of Virginia and an associate professor of biotechnology and biomolecular sciences at the University of New South Wales in Australia. Hoehn is a metabolic physiology expert who is in charge of conducting the animal studies. Santos and Hoehn have been collaborating for several years now and they even founded a biotech company together.

Co-founded by Santos and Hoehn in 2017, Continuum Biosciences aims to improve the ways in which our bodies burn fuel and fight back against our bodies' ability to store excess nutrients as we age. These promising NASH treatment compounds are licensed by their company and are patented by Virginia Tech.

The company is looking to use mitochondrial uncouplers for more than just obesity and NASH. The molecules also have a unique antioxidant effect that can minimize the accumulation of reactive oxygen species, or oxidative stress, in our bodies, a process that ultimately contributes to neurodegeneration and aging.

"If you just minimize aging, you could minimize the risk of Alzheimer's disease and Parkinson's disease. All of these reactive oxygen species-related or inflammation-related diseases could benefit from mitochondrial uncouplers. So, we could see this heading that way," said Santos.

Credit: 
Virginia Tech

Better detection of a type of ovarian cancer could lead to better treatments

image: Professor Bryan Hennessy is the study's senior author and associate professor at RCSI.

Image: 
RCSI

Scientists have found that a specific type of ovarian cancer could benefit from existing platinum-based chemotherapy and new treatments that target DNA repair, given better testing.

The study, led by researchers from RCSI University of Medicine and Health Sciences, is published in the Journal of the National Cancer Institute and was funded by St Luke's Institute for Cancer Research and the North East Cancer Research and Education Trust (NECRET).

BRCA1 and BRCA2 are genes that prevent cancer by repairing damaged DNA. Inherited mutations in these tumour suppressor genes make someone much more likely to develop cancer in their lifetime, particularly breast and ovarian cancer.

Ovarian cancer patients with a mutated BRCA1 gene live longer when treated with platinum chemotherapy and new DNA repair-targeting treatments than patients without the mutation. This led the researchers to investigate whether these treatments could also benefit other ovarian cancer patients whose BRCA1 gene has been modified in a different way.

Instead of having an inherited mutated BRCA1 gene, some patients have this gene modified in a way that is thought to silence it. The researchers analysed 2,636 patients with ovarian cancer from 15 international studies to see if those with this silenced BRCA1 gene had similar outcomes to those with the mutated gene.

They found that both the mutated and the silenced BRCA1 gene occur in serous ovarian cancers, and that these cancers arise at a younger age than in patients without a mutated or silenced gene.

They also found that patients with the silenced gene displayed faulty DNA repair more often than patients with the mutated gene. However, unlike those with the mutated BRCA1 gene, patients with the silenced gene did not respond better to platinum chemotherapy or have a better prognosis than those with a normally functioning BRCA1 gene.

The researchers attribute this difference to the different methods each study used to detect the silenced gene. The studies that used a specific test found that patients with a silenced BRCA1 gene lived longer on platinum chemotherapy than patients who did not have a silenced or mutated gene.

"We found that the studies that used a specific methylation PCR test showed the results that we would expect for those with truly silenced BRCA1 gene. This suggests that researchers need to refine and standardise the way they test for silencing of this gene," said Roshni Kalachand, an RCSI PhD student and the study's lead author.

"This will enable them to detect 'true' cases of patients that have this gene silenced. Only then will we be able to successfully treat this subgroup of ovarian cancer with drugs targeting DNA repair."

Professor Bryan Hennessy, the study's senior author and associate professor at RCSI, said: "Ovarian cancer ranks among the top ten diagnosed and top five deadliest cancers in most countries. Unfortunately, approximately 80% of patients present at an advanced stage of the disease.

"Therefore, it is critical that clinicians are provided with as many treatment options as possible which can target this disease, both as a stand-alone therapy and in combination with existing therapies."

Approximately 410 women in Ireland each year are diagnosed with ovarian cancer. The main treatment for ovarian cancer is surgery, while other treatments include chemotherapy and radiotherapy.

Credit: 
RCSI

Integrating nanomaterial with light-absorbing molecule powers hydrogen production from water and sun

image: A new photocatalyst consisting of nanoscale metal oxide sheets and a ruthenium dye molecule can generate H2 from water by utilizing visible light.

Image: 
Tokyo Tech

In line with the depletion of fossil fuels and the environmental problems our planet faces due to their combustion, developing technology for clean energy generation is a topic of global interest. Among the various methods proposed to generate clean energy, photocatalytic water splitting is showing much promise. This method utilizes solar energy to split water (H2O) molecules and obtain dihydrogen (H2). The H2 can then be used as a carbon-free fuel or as raw material in the production of many important chemicals.

Now, a research team led by Kazuhiko Maeda at Tokyo Tech has developed a new photocatalyst consisting of nanoscale metal oxide sheets and a ruthenium dye molecule, which works according to a mechanism similar to dye-sensitized solar cells. While metal oxides that are photocatalytically active for overall water splitting into H2 and O2 have wide band gaps, dye-sensitized oxides can utilize visible light, the main component of sunlight (Figure 1). The new photocatalyst is capable of generating H2 from water with a turnover frequency of 1960 per hour and an external quantum yield of 2.4%.
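For context, the two figures of merit just quoted are commonly defined as follows. This sketch uses the standard textbook definitions (two photo-excited electrons per H2 molecule), which may differ in detail from the paper's exact measurement protocol.

```python
def external_quantum_yield(n_h2, n_photons):
    """EQY (%) for H2 evolution: each H2 molecule needs two electrons."""
    return 100.0 * 2.0 * n_h2 / n_photons

def turnover_frequency(mol_h2, mol_catalyst, hours):
    """TOF: moles of H2 evolved per mole of catalytic sites per hour."""
    return mol_h2 / (mol_catalyst * hours)

# e.g. 1.2 H2 molecules per 100 incident photons gives an EQY of 2.4%
print(external_quantum_yield(1.2, 100))
```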

These results are the highest recorded for dye-sensitized photocatalysts under visible light, bringing Maeda's team a step closer to the goal of artificial photosynthesis -- replicating the natural process of using water and sunlight to sustainably produce energy.

The new material, reported in Journal of the American Chemical Society, is constructed from high-surface-area calcium niobate nanosheets (HCa2Nb3O10) intercalated with platinum (Pt) nanoclusters as H2-evolving sites. However, the platinum-modified nanosheets do not work alone, as they do not absorb sunlight efficiently. So a visible light-absorbing ruthenium dye molecule is combined with the nanosheet, enabling solar-driven H2 evolution (Figure 2).

What makes the material efficient is the use of nanosheets, which can be obtained by chemical exfoliation of lamellar HCa2Nb3O10. The high surface area and structural flexibility of the nanosheets maximize dye loading and the density of H2 evolution sites, which in turn improve H2 evolution efficiency. Also, to optimize performance, Maeda's team modified the nanosheets with amorphous alumina, which plays an important role in improving electron transfer efficiency. "Unprecedentedly, the alumina modification of the nanosheets promotes dye regeneration during the reaction without hindering electron injection from the excited-state dye to the nanosheet, the primary step of dye-sensitized H2 evolution," Maeda says.

"Until just recently, it was considered very difficult to achieve H2 evolution via overall water splitting under visible light using a dye-sensitized photocatalyst with high efficiency," explains Maeda. "Our new result clearly demonstrates that this is indeed possible, using a carefully designed molecule-nanomaterial hybrid."

More research still needs to be done, as it will be necessary to further optimize the design of the hybrid photocatalyst to improve the efficiency and long-term durability. Photocatalytic water splitting may be a crucial means of meeting society's energy demands without further harming the environment, and studies like this one are essential stepping stones to reaching our goal of a greener future.

Credit: 
Tokyo Institute of Technology