
HKU team identifies areas of top priority for deep-sea monitoring

image: An international research team collected responses from 112 leading deep-sea scientists around the world regarding deep-sea monitoring, conservation and management.

Image: 
Lisa Levin

To classify the most important ecological and biological components of the deep sea, an international team including Professor Roberto Danovaro of Stazione Zoologica Anton Dohrn Napoli, Italy, and Dr Moriaki Yasuhara of The Swire Institute of Marine Science (SWIMS) and School of Biological Sciences, The University of Hong Kong (HKU), sent a questionnaire-based survey to the world's leading deep-sea scientists. They then analysed the responses received from 112 scientists to create an expert-led list of priorities covering all aspects of deep-sea monitoring, conservation and management.

The results of the survey are presented in a Perspective in the leading journal Nature Ecology & Evolution, identifying key areas on which future conservation and management strategies should be focused. The article highlights priorities for monitoring, including large animals and habitat-forming species like deep-sea corals, and the impact of human activities such as mining on this vulnerable ecosystem.

The deep sea (defined here as below 200-metre depth) represents the largest, but least explored, type of environment, or biome, on Earth. It is home to many ecologically rare, unique and unknown species that are increasingly under multiple threats from industrial deep-sea mining, deep-sea fishing, climate change and plastic pollution.

The results of the survey indicated that habitat-forming species such as deep-sea corals were considered the most important area for conservation efforts. Large and medium-bodied organisms were considered the top priority for biodiversity monitoring in deep-sea habitats. Other equally significant factors for monitoring the deep-sea environment include habitat degradation and recovery as a measure of ecosystem health, and classification of the food-web structure of deep-sea communities to monitor the functioning of whole ecosystems. Identifying shifts in the depth ranges of different species was also a priority for monitoring responses to climate change.

"People tended to think deep sea is immune from human impacts or climate changes but scientific studies have been proving that it is untrue. Indeed, a better and standardised monitoring framework to prepare and manage for future is needed," said Dr Yasuhara. South China Sea is, of course, not an exemption from the threats. SWIMS is an Asian hub of marine conservation and contributes to global deep-sea issues, for example, through this global collaborative paper and also through the ongoing 2nd World Ocean Assessment led by United Nations.

"The results are useful as a guideline for future deep-sea research, conservation and monitoring. The endorsements and adoptions of the proposed deep-sea essential ecological variables by industry, governments and non-governmental organisations would also help guide more sustainable management of oceans," the authors concluded.

Credit: 
The University of Hong Kong

Light burns with new acids

image: Light irradiation of the new PAG, which is inert under ambient conditions, generates a Lewis acid that serves as a versatile catalyst. As the image depicts, the process is like flipping a deck of cards and always coming up with the all-mighty card.

Image: 
Tsuyoshi Kawai

Researchers at Nara Institute of Science and Technology (NAIST) report a photo-acid generator (PAG) that generates Lewis acids with a quantum yield vastly superior to that of PAGs that generate Brønsted acids. The new PAG is based on a photochemical 6π-electrocyclization and is demonstrated to initiate the polymerization of epoxy monomers and to catalyze Mukaiyama aldol reactions.

PAGs are chemical species that release strong acids, either in solution or solid state, upon exposure to light. These acids can then be used to activate various biological and photo-polymer systems.

Most PAGs form Brønsted acids and do so with great efficiency. However, Brønsted acids limit options in terms of substrates and reaction mechanisms, particularly for organic synthesis, when compared to Lewis acids. Some chemical substances, for instance, are easily decomposed by Brønsted acids but not by Lewis acids.

"Lewis acid catalysts are much useful but often unstable and require careful introduction to reaction systems. Preferably, we would induce Lewis acid catalysts remotely, like optical exposure," explains NAIST Associate Professor Takuya Nakashima, one of the lead researchers in the project.

The new PAG depends on the addition of a triflate group to terarylene. The triflate group showed a high propensity to be released from the terarylene upon exposure to UV light.

"We have focused on terarylene because its very high light-sensitivity, compatibility into polymer films and no oxygen/moisture inhibition. We have also found non-linear responses that can be used to greatly enhance the light-sensitivity," says NAIST Professor Tsuyoshi Kawai, another lead researcher. Indeed, the photo-chemical quantum yield was 0.5, which is much higher than standard PAGs.

The resulting Lewis acid was generated without any radical intermediate and persisted for more than 100 days. It was then used to initiate the polymerization of the epoxy monomer SU-8 and to drive Mukaiyama aldol reactions between benzaldehyde and silyl enolates, producing silyl aldols of different stereoisomers that were not accessible with the Brønsted acids formed by PAGs.

"This system opens new opportunities for Lewis-acid reactions. It is the first to generate photo-activated Mukaiyama-aldol reactions," says Kawai.

Credit: 
Nara Institute of Science and Technology

Linguistics: The pronunciation paradox

Learners of foreign languages can hear the errors in pronunciation that fellow learners tend to make, but continue to fall foul of them themselves despite years of practice. A new study from Ludwig-Maximilians-Universitaet (LMU) in Munich shows that everyone believes their own pronunciation to be best.

One of the most difficult aspects of learning a foreign language has to do with pronunciation. Learners are typically prone to specific sets of errors, which differ depending on the learner's first language. For instance, Germans typically have trouble articulating the initial 'th' in English, as evidenced by the classic expression 'Senk ju vor träwelling' familiar to passengers on German railways. Conversely, native speakers of English tend to have difficulty with the German 'ü', which they tend to pronounce as 'u'. Many people laugh at these mistakes in pronunciation, even though they make the same mistakes themselves. But this reaction in itself points to a paradox: It demonstrates that learners register errors when made by others. Nevertheless, the majority of language learners find it virtually impossible to eliminate these typical errors even after years of practice. A study carried out by LMU linguists Eva Reinisch and Nikola Eger, in collaboration with Holger Mitterer from the University of Malta, has now uncovered one reason for this paradox. "Learners have a tendency to overestimate the quality of their own pronunciation," says Reinisch. "As a rule, they believe that their English is better than that spoken by their fellow students at language schools, although they make the same set of errors." This exaggerated assessment of one's own ability is an important factor that helps explain why it is so difficult to learn the sounds of a foreign language.

In the study, the researchers asked 24 female German learners of English to read out 60 short sentences, such as "The family bought a house", "The jug is on the shelf", and "They heard a funny noise". Several weeks later, the same learners were invited back to the lab and asked to listen to recordings of four learners - three others and themselves. Specifically, they were asked to grade the pronunciation of each sentence. In order to ensure that participants would not recognize their own productions, the recordings were manipulated in such a way that the female speakers sounded like male speakers. "This element of the experimental design is crucial. It was essential that none of the listeners would be aware that their own productions were included in the test sample; otherwise their assessments couldn't be taken as unbiased," says Holger Mitterer. The results of this test were unambiguous. In all cases, the listeners rated their own pronunciation as better than that of the other speakers, even though they were unable to recognize that it was their own recording. "We were surprised that the experiment so clearly pointed to the significance of overestimation of one's own abilities in this context," says Reinisch.

There are several possible explanations for these findings. Previous research has shown that familiar accents are easier to understand than accents that are less familiar. "One is best acquainted with the sound of one's own voice, and has no difficulty understanding it," says Reinisch, who is at LMU's Institute of Phonetics and Language Processing. "Perhaps this familiarity leads us to regard our pronunciation as being better than it actually is." Another possible contributory factor is what is known as the 'mere exposure' effect. This term refers to the fact that we tend to rate things with which we are more familiar - such as the sound of our own voice - as more congenial.

The results of the study underline the importance of external feedback in language courses, because it increases learners' awareness of deficits in language production and comprehension. "As long as we believe that we are already pretty good, we are not going to put in more effort to improve," Reinisch points out. A lack of feedback increases the risk of what researchers refer to as 'fossilization': learners feel that they have already mastered the unfamiliar articulation patterns in the new language, although that is in fact not the case. They therefore see no reason why they should invest more time in improving their pronunciation. The authors of the new study are not likely to fall into this sort of error. They are already considering ways to improve the situation with the aid of apps that generate the necessary external feedback - irrespective of how users rate their own performance. (PLOS ONE, 2020)

Credit: 
Ludwig-Maximilians-Universität München

In vitro organ model research trends

image: Published 24 times per year, the flagship journal provides a fundamental understanding of structure-function relationships in normal and pathologic tissues, with the ultimate goal of developing biological substitutes.

Image: 
Mary Ann Liebert, Inc., publishers

New Rochelle, NY, February 7, 2020--Two distinct approaches are predominantly used to recapitulate physiologically relevant in vitro human organ models. Organoids use stem cells to grow self-assembled replica organs through directed differentiation, whereas the organ-on-a-chip approach relies on microfluidics and carefully controlled, 3D-printed architecture and assembly. The increasing pace of discovery makes it difficult to assess and compare each strategy's overall influence, but a new study using bibliometric analysis of nearly 3,000 research and review articles illuminates research trends. This work is reported in Tissue Engineering, a peer-reviewed journal from Mary Ann Liebert, Inc., publishers. The article is available free on the Tissue Engineering website through March 7, 2020.

In "Global Trends of Organoid and Organ-on-a-chip in the Past Decade: A Bibliometric & Comparative Study", Pu Chen, PhD, Wuhan University School of Basic Medical Sciences, China, and coauthors present the results of their literature-based investigation. The authors identify research hotspots and their evolution, different scientific areas being influenced, and global trends for both organoid and organ-on-a-chip models. A thorough record is included of the most cited studies, influential authors and institutions, and the most relevant journals for each technique. Ultimately, the authors provide a useful framework for appreciating the unique trajectory of both approaches and also reveal a growing trend of combining the two methods.

"Organoids and Organ-on-a-chip mimic the cellular organization and physiology of native tissue," says Tissue Engineering Methods Co-Editor-in-Chief John A. Jansen, DDS, PhD, Professor and Head, Radboud University Medical Center, Netherlands. "Therefore, they are one of the major breakthrough technology platforms for tissue engineering studies."

Credit: 
Mary Ann Liebert, Inc./Genetic Engineering News

Supervisors share effective ways to include people with disabilities in the workplace

image: Dr. Phillips is a research assistant professor at the University of New Hampshire Institute on Disability, where her research addresses issues of disability and employment.

Image: 
UNHIOD

East Hanover, NJ, February 7, 2020. A multidisciplinary team of researchers at Kessler Foundation and the University of New Hampshire, Institute on Disability (UNH-IOD), has authored a new article that describes the practices that employers use to facilitate the inclusion of employees with disabilities in their workplaces. "The effectiveness of employer practices to recruit, hire, and retain employees with disabilities: Supervisor perspectives" (DOI: 10.3233/JVR-191050) was published by the Journal of Vocational Rehabilitation on November 19, 2019.

Link to abstract: https://content.iospress.com/articles/journal-of-vocational-rehabilitation/jvr191050

The article is based on initial findings from the 2017 Kessler Foundation National Employment and Disability Survey: Supervisor Perspectives (KFNEDS: SP), the first national survey to examine the effectiveness of the processes and practices used by employers to include people with disabilities in their workplaces, from the unique perspective of supervisors of employees with disabilities. The authors are Kimberly G. Phillips, PhD, and Andrew Houtenville, PhD, of the University of New Hampshire Institute on Disability, and John O'Neill, PhD, and Elaine E. Katz, MS, CCC-SLP, of Kessler Foundation.

The 2017 KFNEDS:SP, which was based on a Qualtrics business-to-business panel, comprised 6,530 supervisors at U.S. organizations with a minimum of 25 employees. The majority of respondents had experience with disability, either personally or through a close relationship, and many had hired and supervised workers with disabilities.

Information elicited included the existence of employment-related processes (e.g., recruiting process), whether these processes were effective, and comparison of the effectiveness of these processes for people with and without disabilities. Several questions gauged the supervisors' commitment to the inclusion of people with disabilities in their organization, and their view of the commitment of their upper management. Questions addressed whether organizations had specific employment practices in place, and if so, whether they were effective. If a practice was not in place, supervisors were asked whether they felt it would be feasible to implement it. Supervisors also responded to open-ended questions about processes and practices at their organization, and the potential challenges and successes for their implementation for employees with disabilities.

Among the survey's findings were processes and practices that were effective for people with disabilities, but underutilized by organizations, according to Dr. Phillips, research assistant professor at the University of New Hampshire. "For example, partnering with a disability organization was identified as a highly effective way to identify qualified candidates," she reported. "However, only 28.5% of organizations had implemented this as a means of recruiting employees with disabilities. Interestingly, 75% of supervisors said this would be feasible for their organization to implement." Other effective, but underutilized practices were auditing of hiring practices, supervisor training in accessible application and interview methods, job shadowing, onsite training, and job sharing.

The survey revealed that the commitment of upper management mirrored the attitudes of supervisors and was reflected in the organization's hiring goals for people with disabilities. "Our findings underscore the importance of the commitment of upper management to an inclusive workplace," said Dr. O'Neill, director of Employment and Disability Research at Kessler Foundation. "The greater the commitment, the greater the support for supervisors, and the more likely we are to see successful inclusion of employees with disabilities."

Credit: 
Kessler Foundation

Mindfulness helps obese children lose weight

Mindfulness-based therapy may help reduce stress, appetite and body weight in children with obesity and anxiety, according to a study published in Endocrine Connections. The researchers reported that obese children on a calorie-restricted diet combined with mindfulness therapy lost more weight, and were less stressed and hungry, than children on a calorie-restricted diet alone. These findings suggest that mindfulness has the potential to help obese children lose more weight through dieting and may reduce their risk of serious health issues, such as high blood pressure or stroke, although further research is needed to confirm this.

Childhood obesity increases the risk of a number of detrimental medical conditions, such as heart disease and diabetes, and can also be associated with stress and anxiety. Despite this common association, most treatment strategies don't address psychological factors and focus solely on diet and exercise. Previous studies suggest many eating disorders associated with obesity, such as binge eating, can be driven by elevated stress levels that make it more difficult to stick to dietary regimes.

Mindfulness is a psychological technique that uses meditation to increase personal awareness, and it has successfully helped reduce stress associated with other conditions, such as cancer and anorexia nervosa. Combining diet and mindfulness treatment strategies may therefore lead to better weight loss results in obese children than a restricted diet alone.

In this study, Dr Mardia López-Alarcón investigated the effect of mindfulness-based therapy on stress, appetite and body weight of children with obesity and anxiety. Children selected for the study completed a self-report questionnaire to measure levels of anxiety, and their body mass index was recorded. A group of 33 children were taught mindfulness skills in 2-hour guided sessions, once a week, for eight weeks, alongside a typical calorie-restricted diet. Another group of 12 children completed an eight-week calorie-restricted diet only. The combined therapy led to significantly greater reductions in weight, anxiety and the levels of two hormones related to stress and appetite, cortisol and ghrelin, whereas an increase in anxiety and a smaller weight reduction were observed in the group on the calorie-restricted diet alone.
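
The release does not reproduce the paper's statistics, but the basic between-group comparison such a design relies on can be sketched as follows, with made-up numbers standing in for the real measurements:

```python
# Generic two-group comparison sketch (simulated data; not the study's statistics).
# Hypothetical weight changes: combined-therapy group (n=33) vs diet-only (n=12).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
combined = rng.normal(loc=-3.0, scale=1.5, size=33)   # kg change, hypothetical
diet_only = rng.normal(loc=-1.0, scale=1.5, size=12)  # kg change, hypothetical

# Welch's t-test tolerates the unequal group sizes and variances.
t, p = stats.ttest_ind(combined, diet_only, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```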

"Our results suggest that restricted diets may in fact increase anxiety in obese children. However, practicing mindfulness, as well dieting, may counteract this and promote more efficient weight loss," Dr López-Alarcón comments.

These findings provide evidence that mindfulness may have potential for managing anxiety and weight in obese children on calorie-restricted diets, by reducing appetite and stress hormones. The increased levels of anxiety observed in the calorie-restriction-only group suggest that current weight loss strategies should consider psychological factors, as well as physical and lifestyle factors, in order to achieve better results.

Dr López-Alarcón recommends, "The potential counter effect anxiety may have on weight loss should be considered when children are undergoing dietary restriction. Our research supports the inclusion of mindfulness as a strategy to reduce anxiety and increase the chance of successful weight loss."

However, this preliminary data compared just 33 children on the combined therapy with 12 dieting alone. Dr López-Alarcón and her team now plan to assess the potential benefits of this technique in larger groups of children.

Credit: 
Society for Endocrinology

Psychology: High volumes of mental health-related tweets associated with crisis referrals

Referrals to two mental healthcare providers in London for patients requiring urgent help were significantly greater on days with a higher than average number of tweets discussing topics around mental health, according to a study published in Scientific Reports. The study used data collected between January 2010 and December 2014 at South London and Maudsley NHS Foundation Trust (SLAM) and Camden and Islington NHS Foundation Trust (C&I).

Previous studies have shown that social media use, portrayal of mental illness in the media and public discussions around mental health may be associated with negative mental health outcomes. However, research thus far has primarily focused on high-profile events reported by the news media. Associations with mentions of mental health on social media have remained understudied.

Robert Stewart and colleagues compared the number of tweets containing keywords associated with two important mental health disorders - depression and schizophrenia - with recorded referrals for 'crisis episodes' to SLAM and C&I. Between January 2010 and December 2014, 48,691 and 32,689 crisis episodes were recorded by SLAM and C&I, respectively. On days with a higher than average number of tweets mentioning depression, schizophrenia or showing support for either illness, the authors observed 5-15% increases in the number of mental health-related crisis episodes referred to SLAM or C&I.
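
The comparison reported here amounts to splitting days by tweet volume and comparing mean referral counts. Below is a minimal sketch of that calculation, assuming a hypothetical daily.csv with per-day tweet and referral counts; it is not the authors' actual data or code.

```python
# Sketch of the high-tweet-day vs other-day comparison (hypothetical data file,
# not the study's code). daily.csv columns: date, tweet_count, crisis_referrals.
import pandas as pd

daily = pd.read_csv("daily.csv", parse_dates=["date"])

high = daily["tweet_count"] > daily["tweet_count"].mean()
mean_high = daily.loc[high, "crisis_referrals"].mean()
mean_rest = daily.loc[~high, "crisis_referrals"].mean()

# A 5-15% excess on high-tweet days would show up as a ratio of 1.05-1.15.
print(f"relative increase on high-tweet days: {mean_high / mean_rest - 1:.1%}")
```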

Credit: 
Scientific Reports

Tel Aviv University researchers demonstrate optical backflow of light

Researchers at Tel Aviv University have for the first time demonstrated backflow in optical light propagating forward. The phenomenon, theorized more than 50 years ago by quantum physicists, had never been demonstrated successfully in any experiment -- until now.

"This 'backflow' phenomenon is quite delicate and requires exquisite control over the state of a particle, so its demonstration was hindered for half a century," explains Dr. Alon Bahabad of the Department of Physical Electronics at TAU's School of Electrical Engineering, who led the research for the study. "This phenomenon reveals an unintuitive behavior of a system comprised of waves, whether it's a particle in quantum mechanics or a beam of light.

"Our demonstration could help scientists probe the atmosphere by emitting a laser beam and inducing a signal propagating backward toward the laser source from a given point in front of the laser source. It's also relevant for cases in which fine control of light fields is required in small volumes, such as optical microscopy, sensing and optical tweezers for moving small particles," Dr. Bahabad says.

The study, published on January 16 in Optica, was conducted by Dr. Bahabad's graduate students Dr. Yaniv Eliezer, now at Yale University, and Thomas Zacharias.

Light is similar to quantum particles in that both can be constructed from interfering waves. Such a construction, in which several waves are added together to produce a new wave, is known as a superposition. If a special superposition of waves, all propagating forward, is constructed, the overall wave can realize what's called "optical backflow."
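
A toy numerical example can make this concrete (our illustration, not the team's experiment): superpose just two plane waves with positive wavenumbers and look for places where the local wavenumber, the spatial derivative of the phase, turns negative.

```python
# Toy illustration of optical backflow (not the experiment itself): two plane
# waves, both with positive wavenumbers, whose superposition nonetheless has a
# locally negative phase gradient in some regions.
import numpy as np

x = np.linspace(0, 4 * np.pi, 4000)
k1, k2, a = 1.0, 4.0, 0.55                          # both wavenumbers positive
psi = np.exp(1j * k1 * x) + a * np.exp(1j * k2 * x)

# Local wavenumber = derivative of the unwrapped phase of the field.
k_local = np.gradient(np.unwrap(np.angle(psi)), x)

print("minimum local wavenumber:", k_local.min())   # negative => backflow
```

With these amplitudes the minimum local wavenumber is negative, even though every component wave travels in the positive direction.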

In their holography experiment, the scientists split and reassembled a laser beam in the form of light waves that propagated at positive angles with respect to an axis. The different light beams had to be constructed very carefully, with precise values for their strength and delay. Once the superposition was created, a small slit was set and moved perpendicularly to the beam to, in effect, measure the direction of the beam in different locations.

The light escaping from the slit was revealed in most locations as moving at a positive angle. But in some locations, the light escaping the slit propagated at a negative angle, even though the light hitting the other side of the slit was comprised of a superposition of beams all propagating at a positive angle.

"We used holography to create a clear manifestation of the backflow effect," adds Dr. Bahabad. "We realized at some point that we can utilize a previous study of ours, where we discovered the mathematical phenomenon known as suboscillation, to help us design a beam of light with backflow.

"To conclude, if interfering waves, all going in one direction, are constructed in a special manner, and you were to measure the direction of propagation of the overall wave at specific locations and times, you just might find the wave going backward. This wave can describe a particle using quantum mechanics. This surprising behavior violates any intuition that we gained from our daily experience with the movement of macroscopic objects. Nevertheless, it still obeys the laws of nature."

Credit: 
American Friends of Tel Aviv University

Design approach may help fix bias in artificial intelligence

Bias in artificial intelligence (AI) and machine learning programs is well established. Researchers from North Carolina State University and Pennsylvania State University are now proposing that software developers incorporate the concept of "feminist design thinking" into their development process as a way of improving equity - particularly in the development of software used in the hiring process.

"There seem to be countless stories of ways that bias in AI is manifesting itself, and there are many thought pieces out there on what contributes to this bias," says Fay Payton, a professor of information systems/technology and University Faculty Scholar at NC State. "Our goal here was to put forward guidelines that can be used to develop workable solutions to algorithm bias against women, African American and Latinx professions in the IT workforce.

"Too many existing hiring algorithms incorporate de facto identity markers that exclude qualified candidates because of their gender, race, ethnicity, age and so on," says Payton, who is co-lead author of a paper on the work. "We are simply looking for equity - that job candidates be able to participate in the hiring process on an equal footing."

Payton and her collaborators argue that an approach called feminist design thinking could serve as a valuable framework for developing software that reduces algorithmic bias in a meaningful way. In this context, the application of feminist design thinking would mean incorporating the idea of equity into the design of the algorithm itself.

"Compounding the effects of algorithmic bias is the historical underrepresentation of women, Black and Latinx software engineers to provide novel insights regarding equitable design approaches based on their lived experiences," says Lynette Yarger, co-lead author of the paper and an associate professor of information sciences and technology at Penn State.

"Essentially, this approach would mean developing algorithms that value inclusion and equity across gender, race and ethnicity," Payton says. "The practical application of this is the development and implementation of a process for creating algorithms in which designers are considering an audience that includes women, that includes Black people, that includes Latinx people. Essentially, developers of all backgrounds would be called on to actively consider - and value - people who are different from themselves.

"To be clear, this is not just about doing something because it is morally correct. But we know that women, African Americans and Latinx people are under-represented in IT fields. And there is ample evidence that a diverse, inclusive workforce improves a company's bottom line," Payton says. "If you can do the right thing and improve your profit margin, why wouldn't you?"

Credit: 
North Carolina State University

Scientists reveal whole new world of chemistry by stepping indoors

image: HOMEChem lead researcher Delphine Farmer, right, and graduate student Erin Boedicker look at a droplet-measurement instrument.

Image: 
Callie Richmond

Colorado State University atmospheric chemist Delphine Farmer had spent her entire career probing the complexities of outdoor air - how gases and particles in the atmosphere move, interact and change, and how human activities perturb the air we breathe.

Then, she went inside.

That is, the Department of Chemistry associate professor turned her attention to the less-studied realm of indoor air. And she's come to discover that the chemistry inside can be vastly more complex than that of outdoor air systems.

More than two years ago, Farmer and over 60 collaborators from 13 universities set in motion a first-of-its-kind experiment attempting to map the airborne chemistry of a typical home, subjected to typical home activities like cooking and cleaning. The effort was dubbed HOMEChem - House Observations of Microbial and Environmental Chemistry - and was led by Farmer and Marina Vance, a mechanical engineer at University of Colorado Boulder.

Now, as the team sifts through the reams of data they collected, Farmer and her CSU research team have published their first major study from HOMEChem. The paper, appearing in Environmental Science and Technology, reports what they learned about chemical reactions that occurred while mopping floors with a common bleach solution.

On HOMEChem, her first foray into indoor chemistry, Farmer says she "became a convert when I heard the statistic that we spend 90 percent of our lives indoors."

"It's puzzling, really, that all our health outcomes are tied to outdoor air," Farmer said. "It made me curious as a scientist when I realized just how little we know about chemistry indoors."

Her team of graduate students and postdocs is now busy crunching more data and compiling potential follow-up studies.

In the Test House

Backed by $1.1 million from the Sloan Foundation's Chemistry of Indoor Environments program, the HOMEChem team descended on the perfect location for their experiments: the Test House at University of Texas at Austin, a full-size, manufactured "home" that serves as a kind of blank slate for scientific experiments. The team occupied the house for most of June 2018, simulating activities in an average Western home. Their efforts are detailed in an overview paper in Environmental Science: Processes & Impacts.

Their experimental run-of-show, which read very much like a family chore list, included things like cooking vegetable stir-fry, scrubbing surfaces with household products, and wet-mopping floors. One session was even dedicated to cooking a typical Thanksgiving meal while recording resulting emissions. All this, while operating hundreds of thousands of dollars' worth of sensitive equipment that could detect everything in the air from single-nanometer particles, to hundreds of different volatile organic compounds.

Farmer's team from CSU included graduate students Jimmy Mattila, Matson Pothier and Erin Boedicker, and postdoctoral researchers Yong Zhou and Andy Abeleira. The team deployed 12 separate instruments for tracking three broad categories of compounds: organics, oxidants and particles. Postdoctoral researcher and data scientist Anna Hodshire recently joined Farmer's team and will be responsible for managing the large datasets the researchers gathered over the course of HOMEChem.

Bleach cleaning results

For the bleach-cleaning study, Farmer's team recorded the airborne and aqueous chemistry from several consecutive days of mopping a floor with bleach, diluted to manufacturer's specifications. On some days, they also observed how that chemistry was affected when floors were mopped following a cooking session.

According to the paper, the researchers observed sharp, albeit short-lived, spikes in hypochlorous acid, chlorine and nitryl chloride in the air, which are compounds more typically associated, at lower levels, with the outdoor air of coastal cities.

Mattila, the paper's first author and graduate student who operated a chemical ionization mass spectrometer during HOMEChem, said the team was surprised to learn that multi-phase chemistry - not just the gas phase - controls the production and removal of inorganic compounds in the air during bleach cleaning. The bleach in the mop water, applied to the floor, would react with the molecules in the house's surfaces and walls to create new compounds. It turns out such surfaces - and the layer of muck many homes accumulate from years of living - can act as reservoirs for a wide variety of acidic and basic molecules that can then interact with substances like bleach.

"You would intuitively think that since we're making these fumes in the air, and there's other stuff in the air, they're probably just reacting," Mattila said. "It turns out that indoor multiphase chemistry, in the bleach solution and on various indoor surfaces, is what's actually driving the observations."

The group collaborated with scientists at UC Irvine to develop a model for understanding how the aqueous and surface molecules lead to secondary chemistry.

When they mopped after cooking, they also observed interactions of nitrogen and ammonia emissions from the food with the cleaning products. They saw low levels of chloramines, considered harmful to human health, which are made when chlorine mixes with ammonia. Humans also breathe out trace amounts of ammonia.

"If you look on any bottle of bleach, you'll see a serious warning not to mix chlorine and ammonia, because it will make a dangerous set of compounds called chloramines," Farmer said. "What we found is there was enough ambient ammonia to still make some of these compounds, even without mixing them. Not to the point where it was dangerous, but it was interesting to see that chemistry happening."

An obvious takeaway from the researchers: When cleaning with bleach, open a window or use a fan to increase ventilation. And always appropriately dilute the solution; cleaning with straight bleach could create dangerous breathable compounds, depending on what else is in the air or on the walls.

A baseline for future studies

The entire HOMEChem experiment was unprecedented in its scope. The study is an attempt at establishing a baseline understanding of what a person at home, doing typical home activities, can expect to be breathing. Among the key takeaways from the experiments as a whole was that combining different indoor activities leads to very different chemistry in the house.

"For example, we see that cleaning with bleach after you clean indoors with a terpene solution, like Pine Sol, can actually lead to some chemistry you wouldn't normally see with bleach alone," Mattila said. "That was kind of unexpected, and could be potentially harmful, because it could lead to the production of secondary organic aerosols."

HOMEChem was a measurement experiment and did not involve epidemiologists. The researchers believe their data will serve as a useful starting point for inquiries into human health outcomes tied to indoor air environments.

Credit: 
Colorado State University

Both Sn and Zn single-atoms on CuO catalyst synergistically promote dimethyldichlorosilane synthesis

image: (A) AC HAADF-STEM image of Sn1/CuO, (B) AC HAADF-STEM and (C) HAADF-STEM images, as well as the corresponding EDS mappings, of 0.1Zn1-Sn1/CuO. The bright dots marked with red circles in images A and B indicate single atoms.

Image: 
©Science China Press

Because of their maximum atom-utilization efficiency and unique catalytic properties, single-atom catalysts (SACs) have sparked intense interest in recent years. However, most reported SACs are limited to single-site active components, with rare reports on catalyst promoters in single-atom form. Because promoters are essential components of many industrial catalysts, exploring the preparation of single-site promoters should be of great interest in catalysis, in both fundamental and applied research. Like SACs, single-site promoters offer structural simplicity and homogeneity, and their synergistic effects on catalytic reactions should be unique but have yet to be clarified.

In a recent article published in the Beijing-based National Science Review, scientists at the General Research Institute for Nonferrous Metals (GRINM) in Beijing, China, GRIPM Advanced Materials Co., Ltd. in Beijing, China, and the Institute of Process Engineering, Chinese Academy of Sciences in Beijing, China, designed and synthesized atomically dispersed co-promoters of Sn and Zn on the CuO surface. As demonstrated, this catalyst exhibited a greatly enhanced promoting effect in the industrially important Rochow reaction for dimethyldichlorosilane synthesis. For the first time, the synergistic promotion mechanism has also been revealed.

The authors employed a facile hydrothermal method to synthesize Sn1/CuO with a large number of surface Cu vacancies. They then investigated the structure of this new catalyst using various characterization methods and confirmed the successful loading of the two single-site promoters. The XPS data gave direct evidence of a strong interaction between the Sn and Zn atoms. "After incorporation with Zn atoms, the binding energy of the Cu 2p3/2 peak shifts to the lower-energy side in comparison with that of CuO, and this shift is obvious in 0.1Zn1-Sn1/CuO, indicating an increase of the electron density on the Cu atoms with the coexistence of Sn and Zn atoms," they state. Direct experimental results showed that the defect sites generated by incorporating single-site Sn could further stabilize single-site Zn (see the figure below). "Density functional theory (DFT) calculations also show that on the Sn-doped CuO(110) surface, the formation energy of a Cu vacancy is 0.78 eV lower than that on clean CuO(110), which indicates it is easier to form Cu vacancies on the Sn-doped surface," they add. The calculations also support that Zn prefers to fill the nearby Cu vacancies caused by Sn doping, forming Sn-Zn pairs.
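
As a definitional aside (the release does not spell out the authors' exact expression), the vacancy formation energy being compared is conventionally defined as:

```latex
E_f[\mathrm{V_{Cu}}] = E_{\mathrm{slab\ with\ vacancy}} + \mu_{\mathrm{Cu}} - E_{\mathrm{pristine\ slab}}
```

where μ_Cu is the chemical potential of the removed Cu atom; the reported 0.78 eV difference means this quantity is lower on the Sn-doped CuO(110) surface than on the clean one.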

Compared with conventional catalysts whose promoters take the form of nanoparticles, this novel Zn1-Sn1/CuO catalyst has much higher activity, selectivity, and stability in the synthesis of dimethyldichlorosilane via the industrially important Rochow reaction. The enhanced catalytic performance is attributed to the synergistic interaction between the single-site Sn and Zn co-promoters, which changes the electronic structure of CuO and thus promotes the adsorption of reactant molecules.

"These single-sited promoters not only help to elucidate their real promotion mechanism in catalytic reaction, but also open up a new path to optimize catalyst performance," they state in an article titled "Single-atom Sn-Zn pairs in CuO catalyst promote dimethyldichlorosilane synthesis."

This work was supported by Dr. Wenxin Chen of Beijing Institute of Technology, China; Prof. Jianmin Ma of Hunan University, China; Prof. Ziyi Zhong of Guangdong Technion Israel Institute of Technology (GTIIT), China; and Prof. Yadong Li of Tsinghua University, China.

"This work provides a new understanding of the synergistic effect among various promoters and will offer avenues to the design of new co-promoters in catalysts for industrial reactions," they believe.

Credit: 
Science China Press

iPS cells to regulate immune rejection upon transplantation

image: iPSC-derived thymic epithelial cells imaged by phase-contrast microscopy.

Image: 
Hokkaido University

Scientists suggest a new strategy that uses induced pluripotent stem cells (iPSCs) to regulate immune reaction to transplanted tissues.

The team, led by Professor Ken-ichiro Seino of Hokkaido University's Institute for Genetic Medicine, found that thymic epithelial cells derived from mouse induced pluripotent stem cells (iPSCs) can regulate the immune response to skin grafts, extending their survival.

The thymus, located behind the sternum, is a crucial organ for generating T-cells. T-cells control immune response, including organ rejection, and are closely associated with immunological self-tolerance, the ability of the immune system to recognize self-produced antigens as non-threatening.

Pluripotent stem cells such as embryonic stem cells (ESCs) and iPSCs, which are capable of differentiating into various types of cells, are expected to be an alternative source of grafts for transplantation. But when an organ or tissue from a donor is transplanted, the grafts are rejected and eventually destroyed by the recipient's immune system. The same holds true for cells or tissues derived from pluripotent stem cells. In regenerative medicine, it is therefore important to regulate immune reaction for transplants to succeed.

While past research found it difficult to efficiently make thymic epithelial cells from iPSCs, the group discovered that introducing Foxn1, a pivotal gene in the thymus, into mouse iPSCs enables the efficient differentiation of such cells.

The team then transplanted into recipient mice the iPSC-derived thymic epithelial cells, along with skin grafts from genetically compatible donor mice.

The results showed that the skin grafts in the recipient mice survived longer when the iPSC-derived thymic epithelial cells were transplanted in advance.

"This suggests it is possible to regulate immune rejection by transplanting immune-regulating cells derived from iPSCs before conducting cell or tissue transplants. This finding will contribute to the advancement of regenerative medicine using cells and tissues derived from iPSCs," says Ken-ichiro Seino of the research team.

Credit: 
Hokkaido University

How iron carbenes store energy from sunlight -- and why they aren't better at it

image: Experiments at SLAC showed that an inexpensive photosensitizer molecule, iron carbene, can respond in two competing ways when hit by light. Only one of those pathways (right) allows electrons to flow into devices or chemical reactions where they're needed. The molecules took this energy-producing path about 60% of the time.

Image: 
Greg Stewart/SLAC National Accelerator Laboratory

Photosensitizers are molecules that absorb sunlight and pass that energy along to generate electricity or drive chemical reactions. They're generally based on rare, expensive metals, so the discovery that iron carbenes, with plain old iron at their cores, can do this too triggered a wave of research over the past few years. But while ever more efficient iron carbenes are being discovered, scientists need to understand exactly how these molecules work at an atomic level in order to engineer them for top performance.

Now researchers have used an X-ray laser at the Department of Energy's SLAC National Accelerator Laboratory to watch what happens when light hits an iron carbene. They discovered that it can respond in two competing ways, only one of which allows electrons to flow into the devices or reactions where they're needed. In this case, the molecule took the energy-producing path about 60% of the time. The team published their results January 31 in Nature Communications.

In a solar cell, an iron carbene attaches to the semiconductor film on the surface of the cell with its iron atom sticking up. Sunlight hits the iron atom and liberates electrons, which flow into the carbene attachments. If they remain on those attachments long enough - 10 trillionths of a second or more - they can then move into the solar cell and boost its efficiency. In chemistry, the energy boost that photosensitizers provide helps drive chemical reactions, but requires even longer residence times for the electrons on the carbene attachments.

To pin down how this works, an international team led by researchers from the Stanford PULSE Institute at SLAC examined samples of iron carbene with X-ray laser pulses from the lab's Linac Coherent Light Source (LCLS). They simultaneously measured two separate signals that reveal how the molecule's atomic nuclei move and how its electrons travel in and out of the iron-carbene bonds.

The results showed that electrons were stored in the carbene attachments long enough to do useful work about 60% of the time; the rest of the time they returned to the iron atom too soon, accomplishing nothing.
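
One simple way to think about that 60/40 split (our illustration, not the paper's analysis) is as two competing first-order decay channels for the photoexcited state, where the useful fraction is the ratio of the productive rate to the total rate:

```python
# Toy branching-ratio model for two competing decay channels (illustration
# only, not the paper's kinetics). The productive fraction is k_p / (k_p + k_u).
def branching_fraction(k_productive: float, k_unproductive: float) -> float:
    return k_productive / (k_productive + k_unproductive)

# Hypothetical relative rates chosen to reproduce the observed ~60% yield.
print(branching_fraction(k_productive=3.0, k_unproductive=2.0))  # 0.6
```

Raising the productive fraction toward 100% means tilting these relative rates, which is exactly the design problem described below.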

PULSE's Kelly Gaffney said the long-term goal of this research is to get close to 100 percent of the electrons to stay on carbenes much longer, so the energy from light can be used to drive chemical reactions. To do that, scientists need to find design principles for tailoring iron carbene molecules to carry out particular jobs with maximum efficiency.

Credit: 
DOE/SLAC National Accelerator Laboratory

MOF@hollow mesoporous carbon spheres as bifunctional electrocatalysts

image: (a) Schematic illustration of synthetic procedure for ZIF@HMCS. (b) TEM image of ZIF@HMCS-25%. (c) HAADF-STEM images and EDS mappings of ZIF@HMCS-25%.

Image: 
©Science China Press

With the rapid development of industrial technology, the energy crisis caused by the shortage of fossil fuels has become a growing concern. Renewable, green energy systems such as fuel cells and metal-air batteries are regarded as reliable alternatives to fossil fuels. The oxygen reduction reaction (ORR) and oxygen evolution reaction (OER) are the key half-reactions in these applications. Noble metal catalysts are widely used for both ORR and OER. However, their scarcity, high cost, and poor durability strongly impede large-scale application. Therefore, the rational design of inexpensive bifunctional oxygen electrocatalysts is highly desired.

Metal-organic frameworks (MOFs), a new class of materials with special chemical and physical properties, have attracted tremendous attention in recent years for their versatile potential applications. Recently, applying MOFs in electrochemical reactions has become an emerging research field, because the high surface area of MOFs can maximize active-site density, and their special chemical structures provide a tailored microenvironment for controllable reactions within the pores. However, MOFs have rarely been used directly in electrocatalysis because of their low ion transport and poor electrical conductivity.

Encapsulating nanoparticles in hollow mesoporous carbon spheres (HMCS) is a classical design that helps stabilize catalytic active sites, increase electrical conductivity and shorten mass-transport lengths. Yolk-shell structures such as metallic nanoparticles@carbon and metal oxide@carbon have been widely used in lithium batteries, catalysis and other fields. However, a MOF@HMCS yolk-shell structured hybrid material has not yet been reported. It is therefore believed that an elaborate combination of MOFs with HMCS to construct a yolk-shell structured hybrid material will effectively overcome the aforementioned shortcomings of MOF materials in electrocatalysis.

In response to this challenge, the research team led by Prof. Cao Rong at the Fujian Institute of Research on the Structure of Matter of the Chinese Academy of Sciences recently designed a yolk-shell structured ZIF-67@HMCS hybrid material, innovatively using ZIF-67 as the core and hollow mesoporous carbon spheres (HMCS) as the shell. The particle size of ZIF-67 is well controlled by the spatial confinement effect of the HMCS, which shortens diffusion paths and enhances ion transport. Encapsulating ZIF-67 in HMCS also markedly increases its conductivity. Moreover, the hierarchical pore structure of the HMCS allows reactive species to diffuse quickly and efficiently to the exposed active sites of ZIF-67, thus improving the electrochemical activity. The ZIF-67@HMCS hybrid material exhibits superior bifunctional electrocatalytic activity towards both ORR and OER. What's more, a Zn-air battery assembled with ZIF-67@HMCS as the air cathode also presents impressive performance and long-term stability.

This bifunctional yolk-shell structured hybrid material could be a promising electrocatalyst for fuel cells and electrolysers in renewable energy applications. The work also paves a new way toward designing stable MOFs for direct use as high-efficiency electrochemical catalysts in energy storage devices, helping to meet the growing demand for a stable energy supply.

Credit: 
Science China Press

Sediment loading key to predicting post-wildfire debris flows

image: A stream channel showing accumulation of fine sediment following the 2009 Station Fire in the San Gabriel Mountains, Southern California. Fine sediment is eroded off steep hillsides during and immediately after wildfire, and then mobilized by debris flows during winter storms.

Image: 
Roman DiBiase, Penn State

The mudslides that follow wildfires in Southern California can be deadly and difficult to predict. New research can help officials identify areas prone to these mudslides and respond before disaster occurs, according to scientists.

Mudslides, or debris flows, can occur when rainfall washes away the buildup of sediment in mountain channels. Roughly equal parts water and sediment, debris flows are strong enough to carry large boulders downhill and threaten communities on or near the mountains. The debris flows in January 2018 that hit Montecito, California, killed 23 people and caused hundreds of millions of dollars in damage. Authorities attributed the mudslides to the wildfires that swept through the area the previous month.

Vegetation holds back the sediment in these steep landscapes, but as the vegetation burns during a wildfire, gravity transports the sediment from the hillsides down to the channel in a process called dry sediment loading, said Roman DiBiase, assistant professor of geosciences at Penn State.

In steep landscapes like those in Southern California, dry sediment loading plays a more significant role in the severity of post-wildfire mudslides than rainfall, shallow landslides and burn severity, according to the scientists. They reported their findings in the February issue of Geology.

"The channels typically are covered with cobbles and boulders that don't move very easily or very often with storms," DiBiase said. "The transported sediment that was trapped behind the vegetation is much finer, so you end up filling the channel with fine-grained materials that make it easier to start a debris flow in the channel itself."

DiBiase and Michael Lamb, professor of geology at the California Institute of Technology, looked at three lidar datasets of the western San Gabriel Mountains. Lidar involves flying a laser scanner over a landscape and sensing the returning light. It allows scientists to reconstruct the topography at a high resolution.

The datasets were collected before the 2009 Station Fire, immediately following the fire and before the first rainfall, and then six years later, after rainfall had led to erosion and debris flows.

The researchers found widespread evidence that dry sediment loading added anywhere from 3 to 10 feet of debris to mountain channels; this material was subsequently washed out by rainfall. Channel clearing was absent in areas where dry sediment loading did not occur.
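
The underlying measurement is a differencing of successive lidar-derived elevation models. Here is a schematic sketch of that workflow, assuming two hypothetical co-registered elevation grids; this is an assumed illustration, not the authors' code.

```python
# Schematic DEM-differencing sketch (assumed workflow, not the authors' code).
# Two hypothetical co-registered lidar elevation grids in metres per cell.
import numpy as np

pre_fire = np.load("dem_pre_fire.npy")    # elevation before the fire
post_fire = np.load("dem_post_fire.npy")  # elevation after the fire, pre-rain

dz = post_fire - pre_fire                 # elevation change per cell

# Cells that gained roughly 1-3 m (about 3-10 ft) of fill flag channels
# loaded with dry sediment after the fire.
loaded = (dz >= 1.0) & (dz <= 3.0)
print("cells flagged as sediment-loaded:", int(loaded.sum()))
```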

Authorities can use lidar as a rapid response tool to map patterns of sediment loading after a fire and decide which mountain channels to clear before a storm hits, DiBiase said.

The study suggests that understanding how the landscape will respond to a changing climate depends more on soil regeneration rates than future storms.

"Most models assume that you have fully soil-covered conditions on the hillside, so you have an infinite reservoir of material to draw from," DiBiase said. "Our study shows that, at least in the steep landscapes, there's a finite amount of material on the hillside."

The current fire frequency -- or time between fires -- in Southern California landscapes gives the bedrock enough time to erode into soil, and vegetation enough time to grow back and hold the soil in place, DiBiase continued.

"If the fire frequency in Southern California doubles, when the next fire hits, the piles of sediment that have accumulated behind the plants are only half full," he said. "So, the volume of debris flows that comes out won't be too high. But we still have some work to do to understand what controls how fast the sediment is produced from bedrock and how quickly these piles fill up."

Credit: 
Penn State