Tech

Kids rice snacks in Australia contain arsenic above EU guidelines: Study

image: A sample of children's rice snacks found in Australian supermarkets. New research shows rice-based products for children in Australia have concentrations of arsenic that exceed the EU guideline for safe rice consumption for babies and toddlers.

Image: 
RMIT University

Rice snacks for kids found in Australian supermarkets contain arsenic at levels above European safety guidelines, a new study shows.

The research found 75% of rice-based products tested had concentrations of arsenic that exceeded the EU guideline for safe rice consumption for babies and toddlers.

The study, published in the International Journal of Environmental Research and Public Health, found Australian children who eat large amounts of rice-based food may be exposed to dangerous amounts of arsenic.

Senior researcher Associate Professor Suzie Reichman, an environmental toxicologist at RMIT University, said the research used European guidelines because Australia does not have safety standards specifically for children.

"While all the products we tested meet Australian guidelines, these do not reflect the latest scientific understanding on how arsenic affects the body," Reichman said.

"Children are far more vulnerable to the long-term toxic effect of metals like arsenic, but our rice guidelines are based on adults.

"The guidelines are also based on out-of-date dietary habits, when rice was generally eaten less often by Australian families.

"This study shows the need to develop new standards specifically for children and ensure our guidelines are in line with what we now know about safe rice consumption."

Reichman said rice-based products were a popular alternative for the growing number of children with gluten intolerances.

"Rice can be safely eaten as part of a well-rounded, balanced diet, but if it's a child's main source of carbohydrates, that could be a problem," she said.

"As a general rule, we recommend that children under five eat rice in moderation and parents should avoid serving rice at every meal, to minimise the risk of exposure to arsenic."

Minimising arsenic exposure

Arsenic is a naturally occurring metalloid found widely in air, soil and groundwater, and it occurs in both organic and inorganic forms.

Organic arsenic is relatively safe, but inorganic arsenic is a carcinogen linked with cancers of the bladder and skin. Long-term exposure to high amounts of inorganic arsenic is dangerous to human health.

Because rice plants are known to accumulate arsenic more than similar crops, rice safety guidelines aim to minimise potential exposure.

The Australian rice guidelines cover total arsenic (organic and inorganic) and set a maximum level of 1mg/kg. This is more than 3 times the World Health Organisation's standard for total arsenic of 0.3mg/kg.

Rather than looking at total arsenic, the European Union guideline for infants and young children focuses specifically on inorganic arsenic and sets a maximum level of 0.1mg/kg.
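
To make the comparison concrete, here is a minimal sketch (in Python, with a purely hypothetical measured concentration rather than a value from the study) of how a single rice-product reading stacks up against the three thresholds quoted above:

    # Compare a hypothetical rice-product arsenic reading against the guidelines
    # cited in this article. The sample value is illustrative only.
    AUSTRALIA_TOTAL_AS = 1.0   # mg/kg, total arsenic (organic + inorganic)
    WHO_TOTAL_AS = 0.3         # mg/kg, total arsenic
    EU_INORGANIC_AS = 0.1      # mg/kg, inorganic arsenic, infants and young children

    sample_inorganic_as = 0.15  # mg/kg, hypothetical measurement

    print(f"Australian total-arsenic limit is {AUSTRALIA_TOTAL_AS / WHO_TOTAL_AS:.1f}x the WHO limit")
    print("Exceeds EU children's guideline:", sample_inorganic_as > EU_INORGANIC_AS)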

Product testing

The study tested 39 rice products for babies and toddlers found in Australian supermarkets, including milk formula powder, cereal, crackers and pasta made from brown, white, organic and non-organic rice.

The research found 75% of the products had levels of inorganic arsenic above the EU standard for children.

The study also found more inorganic arsenic in brown rice crackers than in white rice crackers, likely because arsenic is concentrated in the rice bran, which is removed when rice is milled to make white rice.

Reichman said the results for brown rice were particularly concerning because it is generally seen by health-conscious parents as a better choice, due to its higher fibre and nutrient content.

The research was part of a final-year capstone project by Bachelor of Environmental Engineering student, Zhuyun Gu, who is now undertaking a PhD at RMIT.

"The research completed by Zhuyun was of such high standard that it was accepted for publication in a peer-reviewed journal and highlighted in its special edition focusing on arsenic exposure in the environment and human health," Reichman said.

"This work is an important contribution to our understanding of safety issues around rice in our diets, and supports the need for updating arsenic guidelines in Australia.

"It's a fantastic example of how our students can shape the world by looking at practical problems and searching for real solutions."

Credit: 
RMIT University

Entrepreneurs have different storytelling styles for presenting business

Not all companies succeed, and statistics show that every second Finnish company ceases operations within five years of its founding. A new study entitled Post-Failure Impression Management: A Typology of Entrepreneurs' Public Narratives after Business Closure, by researchers at the Aalto University School of Business' department of management studies, shows that entrepreneurs tend to apply five different storytelling styles when communicating about closing down their business.

'Although many companies fail, there is very little research on what strategies entrepreneurs use when they communicate about business closure. For their future career, entrepreneurs should be able to communicate that they remain competent, professional and credible partners despite the failure. That is why we wanted to explore what expressions and impression management strategies entrepreneurs use when communicating the decision to close down their business,' says Assistant Professor Ewald Kibler.

The researchers analysed 118 public business-closure statements from IT and software companies in different parts of the world.

The texts were similar in that the entrepreneurs emphasised their efforts, the positive achievements of their companies, and the lessons learned. Contrary to what has previously been found in experimental research, entrepreneurs did not blame other people or external factors for the failure of their business in these public texts.

'By emphasising positive things, entrepreneurs strive to maintain and strengthen their professional image and their relationships with stakeholders, such as company employees, clients and financiers. This is important for the future, because winding up a company often means the end of an entrepreneur's current job and livelihood,' says postdoctoral researcher Virva Salmivaara.

Facts or emotions? Dwelling on the past or moving on for good?

Although the texts had much in common, the researchers also found some clear differences. The distinguishing factors were the semantic patterns and the impression management strategies, i.e. the conscious or unconscious means by which entrepreneurs seek to make positive impressions. Different storytelling styles draw on one or more impression management strategies, while differences in semantic patterns were reflected in whether the texts focused on people or on events, on the past or on the future, and whether or not they expressed emotions.

'We found five different storytelling styles: Triumph, Harmony, Embrace, Offset, and Show,' says postdoctoral researcher Steffen Farny.

The storytelling styles Triumph and Harmony both provided examples of competence and positive business outcomes. However, Triumph emphasised the entrepreneur's personal experience, while Harmony also highlighted stakeholders and their importance to the company. The other difference was that Triumph did not express feelings, whereas Harmony favoured expressing positive feelings.

In texts using the Offset style, the entrepreneur focused on future events and promised, for example, to do better next time, while sometimes also apologising to clients for closing down the company. In Embrace, the entrepreneur emphasised their own strengths and achievements but also praised the company's partners and employees, whereas in Show the entrepreneur mainly highlighted facts and details. Show was also characterised by recounting past events and expressing negative feelings.

The study by Kibler's research group is the first to be based on public business-closure statements written by entrepreneurs, and it provides valuable new insight into how entrepreneurs operate in this challenging situation, both emotionally and professionally. At the same time, new questions have arisen.

'We do not know how the careers of the entrepreneurs included in our data evolved after the closure of the companies they set up, so we cannot recommend any of the five storytelling styles we found. In the future, it would be interesting to study, among other things, whether the positivity and responsibility that entrepreneurs favoured in their statements helped them in their careers moving forward,' Kibler adds.

The study by the School of Business' entrepreneurship researchers has just been published in the journal Human Relations, which publishes leading research on leadership and strategy. The researchers analysed the expressions and impression management strategies entrepreneurs used when announcing the closure of the business they had set up. In addition, they complemented the data by interviewing some of the entrepreneurs and by analysing reactions to the closure announcements on social media.

Credit: 
Aalto University

Let the europium shine brighter

image: The europium Eu(III) complex with nanocarbon antenna emitting fine red light.

Image: 
WPI-ICReDD, Hokkaido University

A stacked nanocarbon antenna makes a rare earth element shine 5 times more brightly than previous designs, with applications in molecular light-emitting devices.

A unique molecular design developed by Hokkaido University researchers causes a europium complex to shine more than five times brighter than the best previous design when it absorbs low energy blue light. The findings were published in the journal Communications Chemistry, and could lead to more efficient photosensitizers with a wide variety of applications.

Photosensitizers are molecules that become excited when they absorb light and then transfer this excited energy to another molecule. They are used in photochemical reactions, energy conversion systems, and in photodynamic therapy, which uses light to kill some kinds of early-stage cancer.

The design of currently available photosensitizers often leads to unavoidable energy loss, so they are not as efficient at absorbing light and transferring energy as scientists would like. They also require high-energy light, such as UV, for excitation.

Yuichi Kitagawa and Yasuchika Hasegawa of Hokkaido University's Institute for Chemical Reaction Design and Discovery (WPI-ICReDD) worked with colleagues in Japan to improve the design of conventional photosensitizers.

Their concept is based on extending the lifetime of a molecular energy state called the triplet excited state and reducing gaps between energy levels within the photosensitizer molecule. This would lead to more efficient use of photons and reduced energy loss.

The researchers designed a nanocarbon "antenna" made of coronene, a polycyclic aromatic hydrocarbon containing six benzene rings. Two nanocarbon antennas are stacked one on top of the other and then connected on either side to the rare earth metal europium. Extra connectors are added to strengthen the bonds between the nanocarbon antennas and europium. When the nanocarbon antennas absorb light, they transfer this energy to europium, causing the complex to emit red light.

Experiments showed the complex best absorbed light at a wavelength of 450nm. When a blue LED (light-emitting diode) was shone on the complex, it glowed more than five times brighter than the europium complex that previously had the strongest reported emission under blue light. The researchers also demonstrated that the complex can withstand temperatures above 300°C thanks to its rigid structure.

"This study provides insights into the design of photosensitizers and can lead to photofunctional materials that efficiently utilize low energy light," says Yuichi Kitagawa of the research team. The new design could be applied to fabricate molecular light-emitting devices, among other applications, the researchers say.

Credit: 
Hokkaido University

New technique to study molecules and materials on quantum simulator discovered

A new technique to study the properties of molecules and materials on a quantum simulator has been discovered.

The ground-breaking new technique, by physicist Oleksandr Kyriienko from the University of Exeter, could pioneer a new pathway towards the next generation of quantum computing.

Current quantum computing methods for studying the properties of molecules and materials on such a minute scale rely upon an ideal fault-tolerant quantum computer or variational techniques.

This newly proposed approach instead relies on implementing quantum evolution that is readily available in many systems. The approach is well suited to modern state-of-the-art quantum setups, notably including cold-atom lattices, and can serve as software for future applications in materials science.

The study could pave the way to studying the properties of strongly correlated systems, including the much-studied Fermi-Hubbard model, which could potentially explain high-temperature superconductivity.

The research is published in the new Nature journal npj Quantum Information.

Dr Kyriienko, of the Physics department at the University of Exeter and the study's lead author, said: "So far I have seen that the ability to run quantum dynamics can be used for finding the ground state properties.

"The question, however, remains - can we use it for studying excited states? Can we devise other powerful algorithm based on the principles? The experience tells this is possible, and will be a subject of future efforts."

The idea of quantum simulation was proposed by Nobel Prize winner Richard Feynman in 1982, who suggested that quantum models can be simulated most naturally using a well-controlled and inherently quantum system.

Developing this idea, a separate branch of quantum information science has emerged, based on the notion of a quantum computer - a universal quantum device in which digital sequences of operations (quantum gates) can solve certain problems with superior scaling of the required operations compared with conventional classical computers.

However, Feynman's original approach, later named analog quantum simulation, has so far mostly been used for observing the dynamical properties of quantum systems, rather than for finding the ground states associated with various computational tasks.

In the new study, Oleksandr Kyriienko has shown that sequential evolution of the system, combined with wavefunction overlap measurements, makes the effective study of ground-state properties possible with analog quantum simulators.

The main technique that allows the ground state to be reached is an effective representation of a non-unitary operator that "distils" the ground state by summing unitary evolution operators run for different evolution times.
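
Schematically - and without reproducing the specific filter or coefficients used in the paper - the idea is to approximate a non-unitary ground-state filter by a weighted sum of ordinary time-evolution operators,

\[
\hat{P} \;\approx\; \sum_{k} c_k \, e^{-i\hat{H}t_k},
\qquad
\langle\psi_0|\hat{P}^{\dagger}\hat{P}|\psi_0\rangle
= \sum_{k,k'} c_k^{*} c_{k'}\,\langle\psi_0|\,e^{-i\hat{H}(t_{k'}-t_k)}\,|\psi_0\rangle,
\]

so that every quantity needed to characterise the ground state reduces to running real-time evolution for different durations and measuring wavefunction overlaps - exactly the operations an analog simulator provides.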

Importantly, the study suggests that the dynamics of a quantum system are a valuable resource for computation, as the ability to propagate the system, paired with overlap measurements, gives access to the low-temperature spectrum that defines the system's behaviour.

The findings establish a framework for dynamics-based quantum simulation using programmable quantum simulators, and serve as quantum software for many well-controlled quantum lattice systems in which the large number of atoms (~100) precludes classical simulation.

This in turn can revolutionize our understanding of complex condensed matter systems and chemistry.

Credit: 
University of Exeter

The Lancet Child & Adolescent Health: Mental health problems persist in adolescents five years after bariatric surgery despite substantial weight loss

Long-term study of adolescent mental health following bariatric surgery suggests that although the surgery can improve many aspects of health, alleviation of mental health problems should not be expected, and a multidisciplinary team should offer long-term mental health support after the operation.

Five years after weight-loss surgery, despite small improvements in self-esteem and moderate improvements in binge eating, adolescents did not see improvements in their overall mental health, compared to peers who received conventional obesity treatment, according to a study in Sweden with 161 participants aged 13-18 years published in The Lancet Child & Adolescent Health journal.

The number of bariatric procedures in adolescents with severe obesity is rapidly increasing. Previous research has shown that bariatric surgery is safe and effective in adolescents, and the 2018 guidelines from the American Society for Metabolic and Bariatric Surgery state that this type of surgery should be considered the standard of care in adolescents with severe obesity.

However, a substantial minority of adolescents with severe obesity have coexisting mental health problems and little is known about the long-term mental health consequences of bariatric surgery. Those who seek surgery might hope to see symptoms improve as a result of weight loss, but long-term outcomes are relatively unknown. A 2018 study found no alleviation of mental health disorders when adolescents were questioned two years following surgery. The new study is the first to take a longer-term view and to use records of psychiatric drug prescriptions and of specialist care for mental health disorders, in combination with self-reported data.

"The transition from adolescence to young adulthood is a vulnerable time, not least in adolescents with severe obesity," says Dr Kajsa Järvholm from Skåne University Hospital, Sweden. "Our results provide a complex picture, but what's safe to say is that weight-loss surgery does not seem to improve general mental health. We suggest that adolescents and their caregivers should be given realistic expectations in advance of embarking on a surgical pathway, and that as adolescents begin treatment, long-term mental health follow-up and support should be a requirement." [2]

In the study, participants were aged 13-18 years before treatment started. The researchers recruited 81 Swedish adolescents with severe obesity who underwent Roux-en-Y gastric bypass surgery between 2006 and 2009. Their average BMI before treatment was 45. As a control group, the authors recruited 80 adolescents with an average BMI of 42, who were given conventional lifestyle obesity management, including cognitive behavioural therapy and family therapy.

Data on psychiatric drugs dispensed and specialist treatment for mental and behavioural disorders before treatment and five years afterwards were retrieved from national registers with individual data. In addition, participants in the surgical group reported their mental health problems (such as self-esteem, mood, binge eating and other eating behaviours) using a series of questionnaires before surgery, and one, two and five years afterwards. The number of participants who completed questionnaires declined to 75 by year five.

Before treatment, the proportion of adolescents prescribed psychiatric drugs was similar in both groups and substantially higher than the proportion in the general population (20% in the surgical group and 15% in the control group, compared to 2% in the general population). Five years after surgery, the proportion of adolescents prescribed psychiatric drugs increased in both groups, and both groups also saw an increase in the proportion who received specialist mental health care. However, adolescents who had surgery went on to have significantly more hospital-based inpatient and outpatient care for mental health problems than those in the control group (36%, or 29 of 81 participants, compared to 21%, or 17 of 80). The authors explain that this does not necessarily mean that surgery exacerbates mental health problems. Instead, it could be that adolescents who undergo surgery are monitored more closely, and therefore get better access to mental health care.

Five years after treatment, self-reported measures of mental health improved slightly in the surgical group. Self-esteem improved from an average score of 19 pre-surgery (from a possible score of 0-30, with higher score indicating higher self-esteem) to a score of 22 at five years, while binge eating, emotional eating and uncontrolled eating were all reported less often.

Overall mood had not improved at the five-year follow-up. The average score was unchanged and 72% (54 of 75) of the adolescents and young adults questioned scored below the average of those of a similar age in the general population.

The authors highlight several limitations to their study. For example, it was not randomised and there were not enough adolescents with the same degree of severe obesity to perfectly match the control and surgical groups. Adolescents in the surgical group had a slightly higher BMI before treatment and were slightly older, so psychosocial problems may have been more prevalent. The small sample size might have prevented the researchers from detecting other important differences between the groups.

Writing in a linked Comment, its lead author Dr Stasia Hadjiyannakis (who was not involved in the study), from the University of Ottawa, Canada, says: "The high burden of mental health risk in youth with severe obesity needs to be better understood. Bariatric surgery does not seem to alleviate these risks, despite resulting in significant weight loss and other physical health benefits. Those living with obesity encounter weight bias and discrimination, which in turn can negatively affect mental health. We must advocate for and support strategies aimed at decreasing weight bias and discrimination to begin to address mental health risk through upstream action."

Credit: 
The Lancet

Low power metal detector senses magnetic fingerprints

image: The magnetic gradient full-tensor fingerprints-based security system using anisotropic magnetoresistance sensor arrays.

Image: 
Huan Liu

WASHINGTON, January 21, 2020 -- Most traditional electromagnetic methods for detecting hidden metal objects involve systems that are heavy, bulky and require lots of electricity.

Recent studies have shown that metallic objects have their own magnetic fingerprints based on size, shape and physical composition. In AIP Advances, from AIP Publishing, scientists look to leverage these observations to create a smaller, cheaper system that is just as effective as its larger counterparts.

The researchers demonstrated a new type of magnetic metal-detection security system that uses magnetic fingerprinting to identify hidden metal objects more efficiently. Drawing on an emerging field known as weak magnetic detection, the device identified a wide variety of metallic objects, ranging from cellphones to hammers.

The early results helped establish magnetic fingerprinting as a feasible path forward in security detection.

"The achievement of applying magnetic anomaly detection technology in magnetic sensor arrays promises smart public security sensing systems with low cost, small size and low power budgets," said Huan Liu, an author on the paper. "Unlike other electromagnetic detection methods, it doesn't require someone to walk through a door framework and can be built in a compact size."

Most of today's security metal detectors only function when the user is actively searching for a metallic object, often by using some form of radiation. Such active screening requires bulky machines that demand a lot of energy.

In contrast, the group's device can operate in a passive mode, significantly reducing the energy required for operation. This could also make the technology portable, removing the need for the constrained, doorway-style metal detectors the public is most familiar with.

The approach integrates three arrays of anisotropic magnetoresistance sensors with a microcontroller, computer and battery. The researchers also developed a computing workflow that takes the 2D magnetic data gleaned from the device, removes noise and extracts each object's fingerprint.
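
As an illustration only - the filtering and matching steps below are generic assumptions, not the authors' published algorithm - such a workflow might look like this in Python:

    import numpy as np

    def extract_fingerprint(raw_field, kernel=5):
        """Smooth a 2D magnetic-field map and return a few crude summary features."""
        pad = kernel // 2
        padded = np.pad(raw_field.astype(float), pad, mode="edge")
        smooth = np.zeros_like(raw_field, dtype=float)
        for i in range(raw_field.shape[0]):          # simple moving-average filter
            for j in range(raw_field.shape[1]):      # to suppress sensor noise
                smooth[i, j] = padded[i:i + kernel, j:j + kernel].mean()
        # "Fingerprint" here is just peak amplitude, anomaly area and total signal.
        area = int((np.abs(smooth) > 0.5 * np.abs(smooth).max()).sum())
        return np.array([smooth.max(), area, smooth.sum()])

    def match_object(fingerprint, library):
        """Return the library entry whose stored fingerprint is closest (Euclidean)."""
        return min(library, key=lambda name: np.linalg.norm(library[name] - fingerprint))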

The approach was able to identify fingerprints for objects larger than 16 inches and identify multiple objects separated by less than 8 inches.

"The major challenge in designing a weak magnetic detection-based public security system may lie in the difficulty to distinguish the weak object signals, like scissors and hammers, from unknown interference, which would decrease the signal-to-noise ratio and the range of the detection zone," Liu said.

The group next hopes to better optimize the device's ability to accurately identify fingerprints from farther distances.

Credit: 
American Institute of Physics

Study finds flooding damage to levees is cumulative -- and often invisible

Recent research finds that repeated flooding events have a cumulative effect on the structural integrity of earthen levees, suggesting that the increase in extreme weather events associated with climate change could pose significant challenges for the nation's aging levee system.

"Traditionally, levee safety inspections are based on visible signs of distress on the surface," says Rowshon Jadid, a Ph.D. candidate at North Carolina State University and first author of a paper describing the research. "What we've found is that as a levee goes through repeated flood events, it gets weaker - but the damage may be invisible to the naked eye."

"This is particularly relevant now, since we're seeing severe flooding more often," says Brina Montoya, co-author of the paper and an associate professor of civil, construction and environmental engineering at NC State.

The study draws on data from the Princeville levee in North Carolina, as well as flooding associated with hurricanes Floyd and Matthew.

Levees are earth embankments that protect against flooding - and there are a lot of them. According to the U.S. Army Corps of Engineers, there are 45,703 levee structures in the United States, stretching for 27,881 miles. On average, they're 56 years old.

"Because these levees are aging, and we have limited resources available to maintain them, we need to determine which levees should be prioritized for rehabilitation efforts that will reduce their risk of failure," Jadid says.

"There are inspection regimes in place, where officials look for signs of distress and structural damage," says Mohammed Gabr, co-author of the paper and Distinguished Professor of Civil Engineering and Construction at NC State. "However, some of these visual signs can be missed and, in many cases, by the time we can see the problem, it's either too late or too expensive to fix.

"The work we've published here demonstrates the increased risk of levee failure with the repeated flooding cycles and serves to help the profession with identifying levees with the highest risk of failure before signs of distress are visually observed."

Researchers are in the process of using this study's findings, as well as additional data, to develop tools that can facilitate more accurate identification of levee damage and the development of more accurate failure criteria.

The paper, "Effect of repeated rise and fall of water level on seepage-induced deformation and related stability analysis of Princeville levee," is published in the journal Engineering Geology. The paper was co-authored by Victoria Bennett of Rensselaer Polytechnic Institute.

Credit: 
North Carolina State University

Study verifies a missing piece to urban air quality puzzle

image: Rishabh Shah and his 'oxidative flow reactor,' which speeds up atmospheric processing to quickly capture air's full potential to form secondary particles.

Image: 
CMU College of Engineering

Despite the prominent health threat posed by fine particulate pollution, fundamental aspects of its formation and evolution continue to elude scientists.

This is true especially for the organic fraction of fine particles (also called aerosol), much of which forms as organic gases are oxidized by the atmosphere. Computer models under-predict this so-called "secondary" organic aerosol (SOA) in comparison to field measurements, indicating that the models are either missing some important sources or failing to describe the physical processes that lead to SOA formation.

New research from Carnegie Mellon University in collaboration with the National Oceanic and Atmospheric Administration (NOAA) sheds light on an under-appreciated source of SOA that may help close this model-measurement gap. Published in Environmental Science & Technology, the study shows that volatile organic compounds (VOCs) not traditionally considered may contribute as much or more to urban SOA as long-accounted for sources like vehicle emissions and respired gases from tree leaves.

"Our experiment shows that, in areas where you have a lot of people, you can only explain about half of the SOA seen in the field with the traditional emissions from vehicles and trees," said Albert Presto, a professor in mechanical engineering and the study's corresponding author. "We attribute that other half to these non-traditional VOCs."

In 2018, researchers from NOAA made a splash in the journal Science when they detailed how non-traditional VOCs represent half of all VOCs in the urban atmosphere in U.S. cities. Non-traditional VOCs originate from a slew of different chemicals, industries, and household products, including pesticides, coatings and paints, cleaning agents, and even personal care products like deodorants. Such products typically contain organic solvents whose evaporation leads to substantial atmospheric emissions of VOCs.

"It's a lot of everyday stuff that we use," said Presto. "Anything you use that is scented contains organic molecules, which can get out into the atmosphere and react" where it can form SOA.

The prevalence of these VOCs represents a paradigm shift in the urban SOA picture. The transportation sector had long been the dominant source of VOCs in urban air, but vehicle emissions in the U.S. have decreased drastically (by up to 90%) in recent decades due to tailpipe regulations, even as fuel consumption has risen. As transportation-related VOCs have faded in prominence, non-traditional VOCs have begun to make up a greater relative contribution to the urban atmosphere. While NOAA's research alerted the atmospheric science community to the magnitude of non-traditional VOCs in urban environments, the researchers could only hypothesize that these gases were likely important for SOA formation; the idea still needed to be tested.

Testing how much SOA forms from these non-traditional VOCs is not an easy task, however. SOA formation in the atmosphere plays out over the course of several days, making it difficult to track the journey of emitted gases as they are dispersed by winds and begin reacting with sunlight and other oxidants.

Rishabh Shah, a graduate student who studied with Presto and now works at NOAA, constructed a reactor to evaluate the full potential for SOA formation within a sample of air without having to track that air over time.

"The reactor is kind of like an app on your smartphone for SOA formation," said Shah. "You take your picture and the app shows you what you would look like a decade from now."

The reactor accelerates the meandering journey a gas takes by bombarding it with oxidants at much higher concentrations than are found in the atmosphere. This physically simulates in just a few seconds all of the reactions a gas molecule is subject to in the atmosphere over the course of a week. In just a moment's time, Shah's reactor can evaluate the full potential of the air it samples to form SOA.
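
The underlying bookkeeping is OH exposure: oxidant concentration multiplied by residence time. The numbers below are typical of such reactors in general and are assumptions for illustration, not values reported in this study:

    # Equivalent photochemical age from OH exposure (concentration x time).
    # All values are illustrative assumptions, not measurements from this study.
    OH_AMBIENT = 1.5e6        # molecules/cm^3, typical daytime-average atmospheric OH
    oh_reactor = 1.0e10       # molecules/cm^3, assumed OH level inside the reactor
    residence_time_s = 100    # assumed time the sampled air spends in the reactor

    exposure = oh_reactor * residence_time_s              # molecules*s/cm^3
    equivalent_days = exposure / (OH_AMBIENT * 86400)     # 86400 seconds per day
    print(f"~{equivalent_days:.1f} days of equivalent atmospheric aging")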

The team mounted their reactor in a van, creating a mobile platform from which they could access air from different settings containing varying levels of non-traditional VOCs. These locations included sites downwind from a large industrial facility, next to a construction site, within the deep 'street canyons' created by the skyscrapers of a city center, and among low-rise buildings of an urban neighborhood.

In places with large amounts of non-traditional VOCs, the reactor formed large amounts of SOA. These included both the downtown street canyons and the urban low-rise neighborhood, places where evaporation of consumer products like deodorants and conditioners is high, especially in the morning. Advanced gas analyzers aboard the mobile platform allowed the team to detect the presence of many of these non-traditional VOCs.

Importantly, in these locations the standard state-of-the-art computer models could not predict the full amount of SOA they observed in their reactor. However, in other environments with fewer non-traditional VOCs, the model was able to accurately predict how much SOA formed in the reactor.

Together, these pieces of evidence form a compelling argument that non-traditional VOC emissions are responsible for a significant amount of urban SOA. Presto estimates that these non-traditional emissions have roughly the same contribution as transportation and biosphere emissions combined, in line with the hypothesis put forward by NOAA.

"Traditionally, we've focused a lot on power plants and vehicles for air quality, which have gotten way cleaner in the U.S.." said Presto. "What that means is that now, a substantial amount of the SOA is coming from this other 'everyday, everywhere' category that hasn't really been considered until recently."

Credit: 
College of Engineering, Carnegie Mellon University

Advanced polymers help streamline water purification, environmental remediation

image: Professor Xiao Su, left, graduate student Stephen Cotty, center, and postdoctoral researcher Kwiyong Kim have developed an energy-efficient device that selectively absorbs a highly toxic form of arsenic in water and converts it into a far less toxic form.

Image: 
Photo by Fred Zwicky

CHAMPAIGN, Ill. -- It takes a lot of energy to collect, clean and dispose of contaminated water. Some contaminants, like arsenic, occur in low concentrations, calling for even more energy-intensive selective removal processes.

In a new paper, researchers address this water-energy relationship by introducing a device that can purify and remediate arsenic-contaminated water in a single step. Using specialized polymer electrodes, the device can reduce arsenic in water by over 90% while using roughly 10 times less energy than other methods.

The findings of the new study are published in the journal Advanced Materials.

Arsenic is a naturally occurring element that enters aquifers, streams and lakes when water reacts with arsenic-containing rocks and is considered highly toxic, the researchers said. This is a global issue affecting more than 200 million people in 70 countries.

Not all arsenic is the same, said Xiao Su, a chemical and biomolecular engineering professor at the University of Illinois who directed the study. The most dangerous form of arsenic, known as arsenite, is highly reactive with biological tissues, but converts to a less toxic form, called arsenate, through a simple oxidation reaction.

"We can remove arsenite from water using absorbents, specialized membranes or evaporation, but these are all very energy-intensive processes that ultimately leave behind a lot of toxic waste," Su said. "By having a device that can capture arsenite with a high selectivity and convert it to a less toxic form, we can reduce the toxicity of the waste while purifying the water."

The proof-of-concept device works by integrating the contaminant separation and reaction steps within a single unit with an electrocatalytic cell - similar to a battery - using redox-active polymers. When the contaminated water enters the device, the first polymer electrode selectively captures the arsenite and sends it to the other polymer electrode, where it is stripped of two of its electrons - or oxidized - to form arsenate. Pure water then leaves the device, and the arsenate waste is concentrated for further disposal, Su said.
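
Written out as a half-reaction (a generic textbook form; the exact arsenic speciation at the electrode depends on pH and is not specified in the article), the two-electron oxidation is

\[
\mathrm{H_3AsO_3 + H_2O \;\longrightarrow\; H_3AsO_4 + 2\,H^{+} + 2\,e^{-}},
\]

with arsenic moving from the +3 oxidation state (arsenite) to the far less toxic +5 state (arsenate).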

"The process is powered by electrochemical reactions, so the device does not require a lot of electricity to run and allows for the reuse of the electrodes based only on electrochemical potential," Su said. "Combining the separation and reaction steps into one device is an example of what we call processes intensification, which we believe is an important approach for addressing environmental concerns related to energy and water - in particular, the amount of energy it takes to purify and remediate contaminated water."

In addition to improved sustainability and energy efficiency, this electrochemical approach has advantages for field deployment, the researchers said. Users can run the device using solar panels in areas where electricity is scarce - like in parts of rural Bangladesh, a country where over 60% of the population is affected by arsenic-contaminated water, the researchers said.

There are challenges to address before the device is ready for real-world implementation. "We need to increase the stability of the electrodes because this process will need to be cycled many times while running," Su said. "We're using very specialized, highly advanced polymer materials for the electrodes. However, we need to make sure we design them to be not only highly selective for arsenic, but also very stable and robust so that they do not need to be replaced constantly. This will require further chemical development to overcome."

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

Air pollution in New York City linked to wildfires hundreds of miles away

image: Flames rise from an experimental forest fire in Canada's remote Northwest Territories.

Image: 
Stefan Doerr via Immageo, CC BY-ND 3.0

A new study shows that air pollutants from the smoke of fires as far away as Canada and the southeastern U.S. traveled hundreds of miles over several days to reach Connecticut and New York City, where they caused significant increases in pollution concentrations.

For the study, published 21 January in the European Geosciences Union (EGU) journal Atmospheric Chemistry and Physics, researchers in the lab of Drew Gentner, associate professor of chemical & environmental engineering, monitored the air quality at the Yale Coastal Field Station in Guilford, CT and four other sites in the New York metropolitan area. In August of 2018, they observed two spikes in the presence of air pollutants - both coinciding with New York-area air quality advisories for ozone. The pollutants were the kind found in the smoke of wildfires and controlled agricultural burning. Using three types of evidence - data from the observation sites, smoke maps from satellite imagery, and backtracking 3-D models of air parcels (both the maps and models were produced by the National Oceanic and Atmospheric Administration) - the researchers traced the pollutants' origin in the first event to fires on the western coast of Canada, and in the second event to the southeastern U.S.

Biomass burning, which occurs on a large scale during wildfires and some controlled burns, is a major source of air pollutants that impact air quality, human health, and climate. These events release numerous gases into the atmosphere and produce particulate matter (PM), including black carbon (BC) and other primary organic aerosols (POA) with a diameter of less than 2.5 micrometers. Known as PM2.5, it has been shown to have particularly serious health effects when inhaled.

While more reactive components are often chemically transformed closer to their place of origin, PM2.5 tends to last longer. In the case of this study, that allowed much of it to travel from the fires to the monitoring sites - a period ranging from a few days to about a week.

"Given the sensitivity of people to the health effects emerging from exposure to PM2.5, this is certainly something that needs to be considered as policy-makers put together long-term air quality management plans," Gentner said.

The impacts of wildfire smoke will likely become increasingly important in the coming years.

"When people are making predictions about climate change, they're predicting increases in wildfires, so this sort of pollution is likely going to become more common," said lead author Haley Rogers, who was an undergraduate student when the study was conducted. "So when people are planning for air pollution and health impacts, you can't just address local sources."

Although the levels of PM2.5 decreased over time and distance, co-author Jenna Ditto, a graduate student in Gentner's lab, noted that awareness of its presence in the atmosphere is critical to public health.

"Studies indicate that there are no safe levels of PM2.5, so typically any level of it is worth taking a look at," she said.

Credit: 
European Geosciences Union

Less may be more in next-gen batteries

image: Rice University postdoctoral fellow Anulekha Haridas holds a full-cell lithium-ion battery built to test the effect of an alumina coating on the cathode. The nanoscale coating protects cathodes from degrading.

Image: 
Jeff Fitlow/Rice University

HOUSTON - (Jan. 21, 2020) - The process of developing better rechargeable batteries may be cloudy, but there's an alumina lining.

A slim layer of the metal oxide applied to common cathodes by engineers at Rice University's Brown School of Engineering revealed new phenomena that could lead to batteries that are better geared toward electric cars and more robust off-grid energy storage.

The study in the American Chemical Society's ACS Applied Energy Materials describes a previously unknown mechanism by which lithium gets trapped in batteries, thus limiting the number of times it can be charged and discharged at full power.

But that characteristic does not dampen hopes that in some situations, such batteries could be just right.

The Rice lab of chemical and biomolecular engineer Sibani Lisa Biswal found a sweet spot in the batteries that, by not maxing out their storage capacity, could provide steady and stable cycling for applications that need it.

Biswal said conventional lithium-ion batteries utilize graphite-based anodes that have a capacity of less than 400 milliamp hours per gram (mAh/g), but silicon anodes have potentially 10 times that capacity. That comes with a downside: Silicon expands as it alloys with lithium, stressing the anode. By making the silicon porous and limiting its capacity to 1,000 mAh/g, the team's test batteries provided stable cycling with still-excellent capacity.

"Maximum capacity puts a lot of stress on the material, so this is a strategy to get capacity without the same degree of stress," Biswal said. "1,000 milliamp hours per gram is still a big jump."

The team led by postdoctoral fellow Anulekha Haridas tested the concept of pairing the porous, high-capacity silicon anodes (in place of graphite) with high-voltage nickel manganese cobalt oxide (NMC) cathodes. The full cell lithium-ion batteries demonstrated stable cyclability at 1,000 mAh/g over hundreds of cycles.

Some cathodes had a 3-nanometer layer of alumina (applied via atomic layer deposition), and some did not. Those with the alumina coating protected the cathode from breaking down in the presence of hydrofluoric acid, which forms if even minute amounts of water invade the liquid electrolyte. Testing showed the alumina also accelerated the battery's charging speed, reducing the number of times it can be charged and discharged.

There appears to be extensive trapping as a result of the fast lithium transport through alumina, Haridas said. The researchers already knew of possible ways silicon anodes trap lithium, making it unavailable to power devices, but she said this is the first report of the alumina itself absorbing lithium until saturated. At that point, she said, the layer becomes a catalyst for fast transport to and from the cathode.

"This lithium-trapping mechanism effectively protects the cathode by helping maintain a stable capacity and energy density for the full cells," Haridas said.

Co-authors are Rice graduate students Quan Anh Nguyen and Botao Farren Song, and Rachel Blaser, a research and development engineer at Ford Motor Co. Biswal is a professor of chemical and biomolecular engineering and of materials science and nanoengineering.

Ford's University Research Program supported the research.

Credit: 
Rice University

Native Americans did not make large-scale changes to environment prior to European contact

image: The long-held belief that native people used fire to create a diverse landscape of woodlands, grasslands, heathlands, and shrublands in New England has led to a widespread use of prescribed fire as a conservation tool. Research by Oswald and colleagues indicates that these openlands actually arose following European contact, deforestation, and agricultural expansion. These landscapes and their critical habitats and species are best maintained through agricultural practices like grazing, as seen here on conservation land in Tisbury, Martha's Vineyard.

Image: 
David Foster

BINGHAMTON, N.Y. - Contrary to long-held beliefs, humans did not make major changes to the landscape prior to European colonization, according to new research conducted in New England featuring faculty at Binghamton University, State University of New York. These new insights into the past could help to inform how landscapes are managed in the future.

A theory that Native Americans actively managed landscapes, using fire to clear forests, has been growing in popularity in recent decades. Binghamton archaeologist Elizabeth Chilton and fellow researchers at Emerson College, Harvard University, University of Wyoming and The Public Archeology Laboratory, a New England cultural resource management firm, sought to explore the following key question: Over the past 14,000 years (since the last glaciation) were humans the major drivers of environmental change in southern New England or were they responding to changes in the climate?

"The paleo-climate, paleo-ecology and archaeological records suggest that native peoples were not modifying their immediate environments to a great degree," said Chilton. "And they certainly were not doing so with large-scale fire or clear-cutting of trees. The widespread and intensive deforestation and agriculture brought by Europeans in the 17th century was in clear contrast to what had come before. Previous conservation practices had been based on a presumption that Native Americans manipulated their environments using fire, and this research does not support that interpretation."

The team examined data from multiple sources over the past 14,000 years - pollen records, fire history (from charcoal in lake cores), hydrology and archaeology - and combined these data using GIS to look for correlated changes across the data sets over time. This marked the first time that archaeologists and ecologists had worked together to analyze New England land use and climate history on a regional scale and over the entirety of pre-colonial Native American history.

"Much to my surprise, we found that, even though we know that Native Americans were in New England for at least 14,000 years with, at certain times in history, fairly large population densities, the ecological signal was essentially invisible," said Chilton. "If one did not know there had been humans on the landscape, it would be almost impossible to detect them on a regional scale. After the arrival of Europeans, large-scale cutting and burning of forests is very clear in the ecological record."

According to the researchers, this research provides important lessons for sustainability and contemporary conservation practice.

"Today, New England's species and habitat biodiversity is globally unique, and this research transforms our thinking and rationale for the best ways to maintain it. It also points to the importance of historical research to help us interpret modern landscapes and conserve them effectively into the future," said lead author Wyatt Oswald from Emerson College.

"Ancient native people thrived under changing forest conditions not by intensively managing them but by adapting to them and the changing environment," added Chilton.

This research was primarily concentrated in coastal New England. The researchers also have data from the interior of New England and are interested in comparing the data sets both within the region and with other regions with comparable data.

Credit: 
Binghamton University

Dozens of non-oncology drugs can kill cancer cells

video: What if we could take thousands of drugs already approved to safely treat disease, as well as other compounds that have been studied as potential drugs, and find new uses for these old medicines?

Image: 
Scott Sassone, Broad Institute

Drugs for diabetes, inflammation, alcoholism -- and even for treating arthritis in dogs -- can also kill cancer cells in the lab, according to a study by scientists at the Broad Institute of MIT and Harvard and Dana-Farber Cancer Institute. The researchers systematically analyzed thousands of already developed drug compounds and found nearly 50 that have previously unrecognized anti-cancer activity. The surprising findings, which also revealed novel drug mechanisms and targets, suggest a possible way to accelerate the development of new cancer drugs or repurpose existing drugs to treat cancer.

"We thought we'd be lucky if we found even a single compound with anti-cancer properties, but we were surprised to find so many," said Todd Golub, chief scientific officer and director of the Cancer Program at the Broad, Charles A. Dana Investigator in Human Cancer Genetics at Dana-Farber, and professor of pediatrics at Harvard Medical School.

The new work appears in the journal Nature Cancer. It is the largest study yet to employ the Broad's Drug Repurposing Hub, a collection that currently comprises more than 6,000 existing drugs and compounds that are either FDA-approved or have been proven safe in clinical trials (at the time of the study, the Hub contained 4,518 drugs). The study also marks the first time researchers screened the entire collection of mostly non-cancer drugs for their anti-cancer capabilities.

Historically, scientists have stumbled upon new uses for a few existing medicines, such as the discovery of aspirin's cardiovascular benefits. "We created the repurposing hub to enable researchers to make these kinds of serendipitous discoveries in a more deliberate way," said study first author Steven Corsello, an oncologist at Dana-Farber, a member of the Golub lab, and founder of the Drug Repurposing Hub.

The researchers tested all the compounds in the Drug Repurposing Hub on 578 human cancer cell lines from the Broad's Cancer Cell Line Encyclopedia (CCLE). Using a molecular barcoding method known as PRISM, which was developed in the Golub lab, the researchers tagged each cell line with a DNA barcode, allowing them to pool several cell lines together in each dish and more quickly conduct a larger experiment. The team then exposed each pool of barcoded cells to a single compound from the repurposing library, and measured the survival rate of the cancer cells.
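
Conceptually, the readout reduces to counting each cell line's barcode in a drug-treated pool versus a control pool. A minimal sketch of that bookkeeping (hypothetical counts and threshold, not the Broad's actual PRISM pipeline):

    # Relative survival per barcoded cell line: barcode counts after drug treatment
    # divided by counts from an untreated control pool (all numbers are invented).
    control_counts = {"LINE_A": 12000, "LINE_B": 9500, "LINE_C": 15000}
    treated_counts = {"LINE_A": 11800, "LINE_B": 900, "LINE_C": 14600}

    for line, n_control in control_counts.items():
        survival = treated_counts[line] / n_control
        call = "sensitive" if survival < 0.3 else "unaffected"
        print(f"{line}: relative survival {survival:.2f} ({call})")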

They found nearly 50 non-cancer drugs -- including those initially developed to lower cholesterol or reduce inflammation -- that killed some cancer cells while leaving others alone.

Some of the compounds killed cancer cells in unexpected ways. "Most existing cancer drugs work by blocking proteins, but we're finding that compounds can act through other mechanisms," said Corsello. Some of the four-dozen drugs he and his colleagues identified appear to act not by inhibiting a protein but by activating a protein or stabilizing a protein-protein interaction. For example, the team found that nearly a dozen non-oncology drugs killed cancer cells that express a protein called PDE3A by stabilizing the interaction between PDE3A and another protein called SLFN12 -- a previously unknown mechanism for some of these drugs.

These unexpected drug mechanisms were easier to find using the study's cell-based approach, which measures cell survival, than through traditional non-cell-based high-throughput screening methods, Corsello said.

Most of the non-oncology drugs that killed cancer cells in the study did so by interacting with a previously unrecognized molecular target. For example, the anti-inflammatory drug tepoxalin, originally developed for use in people but approved for treating osteoarthritis in dogs, killed cancer cells by hitting an unknown target in cells that overexpress the protein MDR1, which commonly drives resistance to chemotherapy drugs.

The researchers were also able to predict whether certain drugs could kill each cell line by looking at the cell line's genomic features, such as mutations and methylation levels, which were included in the CCLE database. This suggests that these features could one day be used as biomarkers to identify patients who will most likely benefit from certain drugs. For example, the alcohol dependence drug disulfiram (Antabuse) killed cell lines carrying mutations that cause depletion of metallothionein proteins. Compounds containing vanadium, originally developed to treat diabetes, killed cancer cells that expressed the sulfate transporter SLC26A2.
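
In spirit, such a biomarker association can be checked by grouping cell lines on a genomic feature and comparing their measured viability under a drug; a toy sketch with invented numbers, not data from the study:

    import statistics

    # Hypothetical viability (fraction of cells surviving) under a repurposed drug,
    # grouped by whether each cell line carries the candidate biomarker mutation.
    viability_by_group = {
        "biomarker_mutant": [0.15, 0.22, 0.18, 0.30],
        "wild_type":        [0.85, 0.78, 0.90, 0.88],
    }
    for group, values in viability_by_group.items():
        print(group, "mean viability:", round(statistics.mean(values), 2))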

"The genomic features gave us some initial hypotheses about how the drugs could be acting, which we can then take back to study in the lab," said Corsello. "Our understanding of how these drugs kill cancer cells gives us a starting point for developing new therapies."

The researchers hope to study the repurposing library compounds in more cancer cell lines and to grow the hub to include even more compounds that have been tested in humans. The team will also continue to analyze the trove of data from this study, which have been shared openly (https://depmap.org) with the scientific community, to better understand what's driving the compounds' selective activity.

"This is a great initial dataset, but certainly there will be a great benefit to expanding this approach in the future," said Corsello.

Credit: 
Broad Institute of MIT and Harvard

Magnetized molecules used to monitor breast cancer

A new type of scan that involves magnetising molecules allows doctors to see in real-time which regions of a breast tumour are active, according to research funded by Cancer Research UK and published in Proceedings of the National Academy of Sciences today (Monday).

This is the first time researchers have demonstrated that this scanning technique, called carbon-13 hyperpolarised imaging, can be used to monitor breast cancer.

The team based at the Cancer Research UK Cambridge Institute and the Department of Radiology, University of Cambridge, tested the technique in seven patients from Addenbrooke's Hospital with various types and grades of breast cancer before they had received any treatment.

They used the scan to measure how fast the patients' tumours were metabolising a naturally occurring molecule called pyruvate, and were able to detect differences in the size, type and grade of tumours - a measure of how fast growing, or aggressive the cancer is.

The scan also revealed in more detail the 'topography' of the tumour, detecting variations in metabolism between different regions of the same tumour.

Professor Kevin Brindle, lead researcher from the Cancer Research UK Cambridge Institute, said: "This is one of the most detailed pictures of the metabolism of a patient's breast cancer that we've ever been able to achieve. It's like we can see the tumour 'breathing'.

"Combining this with advances in genetic testing, this scan could in the future allow doctors to better tailor treatments to each individual, and detect whether patients are responding to treatments, like chemotherapy, earlier than is currently possible".

Hyperpolarised carbon-13 pyruvate is an isotope-labelled form of the molecule that is slightly heavier than the naturally occurring pyruvate which is formed in our bodies from the breakdown of glucose and other sugars.

In the study, the scientists 'hyperpolarised', or magnetised, carbon-13 pyruvate by cooling it to about one degree above absolute zero (-272°C) and exposing it to extremely strong magnetic fields and microwave radiation. The frozen material was then thawed and dissolved into an injectable solution.

Patients were injected with the solution and then received an MRI scan at Addenbrooke's Hospital. Magnetising the carbon-13 pyruvate molecules increases the signal strength by 10,000 times so that they are visible on the scan.

The researchers used the scan to measure how fast pyruvate was being converted into a substance called lactate.

Our cells convert pyruvate into lactate as part of the metabolic processes that produce energy and the building blocks for making new cells. Tumours have a different metabolism to healthy cells, and so produce lactate more quickly. This rate also varies between tumours, and between different regions of the same tumour.

The researchers showed that monitoring this conversion in real-time could be used to infer the type and aggressiveness of the breast cancer.

The team now hopes to trial this scan in larger groups of patients, to see if it can be reliably used to inform treatment decisions in hospitals.

Breast cancer is the most common type of cancer in the UK, with around 55,000 new cases each year. 80% of people with breast cancer survive for 10 years or more; however, for some subtypes, survival is much lower.

Professor Charles Swanton, Cancer Research UK's chief clinician, said: "This exciting advance in scanning technology could provide new information about the metabolic status of each patient's tumour upon diagnosis, which could help doctors to identify the best course of treatment.

"And the simple, non-invasive scan could be repeated periodically during treatment, providing an indication of whether the treatment is working. Ultimately, the hope is that scans like this could help doctors decide to switch to a more intensive treatment if needed, or even reduce the treatment dose, sparing people unnecessary side effects".

Credit: 
Cancer Research UK

Tracking the scent of warming tundra

image: Mini-ecosystems were collected from the tundra in Abisko, Subarctic Sweden.

Image: 
Riikka Rinnan

Climate change is causing the subarctic tundra to warm twice as fast as the global average, and this warming is speeding up the activity of the plant life. Researchers from the University of Copenhagen, Denmark, and the Helmholtz Zentrum München, Germany, have now elucidated how this warming affects the tundra ecosystem and the origin of an increased amount of volatile compounds released from the tundra.

The results are published in the renowned scientific journal Global Change Biology.

The creeping bushes and lush mosses of the tundra emit a scent that consists of a complex mixture of volatile organic compounds (VOCs). VOCs are gases that include thousands of natural chemicals, including the fragrances in essential oils. The VOCs protect plant cells from environmental stresses, but they are also chemical messengers and function as a "language" between plants and animals. By releasing VOCs, plants can directly repel and attract insects, or warn neighboring plants of impending dangers such as insect infestation.

Field studies led by the Copenhagen team have shown that a temperature rise of just a few degrees doubles or triples the amount of VOCs released from tundra vegetation. Until now, it was not known whether this "gas bomb" is merely a consequence of the temperature-related release of vaporized essential oils stored in plant tissue or whether the enzymatic synthesis of VOCs is stimulated in plants.

- "Our new results show that the share of the VOC release from direct biosynthesis increases significantly with global warming. This leads to a shift in the composition and amount of VOCs released towards more reactive hydrocarbons", says Professor Riikka Rinnan from the University of Copenhagen.

Rinnan's team collected a large number of mini-ecosystems - blocks of tundra including naturally growing plants and the soil below them to the depth of 10 cm - and brought them to the unique phytotron facility at the Helmholtz Zentrum München, Germany. The facility has some of the best climate chambers for mimicking a natural environment, which can be used for experimentation with different climate scenarios. The mini-tundra ecosystems were then cultivated under the current or simulated future arctic climate, while monitoring the released volatiles. To study the processes taking place in the plants and the ecosystem, the mini-ecosystems were exposed to manipulated air, in which the CO2 absorbed by plants during photosynthesis was marked with isotopes so that it could be traced.

- "13C is a naturally occurring stable isotope of carbon. By feeding the plants with enriched levels of 13C-labelled CO2, we can follow the fate of atmospheric carbon dioxide. We simulated the future climate and traced CO2 from the atmosphere into the subarctic ecosystem", says the first author of the study, Dr. Andrea Ghirardo, from the phytotron facility in Munich.

Under these conditions the researchers could follow the carbon, tracking where the labelled atoms ended up in the different plant tissues, in the soil, in microorganisms and in the released VOCs. When the plants used CO2 to synthesize certain VOCs, 13C appeared in the released VOCs, allowing the researchers to distinguish newly synthesized VOCs from those that simply evaporated from storage structures in the plants or were formed in the soil.
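
In essence, the enrichment lets one estimate what fraction of an emitted VOC's carbon was recently fixed from the labelled CO2, using a simple two-end-member mixing calculation. The values below are illustrative assumptions, not measurements from the study:

    # Fraction of emitted VOC carbon newly synthesized from the labelled CO2,
    # estimated from 13C atom fractions (all values are illustrative assumptions).
    NATURAL_13C = 0.011           # natural 13C abundance (~1.1% of carbon atoms)
    label_13c_in_co2 = 0.99       # assumed 13C atom fraction of the labelled CO2
    measured_13c_in_voc = 0.45    # hypothetical 13C atom fraction in an emitted VOC

    de_novo_fraction = (measured_13c_in_voc - NATURAL_13C) / (label_13c_in_co2 - NATURAL_13C)
    print(f"~{de_novo_fraction:.0%} of the VOC carbon was recently fixed from the labelled CO2")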

The incorporation of carbon isotopes in tissues and soil helps the researchers understand where the ecosystem allocates recently fixed carbon and allows them to quantify how much carbon is sequestered from the atmosphere. Doing this in climate chambers under controlled environmental conditions provides some clear pieces of the puzzle of what will happen in arctic ecosystems in the future.

Credit: 
University of Copenhagen - Faculty of Science