Tech

Intervention shows promise for treating depression in preschool-aged children

Researchers funded by the National Institutes of Health have shown that a therapy-based treatment for disruptive behavioral disorders can be adapted and used as an effective treatment option for early childhood depression. Children as young as 3 years old can be diagnosed with clinical depression, and although preschool-aged children are sometimes prescribed antidepressants, a psychotherapeutic intervention is greatly needed. The study, funded by the National Institute of Mental Health (NIMH), part of NIH, appears online June 20 in the American Journal of Psychiatry.

The researchers adapted Parent-Child Interaction Therapy (PCIT), which has been shown to be an effective way to treat disruptive behavioral disorders in young children. In standard PCIT treatment, parents are taught techniques for successfully interacting with their children. They then practice these techniques in controlled situations while being coached by a clinician.

To target the therapy for childhood depression, the researchers adapted this standard intervention by adding a new emotional development (ED) module to the treatment. This extra material used the basic techniques of PCIT to train parents to be more effective at helping their children regulate emotions and to be better emotion coaches for their children. The training was designed to help enhance the children's emotional competence and emotion regulation abilities.

"This study builds on programmatic research that has identified factors associated with the development and course of depression among very young children and in turn, represent targets for intervening," said Joel Sherrill, Ph.D., deputy director of the NIMH Division of Services and Intervention Research. "Using a modular approach that builds upon the well-established PCIT platform may ultimately help facilitate dissemination of the ED intervention."

Children ages 3-6 who met criteria for early childhood depression and their parents were randomly assigned to PCIT-ED treatment or a waitlist group. Children in the PCIT-ED group completed standard PCIT modules for a maximum of 12 treatment sessions, followed by an emotional development module lasting eight sessions. There are currently no empirically tested treatments that are widely used to treat early childhood depression; therefore, children in the waitlist group were monitored but received no active intervention. Children and their parents in the waitlist group were offered PCIT-ED treatment after completion of the study.

Before and after treatment or the waiting period (depending on group assignment), the researchers assessed the children's psychiatric symptoms, emotional self-regulation abilities, level of impairment and functioning, and tendency to experience guilt. Parents were assessed for depression severity, coping styles, the strategies they used in response to their child's negative emotions, and stress within the parent-child relationship.

At the completion of treatment, children in the PCIT-ED group were less likely to meet criteria for depression, more likely to have achieved remission, and had lower depression severity scores than children in the waitlist group. Children in the PCIT-ED group also had improved functioning, fewer comorbid disorders, and were rated as having greater emotion regulation skills and greater "guilt reparation" (e.g., spontaneously saying "sorry" after having done something wrong and showing appropriate empathy with others) compared with children in the waitlist group.

Parents in the PCIT-ED group also benefited. They were found to have decreased symptoms of depression, lower levels of parenting stress, and reported employing more parenting techniques that focused on emotion reflection and processing than parents in the waitlist group. Parents also overwhelmingly reported positive impressions of the therapeutic program.

"The study provides very promising evidence that an early and brief psychotherapeutic intervention that focuses on the parent-child relationship and on enhancing emotion development may be a powerful and low-risk approach to the treatment of depression," said lead study author Joan Luby, M.D., of Washington University School of Medicine in St. Louis. "It will be very important to determine if gains made in this early treatment are sustained over time and whether early intervention can change the course of the disorder."

Credit: 
NIH/National Institute of Mental Health

Chameleon-inspired nanolaser changes colors

image: Novel nanolaser leverages the same color-changing mechanism that a chameleon uses to camouflage its skin.

Image: 
Egor Kamelev

As a chameleon shifts its color from turquoise to pink to orange to green, nature's design principles are at play. Complex nano-mechanics are quietly and effortlessly working to camouflage the lizard's skin to match its environment.

Inspired by nature, a Northwestern University team has developed a novel nanolaser that changes colors using the same mechanism as chameleons. The work could open the door for advances in flexible optical displays in smartphones and televisions, wearable photonic devices and ultra-sensitive sensors that measure strain.

"Chameleons can easily change their colors by controlling the spacing among the nanocrystals on their skin, which determines the color we observe," said Teri W. Odom, Charles E. and Emma H. Morrison Professor of Chemistry in Northwestern's Weinberg College of Arts and Sciences. "This coloring based on surface structure is chemically stable and robust."

The research was published online yesterday in the journal Nano Letters. Odom, who is the associate director of Northwestern's International Institute of Nanotechnology, and George C. Schatz, Charles E. and Emma H. Morrison Professor of Chemistry in Weinberg, served as the paper's co-corresponding authors.

Just as a chameleon controls the spacing of the nanocrystals on its skin, the Northwestern team's laser exploits periodic arrays of metal nanoparticles on a stretchable polymer matrix. As the matrix stretches to pull the nanoparticles farther apart or contracts to push them closer together, the wavelength of light emitted by the laser shifts, which also changes its color.
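
As a rough illustration of how spacing sets color, the toy calculation below assumes the lasing wavelength simply tracks a first-order optical resonance of the nanoparticle lattice, scaling as an effective refractive index times the particle spacing; the index, spacing, and strain values are illustrative assumptions, not parameters from the paper.

```python
# Toy estimate of how stretching shifts the emission of a nanoparticle-lattice laser.
# Assumption (a simplification, not the paper's model): the lasing wavelength tracks
# the first-order lattice resonance, lambda ~ n_eff * spacing.

N_EFF = 1.4          # assumed effective refractive index of the polymer matrix
SPACING_NM = 400.0   # assumed unstretched nanoparticle spacing, in nanometers

def emission_wavelength_nm(strain: float) -> float:
    """Estimated lasing wavelength for a given uniaxial strain (0.05 = 5% stretch)."""
    return N_EFF * SPACING_NM * (1.0 + strain)

for strain in (0.00, 0.05, 0.10, 0.15):
    print(f"strain {strain:4.0%} -> ~{emission_wavelength_nm(strain):4.0f} nm")
```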

"Hence, by stretching and releasing the elastomer substrate, we could select the emission color at will," Odom said.

The resulting laser is robust, tunable, reversible and has a high sensitivity to strain. These properties are critical for applications in responsive optical displays, on-chip photonic circuits and multiplexed optical communication.

Credit: 
Northwestern University

The seed that could bring clean water to millions

image: (left) Unshelled M. oleifera seeds, (middle) shelled seeds, (right) crushed seeds before protein extraction.

Image: 
Carnegie Mellon University College of Engineering

According to the United Nations, 2.1 billion people lack access to safely managed drinking water services, the majority of whom live in developing nations.

Carnegie Mellon University's Biomedical Engineering and Chemical Engineering Professors Bob Tilton and Todd Przybycien recently co-authored a paper with Ph.D. students Brittany Nordmark and Toni Bechtel, and alumnus John Riley, further refining a process that could soon help provide clean water to many in water-scarce regions. The process, created by Tilton's former student and co-author Stephanie Velegol, uses sand and plant materials readily available in many developing nations to create a cheap and effective water filtration medium, termed "f-sand."

"F-sand" uses proteins from the Moringa oleifera plant, a tree native to India that grows well in tropical and subtropical climates. The tree is cultivated for food and natural oils, and the seeds are already used for a type of rudimentary water purification. However, this traditional means of purification leaves behind high amounts of dissolved organic carbon (DOC) from the seeds, allowing bacteria to regrow after just 24 hours. This leaves only a short window in which the water is drinkable.

Velegol, who is now a professor of chemical engineering at Penn State University, had the idea to combine this method of water purification with sand filtration methods common in developing areas. By extracting the seed proteins and adsorbing (adhering) them to the surface of silica particles, the principal component of sand, she created f-sand. F-sand both kills microorganisms and reduces turbidity, adhering to particulate and organic matter. These undesirable contaminants and DOC can then be washed out, leaving the water clean for longer, and the f-sand ready for reuse.

While the basic process was proven and effective, there were still many questions surrounding f-sand's creation and use--questions Tilton and Przybycien resolved to answer.

Would isolating certain proteins from the M. oleifera seeds increase f-sand's effectiveness? Are the fatty acids and oils found in the seeds important to the adsorption process? What effect would water conditions have? What concentration of proteins is necessary to create an effective product?

The answers to these questions could have big implications on the future of f-sand.

Fractionation

The seed of M. oleifera contains at least eight different proteins. Separating these proteins, a process known as fractionation, would introduce another step to the process. Prior to their research, the authors theorized that isolating certain proteins might provide a more efficient finished product.

However, through the course of testing, Tilton and Przybycien found that this was not the case. Fractionating the proteins had little discernible effect on the proteins' ability to adsorb to the silica particles, meaning this step was unnecessary to the f-sand creation process.

The finding that fractionation is unnecessary is particularly advantageous to the resource-scarce scenario in which f-sand is intended to be utilized. Leaving this step out of the process helps cut costs, lower processing requirements, and simplify the overall process.

Fatty Acids

One of the major reasons M. oleifera is cultivated currently is for the fatty acids and oils found in the seeds. These are extracted and sold commercially. Tilton and Przybycien were interested to know if these fatty acids had an effect on the protein adsorption process as well.

They found that much like fractionation, removing the fatty acids had little effect on the ability of the proteins to adsorb. This finding also has beneficial implications for those wishing to implement this process in developing regions. Since the presence or absence of fatty acids in the seeds has little effect on the creation or function of f-sand, people in the region can remove and sell the commercially valuable oil, and still be able to extract the proteins from the remaining seeds for water filtration.

Concentration

Another parameter of the f-sand manufacturing process that Tilton and Przybycien tested was the concentration of seed proteins needed to create an effective product. The necessary concentration has a major impact on the amount of seeds required, which in turn has a direct effect on overall efficiency and cost effectiveness.

The key to achieving the proper concentration is ensuring that there are enough positively charged proteins to overcome the negative charge of the silica particles to which they are attached, creating a net positive charge. This positive charge is crucial to attract the negatively charged organic matter, particulates, and microbes contaminating the water.

This relates to another potential improvement to drinking water treatment investigated by Tilton, Przybycien, and Nordmark in a separate publication. In this project, they used seed proteins to coagulate contaminants in the water prior to f-sand filtration. This also relies on controlling the charge of the contaminants, which coagulate when they are neutralized. Applying too much protein can over-charge the contaminants and inhibit coagulation.
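
The charge-balance reasoning behind that dosing problem can be sketched with a deliberately crude toy model: suspended contaminants start out negative, each increment of seed protein adds positive charge, and coagulation works only while the net charge stays near neutral. Every number below is an arbitrary assumption chosen for illustration, not a value from the study; in the real system, competition among the different seed proteins broadens the usable window, as Tilton explains next.

```python
# Purely illustrative charge-balance toy for the "sweet spot" idea: contaminant
# particles start out negatively charged, each increment of seed protein adds
# positive charge, and coagulation works best when the net charge is near neutral.
# All numbers are arbitrary assumptions, not data from the study.

CONTAMINANT_CHARGE = -30.0    # assumed starting charge of suspended matter (arbitrary units)
CHARGE_PER_MG = +5.0          # assumed positive charge contributed per mg/L of protein
NEUTRAL_WINDOW = 8.0          # assumed band around zero where coagulation is effective

def coagulation_state(protein_mg_per_l: float) -> str:
    net = CONTAMINANT_CHARGE + CHARGE_PER_MG * protein_mg_per_l
    if net < -NEUTRAL_WINDOW:
        return f"net {net:+.0f}: under-dosed, particles still repel each other"
    if net > NEUTRAL_WINDOW:
        return f"net {net:+.0f}: over-charged, coagulation inhibited"
    return f"net {net:+.0f}: near neutral - inside the sweet spot"

for dose in (2, 4, 6, 8, 10):
    print(f"{dose:2d} mg/L -> {coagulation_state(dose)}")
```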

"There's kind of a sweet spot in the middle," says Tilton, "and it lies in the details of how the different proteins in these seed protein mixtures compete with each other for adsorption to the surface, which tended to broaden that sweet spot."

This broad range of concentrations means that not only can water treatment processes be created at relatively low concentrations, thereby conserving materials, but that there is little risk of accidentally causing water contamination by overshooting the concentration. In areas where exact measurements may be difficult to make, this is crucial.

Water Hardness

Water hardness refers to the amount of dissolved minerals in the water. Although labs often use deionized water, in a process meant to be applied across a range of real-world environments, researchers have to prepare for both soft and hard water conditions.

Tilton and Przybycien found that proteins were able to adsorb well to the silica particles, and to coagulate suspended contaminants, in both soft and hard water conditions. This means that the process could potentially be viable across a wide array of regions, regardless of water hardness.

Tilton and Przybycien recently published a paper on this research, "Moringa oleifera Seed Protein Adsorption to Silica: Effects of Water Hardness, Fractionation, and Fatty Acid Extraction," in the ACS journal Langmuir.

Overall, the conclusions that Tilton, Przybycien, and their fellow authors were able to reach have major benefits for those in developing countries looking for a cheap and easily accessible form of water purification. Their work puts this novel innovation one step closer to the field, helping to forge the path that may one day see f-sand deployed in communities across the developing world. They've shown that the f-sand manufacturing process displays a high degree of flexibility, as it is able to work at a range of water conditions and protein concentrations without requiring the presence of fatty acids or a need for fractionation.

"It's an area where complexity could lead to failure--the more complex it is, the more ways something could go wrong," says Tilton. "I think the bottom line is that this supports the idea that the simpler technology might be the better one."

Credit: 
College of Engineering, Carnegie Mellon University

Multiracial congregations have nearly doubled in the United States

The percentage of multiracial congregations in the United States has nearly doubled, with about one in five American congregants attending a place of worship that is racially mixed, according to a Baylor University study.

While Catholic churches remain more likely to be multiracial -- about one in four -- a growing number of Protestant churches are multiracial, the study found. The percentage of Protestant churches that are multiracial tripled, from 4 percent in 1998 to 12 percent in 2012, the most recent year for which data are available.

In addition, more African-Americans are in the pulpits and pews of U.S. multiracial churches than in the past, according to the study.

Multiracial congregations are places of worship in which less than 80 percent of participants are of the same race or ethnicity.
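
Expressed as a simple rule, the 80 percent threshold works as in the short sketch below; the congregation counts are hypothetical and serve only to show the cutoff.

```python
# Minimal illustration of the study's 80 percent threshold: a congregation is
# "multiracial" when no single racial or ethnic group makes up 80 percent or more
# of participants. The counts below are hypothetical.

def is_multiracial(counts: dict[str, int]) -> bool:
    total = sum(counts.values())
    return total > 0 and max(counts.values()) / total < 0.80

print(is_multiracial({"white": 150, "black": 30, "latino": 20}))  # 75% white -> True
print(is_multiracial({"white": 180, "black": 10, "latino": 10}))  # 90% white -> False
```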

"Congregations are looking more like their neighborhoods racially and ethnically, but they still lag behind," said lead author Kevin D. Dougherty, Ph.D., associate professor of sociology in Baylor's College of Arts & Sciences. "The average congregation was eight times less diverse racially than its neighborhood in 1998 and four times less diverse in 2012."

"More congregations seem to be growing more attentive to the changing demographics outside their doors, and as U.S. society continues to diversify by race and ethnicity, congregations' ability to adapt to those changes will grow in importance," said co-author Michael O. Emerson, Ph.D., provost of North Park University in Chicago.

For the study, Dougherty and Emerson analyzed data from the National Congregations Study, a nationally representative survey conducted in 1998, 2006-2007 and 2012, with a cumulative sample of 4,071 congregations. The study by Dougherty and Emerson -- "The Changing Complexion of American Congregations" -- is published in the Journal for the Scientific Study of Religion.

The study found that:

One-third of U.S. congregations were composed entirely of one race in 2012, down from nearly half of U.S. congregations in 1998.

Multiracial congregations constituted 12 percent of all U.S. congregations in 2012, up from 6 percent in 1998.

The percentage of Americans worshipping in multiracial congregations climbed to 18 percent in 2012, up from 13 percent in 1998.

Mainline Protestant and Evangelical Protestant churches account for a growing share of multiracial congregations, but Catholic churches continue to show higher percentages of multiracial congregations. One in four Catholic churches was multiracial in 2012.

While whites are the head ministers in more than two-thirds (70 percent) of multiracial congregations, the percentage of those led by black clergy has risen to 17 percent, up from fewer than 5 percent in 1998.

Blacks have replaced Latinos as the most likely group to worship with whites. In the typical multiracial congregation, the percentage of black members rose to nearly a quarter in 2012, up from 16 percent in 1998. Meanwhile, Latinos in multiracial congregations dropped from 22 percent in 1998 to 13 percent in 2012.

The percentage of immigrants in multiracial congregations decreased from over 5 percent in 1998 to under 3 percent in 2012.

Previous research shows that congregations have adopted varying ways to encourage racial diversity, among them integrating music genres, using more participatory worship, hosting small groups to foster interracial networks and creating programs to address racial or ethnic issues. Churches with shorter histories are more likely to have diversity, and change is harder to bring about in long-established congregations.

The new study by Dougherty and Emerson concluded that the complexion of American congregations is indeed changing -- and the authors see benefits for American society.

"During a several-year period of heightened racial tensions, the growth of multiracial congregations is a dramatic development," Emerson said. "Such congregations are places of significantly increased cross-racial friendships and cross-racial common experiences."

Credit: 
Baylor University

Chip upgrade helps miniature drones navigate

Researchers at MIT, who last year designed a tiny computer chip tailored to help honeybee-sized drones navigate, have now shrunk their chip design even further, in both size and power consumption.

The team, co-led by Vivienne Sze, associate professor in MIT's Department of Electrical Engineering and Computer Science (EECS), and Sertac Karaman, the Class of 1948 Career Development Associate Professor of Aeronautics and Astronautics, built a fully customized chip from the ground up, with a focus on reducing power consumption and size while also increasing processing speed.

The new computer chip, named "Navion," which they are presenting this week at the Symposia on VLSI Technology and Circuits, is just 20 square millimeters -- about the size of a LEGO minifigure's footprint -- and consumes just 24 milliwatts of power, or about one-thousandth the energy required to power a lightbulb.

Using this tiny amount of power, the chip is able to process camera images in real time at up to 171 frames per second, as well as inertial measurements, both of which it uses to determine where it is in space. The researchers say the chip can be integrated into "nanodrones" as small as a fingernail, to help the vehicles navigate, particularly in remote or inaccessible places where global positioning satellite data is unavailable.

The chip design can also be run on any small robot or device that needs to navigate over long stretches of time on a limited power supply.

"I can imagine applying this chip to low-energy robotics, like flapping-wing vehicles the size of your fingernail, or lighter-than-air vehicles like weather balloons, that have to go for months on one battery," says Karaman, who is a member of the Laboratory for Information and Decision Systems and the Institute for Data, Systems, and Society at MIT. "Or imagine medical devices like a little pill you swallow, that can navigate in an intelligent way on very little battery so it doesn't overheat in your body. The chips we are building can help with all of these."

Sze and Karaman's co-authors are EECS graduate student Amr Suleiman, who is the lead author; EECS graduate student Zhengdong Zhang; and Luca Carlone, who was a research scientist during the project and is now an assistant professor in MIT's Department of Aeronautics and Astronautics.

A flexible chip

In the past few years, multiple research groups have engineered miniature drones small enough to fit in the palm of your hand. Scientists envision that such tiny vehicles can fly around and snap pictures of your surroundings, like mosquito-sized photographers or surveyors, before landing back in your palm, where they can then be easily stored away.

But a palm-sized drone can only carry so much battery power, most of which is used to drive its motors, leaving very little energy for other essential operations, such as navigation, and, in particular, state estimation, or a robot's ability to determine where it is in space.
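
To make "state estimation" concrete, the sketch below shows its simplest form, dead reckoning: integrating inertial measurements over time into a position estimate. It is only a conceptual illustration with made-up sensor samples; Navion's actual visual-inertial pipeline also fuses camera images and is considerably more sophisticated.

```python
# A stripped-down illustration of state estimation by dead reckoning:
# integrating inertial measurements to track position over time.
# Conceptual sketch only; not the Navion chip's algorithm.

import numpy as np

def dead_reckon(accels: np.ndarray, dt: float) -> np.ndarray:
    """Integrate a sequence of 3-axis accelerations into positions."""
    velocity = np.zeros(3)
    position = np.zeros(3)
    path = []
    for a in accels:
        velocity += a * dt          # integrate acceleration -> velocity
        position += velocity * dt   # integrate velocity -> position
        path.append(position.copy())
    return np.array(path)

# Hypothetical IMU samples: constant forward acceleration for one second at 100 Hz.
samples = np.tile([0.5, 0.0, 0.0], (100, 1))
print(dead_reckon(samples, dt=0.01)[-1])  # final position estimate in meters
```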

"In traditional robotics, we take existing off-the-shelf computers and implement [state estimation] algorithms on them, because we don't usually have to worry about power consumption," Karaman says. "But in every project that requires us to miniaturize low-power applications, we have to now think about the challenges of programming in a very different way."

In their previous work, Sze and Karaman began to address such issues by combining algorithms and hardware in a single chip. Their initial design was implemented on a field-programmable gate array, or FPGA, a commercial hardware platform that can be configured to a given application. The chip was able to perform state estimation using 2 watts of power, compared to larger, standard drones that typically require 10 to 30 watts to perform the same tasks. Still, the chip's power consumption was greater than the total amount of power that miniature drones can typically carry, which researchers estimate to be about 100 milliwatts.

To shrink the chip further, in both size and power consumption, the team decided to build a chip from the ground up rather than reconfigure an existing design. "This gave us a lot more flexibility in the design of the chip," Sze says.

Running in the world

To reduce the chip's power consumption, the group came up with a design to minimize the amount of data -- in the form of camera images and inertial measurements -- that is stored on the chip at any given time. The design also optimizes the way this data flows across the chip.

"Any of the images we would've temporarily stored on the chip, we actually compressed so it required less memory," says Sze, who is a member of the Research Laboratory of Electronics at MIT. The team also cut down on extraneous operations, such as the computation of zeros, which results in a zero. The researchers found a way to skip those computational steps involving any zeros in the data. "This allowed us to avoid having to process and store all those zeros, so we can cut out a lot of unnecessary storage and compute cycles, which reduces the chip size and power, and increases the processing speed of the chip," Sze says.

Through their design, the team was able to reduce the chip's memory from its previous 2 megabytes, to about 0.8 megabytes. The team tested the chip on previously collected datasets generated by drones flying through multiple environments, such as office and warehouse-type spaces.

"While we customized the chip for low power and high speed processing, we also made it sufficiently flexible so that it can adapt to these different environments for additional energy savings," Sze says. "The key is finding the balance between flexibility and efficiency." The chip can also be reconfigured to support different cameras and inertial measurement unit (IMU) sensors.

From these tests, the researchers found they were able to bring down the chip's power consumption from 2 watts to 24 milliwatts, and that this was enough to power the chip to process images at 171 frames per second -- a rate that was even faster than what the datasets projected.

The team plans to demonstrate its design by implementing its chip on a miniature race car. While a screen displays an onboard camera's live video, the researchers also hope to show the chip determining where it is in space, in real-time, as well as the amount of power that it uses to perform this task. Eventually, the team plans to test the chip on an actual drone, and ultimately on a miniature drone.

Credit: 
Massachusetts Institute of Technology

Machine learning may be a game-changer for climate prediction

New York, NY--June 19, 2018--A major challenge in current climate prediction models is how to accurately represent clouds and their atmospheric heating and moistening. This challenge is a major source of the wide spread in climate predictions. Yet accurate predictions of global warming in response to increased greenhouse gas concentrations are essential for policy-makers (e.g., for the Paris climate agreement).

In a paper recently published online in Geophysical Research Letters (May 23), researchers led by Pierre Gentine, associate professor of earth and environmental engineering at Columbia Engineering, demonstrate that machine learning techniques can be used to tackle this issue and better represent clouds in coarse resolution (~100km) climate models, with the potential to narrow the range of prediction.

"This could be a real game-changer for climate prediction," says Gentine, lead author of the paper, and a member of the Earth Institute and the Data Science Institute. "We have large uncertainties in our prediction of the response of the Earth's climate to rising greenhouse gas concentrations. The primary reason is the representation of clouds and how they respond to a change in those gases. Our study shows that machine-learning techniques help us better represent clouds and thus better predict global and regional climate's response to rising greenhouse gas concentrations."

The researchers used an idealized setup (an aquaplanet, or a planet with no continents) as a proof of concept for their novel approach to convective parameterization based on machine learning. They trained a deep neural network to learn from a simulation that explicitly represents clouds. The machine-learning representation of clouds, which they named the Cloud Brain (CBRAIN), could skillfully predict many of the cloud heating, moistening, and radiative features that are essential to climate simulation.
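
In outline, the approach trains a network to map a coarse-grid atmospheric column state to the convective heating and moistening tendencies that a cloud-resolving simulation would produce, and the trained network then stands in for the hand-tuned convection scheme. The sketch below captures that idea only; the layer sizes, input/output layout, and random stand-in training data are assumptions for illustration, not the published CBRAIN configuration.

```python
# Minimal sketch of a machine-learned convective parameterization: a neural network
# learns to map a coarse-grid column state to heating/moistening tendencies.
# Sizes and synthetic data are illustrative assumptions, not the CBRAIN setup.

import torch
from torch import nn

N_LEVELS = 30                      # assumed number of vertical model levels
N_IN = 2 * N_LEVELS + 2            # e.g. temperature + humidity profiles + 2 surface fields
N_OUT = 2 * N_LEVELS               # heating and moistening tendencies per level

emulator = nn.Sequential(
    nn.Linear(N_IN, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N_OUT),
)

# Stand-in for training pairs that would be extracted from a cloud-resolving simulation.
x = torch.randn(1024, N_IN)
y = torch.randn(1024, N_OUT)

optimizer = torch.optim.Adam(emulator.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(emulator(x), y)
    loss.backward()
    optimizer.step()

# Inside a climate model, emulator(column_state) would replace the hand-tuned
# convection scheme for that grid column.
```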

Gentine notes, "Our approach may open up a new possibility for a future of model representation in climate models, which are data driven and are built 'top-down,' that is, by learning the salient features of the processes we are trying to represent."

The researchers also note that, because global temperature sensitivity to CO2 is strongly linked to cloud representation, CBRAIN may also improve estimates of future temperature. They have tested this in fully coupled climate models and have demonstrated very promising results, showing that this could be used to predict greenhouse gas response.

Credit: 
Columbia University School of Engineering and Applied Science

Twenty-five per cent of seafood sold in Metro Vancouver is mislabelled

image: Xiaonan Lu was principal investigator on a University of British Columbia study that found 25 per cent of seafood samples sold in Metro Vancouver to be mislabelled.

Image: 
UBC Faculty of Land and Food Systems

A quarter of the seafood tested from Metro Vancouver grocery stores, restaurants and sushi bars is not what you think it is.

A new UBC study used DNA barcoding to determine that 70 of 281 seafood samples collected in Metro Vancouver between September 2017 and February 2018 were mislabelled.

Researchers from UBC's Lu Food Safety & Health Engineering Lab conducted the study in partnership with independent charity Oceana Canada and the Hanner Lab at the University of Guelph.

"We aim to comprehensively understand the fraudulent labelling of fish products sold in Metro Vancouver, as the first step in studying the complicated seafood supply chain that serves the west coast of Canada," said Xiaonan Lu, who leads the Lu lab. "Our study demonstrates the importance of improving both the regulation of seafood labelling, and the transparency of the fish supply chain."

The supply chain for seafood is complex and opaque. A fish can be caught in Canada, gutted in China, breaded in the U.S., and ultimately sold back to Canada as an American product. Misidentification can happen anywhere along the way. When it's intentional, it's food fraud -- a $52-billion worldwide problem defined as the misrepresentation of food for economic gain.

"Seafood fraud cheats Canadian consumers and hurts local, honest fishers as well as chefs and seafood companies looking to buy sustainable seafood. It causes health concerns and masks global human rights abuses by creating a market for illegally caught fish," said Julia Levin, seafood fraud campaigner with Oceana Canada. "The key to fighting seafood fraud is boat-to-plate traceability. This means tracking the seafood product through the supply chain and requiring that key information travels with the product."

The UBC team and Oceana Canada gathered samples from sellers in Vancouver, Richmond, Coquitlam, Burnaby, North Vancouver, West Vancouver, Surrey and Langley. The Lu lab analyzed UBC's samples, while Oceana Canada's went to TRU-ID, a Guelph-based company that provides DNA certification of foods and natural health products. DNA barcoding involves comparing genetic information from test specimens with reference sequences that can help identify species. Data from the two sources was later collated.
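
Conceptually, the matching step works as in the simplified sketch below: a query sequence is compared against reference sequences and assigned to the closest match by percent identity. Real barcoding pipelines rely on curated reference libraries and alignment tools such as BLAST; the short sequences here are invented for illustration.

```python
# Simplified sketch of the matching step in DNA barcoding: compare a query
# sequence against references and report the closest match by percent identity.
# The sequences below are made up; real pipelines use curated libraries and BLAST.

REFERENCES = {
    "Lutjanus campechanus (red snapper)": "ACCTGGGTTACCTAGGCAACGC",
    "Oreochromis niloticus (tilapia)":    "ACCTGAGTTACGTAGGCTACGC",
}

def percent_identity(a: str, b: str) -> float:
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return 100.0 * matches / max(len(a), len(b))

def identify(query: str) -> tuple[str, float]:
    return max(((name, percent_identity(query, seq)) for name, seq in REFERENCES.items()),
               key=lambda item: item[1])

query = "ACCTGAGTTACGTAGGCTACGC"   # hypothetical sequence from a fillet sold as "snapper"
name, score = identify(query)
print(f"best match: {name} ({score:.0f}% identity)")
```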

Restaurants had the highest rate of mislabelling, at 29 per cent, followed by grocery stores (24 per cent) and sushi bars (22 per cent). The most commonly mislabelled fish was snapper, with 31 of 34 samples mislabelled.

The researchers found evidence of both intentional and unintentional mislabelling. For example, many fish sold as snapper or red snapper were actually far less valued species such as tilapia. Sutchi catfish took the place of halibut, snapper, sole and cod. Economic motivations were less likely in other cases, such as the substitution of sockeye for pink salmon.

The situation doesn't appear to be improving. An Oceana study conducted in the U.S. from 2010 to 2012 found the mislabelling rate to be 33 per cent. A study in Metro Vancouver 10 years ago had similar findings to the new UBC study, with a much smaller sample size. Oceana Canada found nearly half of samples tested last fall in Ottawa to be mislabelled. They will release a national seafood fraud report this fall with findings from testing done in Halifax, Toronto, Vancouver and Victoria.

"Canada is one of the top seafood-producing countries in the world and our industry complies with much more stringent labelling when exporting products to the European Union, but Canadian consumers don't benefit from this same level of transparency at home," said Robert Hanner, chief technology officer at TRU-ID. "This situation compromises consumer choice and even facilitates laundering illegally harvested seafood into the domestic market, at the expense of legitimate suppliers. This situation must change."

A 2018 report by the Food and Agriculture Organization of the United Nations called for a harmonized DNA-based system that provides universal access to a standard database using scientific names.

Authors of the UBC study support several measures to help consumers understand what they're buying:

harmonize common names of fish between major trading countries

require scientific names on labels

provide consumers with information about where fish was caught or farmed, its processing history, and the fishing/farming methods used

Credit: 
University of British Columbia

Ground-breaking discoveries could create superior alloys with many applications

image: This is a sample holder inside a focused ion beam (FIB) milling microscope used to create thin foils for transmission electron microscopy (TEM) studies.

Image: 
Johan Bodell/Chalmers University of Technology

Many current and future technologies require alloys that can withstand high temperatures without corroding. Now, researchers at Chalmers University of Technology, Sweden, have hailed a major breakthrough in understanding how alloys behave at high temperatures, pointing the way to significant improvements in many technologies. The results are published in the highly ranked journal Nature Materials.

Developing alloys that can withstand high temperatures without corroding is a key challenge for many fields, such as renewable and sustainable energy technologies like concentrated solar power and solid oxide fuel cells, as well as aviation, materials processing and petrochemistry.

At high temperatures, alloys can react violently with their environment, quickly causing the materials to fail by corrosion. To protect against this, all high temperature alloys are designed to form a protective oxide scale, usually consisting of aluminium oxide or chromium oxide. This oxide scale plays a decisive role in preventing the metals from corroding. Therefore, research on high temperature corrosion is very focused on these oxide scales - how they are formed, how they perform at high heat, and how they sometimes fail.

The article in Nature Materials addresses two long-standing issues in the field. The first concerns the very small additions of so-called 'reactive elements' - often yttrium and zirconium - found in all high-temperature alloys. The second concerns the role of water vapour.

"Adding reactive elements to alloys results in a huge improvement in performance - but no one has been able to provide robust experimental proof why," says Nooshin Mortazavi, materials researcher at Chalmers' Department of Physics, and first author of the study. "Likewise, the role of water, which is always present in high-temperature environments, in the form of steam, has been little understood. Our paper will help solve these enigmas".

In this paper, the Chalmers researchers show how these two factors are linked. They demonstrate how the reactive elements in the alloy promote the growth of an aluminium oxide scale. The presence of these reactive element particles causes the oxide scale to grow inward rather than outward, thereby facilitating the transport of water from the environment toward the alloy substrate. Reactive elements and water combine to create a fast-growing, nanocrystalline oxide scale.

"This paper challenges several accepted 'truths' in the science of high temperature corrosion and opens up exciting new avenues of research and alloy development," says Lars Gunnar Johansson, Professor of Inorganic Chemistry at Chalmers, Director of the Competence Centre for High Temperature Corrosion (HTC) and co-author of the paper.

"Everyone in the industry has been waiting for this discovery. This is a paradigm shift in the field of high-temperature oxidation," says Nooshin Mortazavi. "We are now establishing new principles for understanding the degradation mechanisms in this class of materials at very high temperatures."

Further to their discoveries, the Chalmers researchers suggest a practical method for creating more resistant alloys. They demonstrate that there is a critical size for the reactive element particles. Above a certain size, reactive element particles cause cracks in the oxide scale that provide an easy route for corrosive gases to react with the alloy substrate, causing rapid corrosion. This means that a better, more protective oxide scale can be achieved by controlling the size distribution of the reactive element particles in the alloy.

This ground-breaking research from Chalmers University of Technology points the way to stronger, safer, more resistant alloys in the future.

More about: Potential consequences of the research breakthrough

High temperature alloys are used in a variety of areas, and are essential to many technologies which underpin our civilisation. They are crucial for both new and traditional renewable energy technologies, such as "green" electricity from biomass, biomass gasification, bio-energy with carbon capture and storage (BECCS), concentrated solar energy, and solid oxide fuel cells. They are also crucial in many other important technology areas such as jet engines, petrochemistry and materials processing.

All these industries and technologies are entirely dependent on materials that can withstand high temperatures - 600°C and beyond - without failing due to corrosion. There is a constant demand for materials with improved heat resistance, both for developing new high temperature technologies and for enhancing the process efficiency of existing ones.

For example, if the turbine blades in an aircraft's jet engines could withstand higher temperatures, the engine could operate more efficiently, resulting in fuel-savings for the aviation industry. Or, if you can produce steam pipes with better high-temperature capability, biomass-fired power plants could generate more power per kilogram of fuel.

Corrosion is one of the key obstacles to material development within these areas. The Chalmers researchers' article provides new tools for researchers and industry to develop alloys that withstand higher temperatures without quickly corroding.

Credit: 
Chalmers University of Technology

Keyhole may trump robotic surgery for mitral valve repair

Keyhole surgery for heart valve repair may trump robotic surgery, because it is associated with lower rates of subsequent heart flutter and blood transfusions, and a shorter hospital stay, reveals research looking at the pros and cons of different surgical approaches, published online in the journal Heart.

But as keyhole, robotic, and conventional surgery are all very safe and effective, the choice of which to perform should be governed by patient preference and the experience of the operating surgeon, suggest the researchers.

Despite the steep learning curves and additional cost involved, mitral valve repair is the most common heart operation performed using robot assisted surgery. But to date few studies have compared it with keyhole and conventional surgical techniques.

The researchers therefore drew on a comparison of 2300 patients who needed planned mitral valve repair surgery between 2011 and 2016, and who were allocated to either robotic surgery (372), keyhole surgery (576), or conventional (1352) sternotomy--where the sternum is cut open and divided.

Rates of successful repair were high in those undergoing robotic and keyhole surgery: 91 per cent. But they were significantly lower in those who had conventional surgery: 76 per cent. This was despite similar rates of degenerative disease across all the cases.

The robotic procedure took the most time to perform--224 minutes compared with 180 minutes for keyhole and 168 minutes for conventional surgery.

The robotic approach had similar outcomes to the conventional approach except that there were half the number of onward discharges to further care--7% vs 15%--and one day less spent in hospital.

But compared with keyhole surgery, robotic surgery required more blood transfusions (15% vs 5%), was associated with higher rates of heart flutter (atrial fibrillation) of 26% vs 18%, and one day longer in hospital, on average.

Because the cases were all reviewed after surgery had taken place, the findings can't establish cause, caution the authors, and the patients may not be representative of all those who require mitral valve repair.

There are pros and cons to each of the techniques, prompting the authors to conclude: "From a patient perspective, all three approaches provide excellent outcomes, thus patient preference and surgeon experience should dictate the approach for mitral valve surgery."

Credit: 
BMJ Group

Carbon nanotube optics provide a path to optical-based quantum cryptography and quantum computing

image: Depiction of a carbon nanotube defect site generated by functionalization of a nanotube with a simple organic molecule.  Altering the electronic structure at the defect enables room-temperature single photon emission at telecom wavelengths.

Image: 
LANL

LOS ALAMOS, N.M., June 18, 2018--Researchers at Los Alamos and partners in France and Germany are exploring the enhanced potential of carbon nanotubes as single-photon emitters for quantum information processing. Their analysis of progress in the field is published in this week's edition of the journal Nature Materials.

"We are particularly interested in advances in nanotube integration into photonic cavities for manipulating and optimizing light-emission properties," said Stephen Doorn, one of the authors, and a scientist with the Los Alamos National Laboratory site of the Center for Integrated Nanotechnologies (CINT). "In addition, nanotubes integrated into electroluminescent devices can provide greater control over timing of light emission and they can be feasibly integrated into photonic structures. We are highlighting the development and photophysical probing of carbon nanotube defect states as routes to room-temperature single photon emitters at telecom wavelengths."

The team's overview was produced in collaboration with colleagues in Paris (Christophe Voisin) who  are advancing the integration of nanotubes into photonic cavities for modifying their emission rates, and at Karlsruhe (Ralph Krupke) where they are integrating nanotube-based electroluminescent devices with photonic waveguide structures. The Los Alamos focus is the analysis of nanotube defects for pushing quantum emission to room temperature and telecom wavelengths, he said.

As the paper notes, "With the advent of high-speed information networks, light has become the main worldwide information carrier. . . . Single-photon sources are a key building block for a variety of technologies, in secure quantum communications, metrology or quantum computing schemes."

The use of single-walled carbon nanotubes in this area has been a focus for the Los Alamos CINT team, where they developed the ability to chemically modify the nanotube structure to create deliberate defects, localizing excitons and controlling their release. Next steps, Doorn notes, involve integration of the nanotubes into photonic resonators, to provide increased source brightness and to generate indistinguishable photons. "We need to create single photons that are indistinguishable from one another, and that relies on our ability to functionalize tubes that are well-suited for device integration and to minimize environmental interactions with the defect sites," he said.

"In addition to defining the state of the art, we wanted to highlight where the challenges are for future progress and lay out some of what may be the most promising future directions for moving forward in this area. Ultimately, we hope to draw more researchers into this field," Doorn said.

Credit: 
DOE/Los Alamos National Laboratory

Unique immune-focused AI model creates largest library of inter-cellular communications

image: This image shows immune-focused disease module comparisons.

Image: 
Nature Biotechnology

Tel Aviv -- June 18, 2018 -- New data published in Nature Biotechnology represent the largest ever analysis of immune cell signaling research, mapping more than 3,000 previously unlisted cellular interactions and yielding the first ever immune-centric modular classification of diseases. These data serve to rewrite the reference book on immune-focused inter-cellular communications and disease relationships.

The immune system is highly complex and dynamic, and with a new immunology paper published every 30 minutes, there is no practical way for a human to grapple with the sheer size and diversity of the field. As this body of data grows, machine learning methods will be the only practical way of fully leveraging all the efforts being made to advance immunology and science in general.

Standardizing and contextualizing the full body of cell-cytokine relationships is vital to broadening our understanding of the immune system. Based on this curated knowledge base, 355 hypotheses for entirely novel cell-cytokine interactions were generated through the application of validated prediction technologies.

These alone represent discoveries born out of a better contextual understanding of existing immune system knowledge. This potential becomes even more powerful when such knowledge can be integrated with other rich data sources and AI technologies to generate significant new clues in the fight against disease.

INFOGRAPHIC: Cell Talk - re-writing the book on immune-focused inter-cellular communications - available here: https://bit.ly/2lj4OBT

"Given the dominant role the immune system plays in disease, an immune-centric view takes us towards a better understanding of disease mechanisms." Said Professor Shai Shen-Orr, PhD., Chief Scientist at CytoReason and Director of Systems Immunology at the Technion. "These data demonstrate that valuable, validated, predictions are possible just by mining and learning from existing papers. This ability grows exponentially when you integrate it with other prediction technologies and additional data sets."

"This important piece of work changes the paradigm in what can be predicted when you interfere with a particular receptor, molecule or cell - specific to a disease or tissue," said David Harel, CEO of CytoReason. "This work, combined with our Cell-Centred Model, doesn't just describe what happens between the cells etc., but also defines who initiates and who acts on it - this is the key to the uniquely 3-dimensional view of the immune system that CytoReason builds."

Credit: 
CytoReason

Valuing gluten-free foods relates to health behaviors in young adults

Philadelphia, June 18, 2018 - In a new study featured in the Journal of the Academy of Nutrition and Dietetics, researchers found that, among young adults, valuing gluten-free foods could be indicative of an overall interest in health or nutrition. These young adults were more likely to engage in healthier behaviors, including better dietary intake, and also valued food production practices (e.g., organic, non-GMO, locally sourced). Of concern, they were also more likely to engage in unhealthy weight control behaviors and to show over-concern about weight.

Gluten-free food offerings have become more ubiquitous in the past decade, with proponents claiming they can help with everything from weight loss, to treating autoimmune disease, to improving your skin. Despite all the attention, little is known about the effect these beliefs have on the dietary habits of the general public.

Researchers from the University of Minnesota wanted to explore the sociodemographic and behavioral characteristics of young adults who value gluten-free as an important food attribute and investigate how this is associated with their dietary intake. The study looked at a sample of 1,819 young adults, 25 to 36 years old, from the Project EAT longitudinal cohort study. The researchers measured whether participants valued gluten-free food, along with their weight goals, weight control behaviors, food production values, eating behaviors, physical activity, and dietary intake.

Investigators found that approximately 13 percent of participants valued gluten-free food. These individuals were four to seven times more likely to value food production practices such as organic, locally grown, non-GMO, and not processed. Using the Nutrition Facts label and having a weight goal were also associated with valuing gluten-free foods.

Interestingly, valuing gluten-free food was also linked to both healthy eating behaviors like eating breakfast daily and consuming more fruits and vegetables, and unhealthy weight control behaviors such as smoking, using diet pills or purging. These data show that while eating gluten-free can be associated with an overall interest in maintaining a healthy lifestyle, it might also indicate a harmful preoccupation with weight loss and/or behaviors that are perceived to promote weight loss. Researchers found that valuing gluten-free food was three times higher for young adults engaging in unhealthy weight control behaviors.

"I have concerns about the increasing number of people who perceive that eating a gluten-free diet is a healthier way to eat. Of particular concern is the higher risk for those engaging in unhealthy weight control practices for perceiving a gluten-free diet as important, given that eating gluten-free, may be viewed as a 'socially acceptable way' to restrict intake that may not be beneficial for overall health," noted lead investigator Dianne Neumark-Sztainer, PhD, MPH, RD, professor and head, Division of Epidemiology and Community Health, University of Minnesota, Minneapolis, MN, USA. "If there is a need for eating gluten-free, then it is important to avoid foods with gluten. Otherwise, a dietary pattern that includes a variety of foods, with a large emphasis on fruits, vegetables, and whole grains, is recommended for optimal health."

Gluten-free food offerings continue to gain a foothold in the marketplace. In 2015, gluten-free alternatives to traditional foods accounted for almost $1.6 billion in sales, with most of the growth driven by consumers for whom being gluten-free is not medically necessary (as it is, for example, for people with Celiac disease). Other research shows that up to one third of consumers believe that gluten-free foods are healthier than their gluten-containing counterparts. This is part of the "health halo" effect: the belief by consumers that because a food lacks a certain ingredient or has a specific label, that food is automatically "healthy."

"Products labeled as 'low sodium,' 'natural,' and 'free from' certain food components or characteristics may be interpreted by consumers as being healthier overall," explained lead author Mary J. Christoph, PhD, MPH, postdoctoral fellow, Department of Pediatrics, University of Minnesota, Minneapolis, MN, USA. "The health halo effect can have unintended consequences on eating habits, such as people overconsuming because they believe they have chosen a healthier product."

Investigators did find that individuals who valued gluten-free foods were more likely to eat a higher quality diet. Although dietary intake did not meet most guidelines, these participants were more in sync with the Dietary Guidelines for Americans, including consuming more fruits, vegetables, and fiber, less sodium, and a smaller proportion of calories from saturated fat. The data did show that there was no difference in whole grain intake between those who valued gluten-free foods and those who did not.

"This is one of the first population-based studies to describe sociodemographic and behavioral characteristics of young adults who value gluten-free food and to compare dietary intake for those who did and did not value gluten-free food," concluded Dr. Christoph. "Nutrition professionals counseling gluten-free clientele should ask about the reasons underlying valuing and/or eating gluten-free food along with other behaviors, particularly weight control, to promote overall nutrition and health."

Credit: 
Elsevier

360 degrees, 180 seconds: Technique speeds analysis of crop traits

video: This video demonstrates a new automated, 360-degree method for scanning plants. By continuously firing pulsed laser light at the plant as it rotates, then measuring how long it takes the light to return, the technique can gather millions of 3-D coordinates in a matter of minutes. A sophisticated algorithm then clusters and digitally molds those coordinates into distinct components of the plant while measuring traits such as leaf area and leaf angle.

Image: 
University of Nebraska-Lincoln

A potted nine-leaf corn plant sits on a Frisbee-sized plate. The tandem begins rotating like the centerpiece atop a giant music box, three degrees per second, and after two minutes the plant has pirouetted to its original position.

Another minute passes, and on a nearby screen appears a digital 3-D image in the palette of Dr. Seuss: magenta and teal and yellow, each leaf rendered in a different hue but nearly identical to its actual counterpart in shape, size and angle.

That rendering and its associated data come courtesy of LiDAR, a technology that fires pulsed laser light at a surface and measures the time it takes for those pulses to reflect back - the greater the delay, the greater the distance. By scanning a plant throughout its rotation, this 360-degree LiDAR technique can collect millions of 3-D coordinates that a sophisticated algorithm then clusters and digitally molds into the components of the plant: leaves, stalks, ears.
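
The geometry behind those coordinates is straightforward: each pulse's round-trip time gives a range, and the turntable angle at the moment of the return rotates that measurement into the plant's own frame. The sketch below uses illustrative numbers and a simplified beam geometry, not the team's actual processing pipeline.

```python
# Sketch of how a rotating-plant LiDAR scan becomes 3-D coordinates: the pulse's
# round-trip time gives a range, and the turntable angle at that moment places
# the point in the plant's fixed frame. Values and geometry are illustrative.

import math

C = 299_792_458.0  # speed of light, m/s

def range_from_time_of_flight(round_trip_s: float) -> float:
    """Distance to the surface: the pulse travels out and back."""
    return C * round_trip_s / 2.0

def to_plant_frame(rng: float, beam_height: float, turntable_deg: float):
    """Undo the turntable rotation so the point lands in the plant's own frame.

    Assumes the beam points along the scanner's x-axis at the given height.
    """
    theta = math.radians(-turntable_deg)   # rotate back by the plant's current angle
    return (rng * math.cos(theta), rng * math.sin(theta), beam_height)

# One hypothetical return: ~3.34 ns round trip (~0.5 m away), 0.4 m up the plant,
# recorded when the turntable had swept 90 degrees.
r = range_from_time_of_flight(3.34e-9)
print(to_plant_frame(r, beam_height=0.4, turntable_deg=90.0))
```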

The University of Nebraska-Lincoln's Yufeng Ge, Suresh Thapa and their colleagues have devised the approach as a way to automatically and efficiently gather data about a plant's phenotype: the physical traits that emerge from its genetic code. The faster and more accurately phenotypic data can be collected, the more easily researchers can compare crops that have been bred or genetically engineered for specific traits - ideally those that help produce more food.

Accelerating that effort is especially important, the researchers said, to meet the food demands of a global population expected to grow from about 7.5 billion people today to nearly 10 billion in 2050.

"We can already do DNA sequencing and genomic research very rapidly," said Ge, assistant professor of biological systems engineering. "To use that information more effectively, you have to pair it with phenotyping data. That will allow you to go back and investigate the genetic information more closely. But that is now (reaching) a bottleneck, because we can't do that as fast as we want at a low cost."

At three minutes per plant, the team's set-up operates substantially faster than most other phenotyping techniques, Ge said. But speed matters little without accuracy, so the team also used the system to estimate four traits of corn and sorghum plants. The first two traits - the surface area of individual leaves and of all leaves on a plant - help determine how much energy-producing photosynthesis the plant can perform. The other two - the angle at which leaves protrude from a stalk and how much those angles vary within a plant - influence both photosynthesis and how densely a crop can be planted in a field.
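
One plausible way to turn a clustered leaf's 3-D points into those two kinds of traits, not necessarily the algorithm the researchers used, is to fit a plane to the points, take its tilt as the leaf angle, and take the area of the points projected onto that plane as a rough leaf-area estimate. The sketch below assumes NumPy and SciPy and uses a synthetic leaf patch.

```python
# Illustrative trait estimation from a clustered leaf's 3-D points:
# plane fit via SVD for leaf angle, convex-hull area of the projected
# points for leaf area. Not necessarily the authors' algorithm.

import numpy as np
from scipy.spatial import ConvexHull

def leaf_angle_deg(points: np.ndarray) -> float:
    """Tilt of the leaf's best-fit plane away from horizontal, in degrees."""
    centered = points - points.mean(axis=0)
    normal = np.linalg.svd(centered)[2][-1]     # smallest-variance direction = plane normal
    return float(np.degrees(np.arccos(np.clip(abs(normal[2]), 0.0, 1.0))))

def leaf_area(points: np.ndarray) -> float:
    """Rough area estimate: project points onto the best-fit plane, take the hull area."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    in_plane = centered @ vt[:2].T              # coordinates along the two in-plane axes
    return float(ConvexHull(in_plane).volume)   # for 2-D input, .volume is the enclosed area

# Hypothetical leaf: a tilted, slightly noisy patch of points (units in meters).
rng = np.random.default_rng(0)
u, v = rng.uniform(0, 0.2, 200), rng.uniform(0, 0.05, 200)
leaf = np.column_stack([u, v, 0.6 * u]) + rng.normal(0, 0.002, (200, 3))
print(f"angle ~{leaf_angle_deg(leaf):.0f} deg, area ~{leaf_area(leaf)*1e4:.0f} cm^2")
```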

Comparing the system's estimates with careful measurements of the corn and sorghum plants revealed promising results: 91 percent agreement on the surface area of individual leaves and 95 percent on total leaf area. The accuracy of angular estimates was generally lower but still ranged from 72 percent to 90 percent, depending on the variable and type of plant.

CAMERA SHY

To date, the most common form of 3-D phenotyping has relied on stereo-vision: two cameras that simultaneously capture images of a plant and merge their perspectives into an approximation of 3-D by identifying the same points from both images.

Though imaging has revolutionized phenotyping in many ways, it does have shortcomings. The most significant, Ge said, is an inevitable loss of spatial information during the translation from 3-D to 2-D, especially when one part of a plant blocks a camera's view of another part.

"It has been particularly challenging for traits like leaf area and leaf angle, because the image does not preserve those traits very well," Ge said.

The 360-degree LiDAR approach contends with fewer of those issues, the researchers said, and demands fewer computational resources when constructing a 3-D image from its data.

"LiDAR is advantageous in terms of the throughput and speed and in terms of accuracy and resolution," said Thapa, doctoral student in biological systems engineering. "And it's becoming more economical (than before)."

Going forward, the team wants to introduce lasers of different colors to its LiDAR set-up. The way a plant reflects those additional lasers will help indicate how it takes up water and nitrogen - the essentials of plant growth - and produces the chlorophyll necessary for photosynthesis.

"If we can tackle those three (variables) on the chemical side and these other four (variables) on the morphological side, and then combine them, we'll have seven properties that we can measure simultaneously," Ge said. "Then I will be really happy."

Credit: 
University of Nebraska-Lincoln

Large outdoor study shows biodiversity improves stability of algal biofuel systems

image: These are samples collected from large tanks containing mixes of various freshwater algal species. The green samples are healthy, while the yellow samples were contaminated by a fungal disease. The biofuels experiment was conducted in the summer of 2016 at U-M's E.S. George Reserve near Pinckney, Mich.

Image: 
Daryl Marshke/Michigan Photography

ANN ARBOR--A diverse mix of species improves the stability and fuel-oil yield of algal biofuel systems, as well as their resistance to invasion by outsiders, according to the findings of a federally funded outdoor study by University of Michigan researchers.

U-M scientists grew various combinations of freshwater algal species in 80 artificial ponds at U-M's E.S. George Reserve near Pinckney in the first large-scale, controlled experiment to test the widely held idea that biodiversity can improve the performance of algal biofuel systems in the field.

Overall, the researchers found that diverse mixes of algal species, known as polycultures, performed more key functions at higher levels than any single species--they were better at multitasking. But surprisingly, the researchers also found that polycultures did not produce more algal mass, known as biomass, than the most productive single species, or monoculture.

"The results are key for the design of sustainable biofuel systems because they show that while a monoculture may be the optimal choice for maximizing short-term algae production, polycultures offer a more stable crop over longer periods of time," said study lead author Casey Godwin, a postdoctoral research fellow at U-M's School for Environment and Sustainability.

The team's findings are scheduled for publication June 18 in the journal Global Change Biology-Bioenergy.

Algae-derived biocrude oil is being studied as a potential renewable-energy alternative to fossil fuels. Because they grow quickly and can be converted to bio-oil, algae have the potential to generate more fuel from less surface area than crops like corn. But the technical challenges involved in growing vast amounts of these microscopic aquatic plants in large outdoor culture ponds have slowed progress toward commercial-scale cultivation.

Outside--far from the controlled conditions of the laboratory--an algal biofuel cultivation system must maintain steady, stable growth of fuel-ready algae in the face of fluctuating weather conditions, the threat of population crashes caused by diseases and pests, and invasion by nuisance species of algae.

Decades of ecological research have demonstrated that plant and animal communities containing a rich mix of species are, on average, more productive than less-diverse communities, more stable in the face of environmental fluctuations, and more resistant to pests and diseases.

But the idea that algal polycultures can outperform monocultures had never been rigorously tested under large-scale field conditions. With funding from the National Science Foundation and the U-M Energy Institute, U-M ecologist Bradley Cardinale and his colleagues set out to test this hypothesis, using a two-part study.

The first phase involved growing various combinations of six North American lake algal species in 180 aquarium-like tanks in the basement of the Dana Building on U-M's Ann Arbor campus. All six species are commonly used in biofuel systems.

The second phase involved field-testing the four most promising algal species and species mixtures by growing them outdoors inside 290-gallon cattle tanks at the 1,300-acre U-M reserve. That work was done in summer 2016 and led to the upcoming Global Change Biology-Bioenergy paper.

In both phases of the study, colleagues at the U-M College of Engineering used a technique called hydrothermal liquefaction to convert the algae into combustible oils, or biocrude, which can be refined to make transportation fuels like biodiesel.

"First we evaluated different combinations of algae in the lab, and then we brought the best ones out to nature, where they were exposed to fluctuating weather conditions, pests, disease and all the other factors that have plagued algae-based fuel research efforts for 40 years," Godwin said.

In their analysis of the algal samples collected during the 10-week E.S. George Reserve study, researchers compared the ability of monocultures and polycultures to do several jobs at once: to grow lots of algal biomass, to yield high-quality biocrude, to remain stable through time, to resist population crashes and to repel invasions by unwanted algal species.

Their analysis showed that the use of polycultures significantly delayed invasion by unwanted species of algae; that biocrude yields were significantly higher in the two- and four-species polycultures than in the monocultures; and that diverse crops of algae were more stable over time.

And while monocultures tended to be good at one or two jobs at a time, polycultures performed more of the jobs at higher levels than any of the monocultures, a trait called multifunctionality.
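One common way ecologists turn "doing many jobs at once" into a single number - shown here only as an illustration, not necessarily the metric used in the U-M analysis - is to scale each function by its best observed value and average the results:

```python
# Hedged sketch of an averaging-style multifunctionality score. The function
# names and values below are made up solely to show the calculation; they are
# not data from the study.
def multifunctionality(functions: dict, maxima: dict) -> float:
    """Average of each function scaled by its maximum observed across cultures."""
    return sum(functions[name] / maxima[name] for name in functions) / len(functions)

maxima = {"biomass": 10.0, "biocrude": 5.0, "stability": 1.0}
mono = {"biomass": 10.0, "biocrude": 3.0, "stability": 0.5}   # best at one job
poly = {"biomass": 7.0, "biocrude": 4.5, "stability": 0.9}    # good at all jobs
print(multifunctionality(mono, maxima))  # 0.70
print(multifunctionality(poly, maxima))  # ~0.83
```

Under a score like this, a culture that is merely good at every job can outrank one that is best at a single job, which is the pattern the researchers describe.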

But at the same time, polycultures produced less biomass than the best-performing monoculture. And the use of polycultures had no significant effect on the magnitude and timing of sudden, sharp declines in algal production known as population crashes.

"Our findings suggest there is a fundamental tradeoff when growing algal biofuel," said Cardinale, a professor at the U-M School for Environment and Sustainability.

"You can grow single-species crops that produce large amounts of biomass but are unstable and produce less biocrude. Or, if you are willing to give up some yield, you can use mixtures of species to produce a biofuel system that is more stable through time, more resistant to pest species, and which yields more biocrude oil."

Authors of the Global Change Biology-Bioenergy paper, in addition to Godwin and Cardinale, are U-M's Aubrey Lashaway and David Hietala, and Phillip Savage of Pennsylvania State University.

Members of the same research team have published other recent papers that examine the benefits of diversity in algal biofuels systems for minimizing fertilizer use, recycling wastes, and improving the chemical properties of biocrude.

"Collectively, these results show how applying principles from ecology could help in the design of next-generation renewable fuel systems," Godwin said.

Credit: 
University of Michigan

Orange, tea tree & eucalyptus oils sweeten diesel fumes

QUT PhD researcher Ashrafur Rahman tested each of the three waste oils for performance and emissions as a 10 per cent oil/90 per cent diesel blend in a six-cylinder, 5.9-litre diesel engine.

"As only therapeutic grade oil can be used, there is a substantial volume of low-value waste oil that currently is stored, awaiting a use," Mr Rahman said.

"Our tests found essential oil blends produced almost the same power as neat diesel with a slight increase in fuel consumption.

"Diesel particulate emissions, which are dangerous to human health, were lower than pure diesel, but nitrogen oxide emissions, a precursor to photochemical smog, were slightly higher."

Mr Rahman said the abundance of the three oils could mean that fragrant fumes on farms were not far off.

"Orange, eucalyptus and tea tree are either native or grown extensively in Australia for essential oil production.

"We see the main use for an essential oil/diesel blend would be in the agricultural sector, especially in the vehicles used by the producers of these oils.

"With further improvement of some key properties, essential oils could be used in all diesel vehicles."

Credit: 
Queensland University of Technology