Tech

Plants share defensive proteins in evolutionary pick 'n' mix

The recent research, led by the Krasileva Group at the Earlham Institute and The Sainsbury Laboratory, used phylogenetics (the study of how DNA sequences are related) to identify how these 'bait' genes are distributed throughout various wild and domestic grasses, including important crop plants such as wheat, barley, maize and rice. This fresh evidence could help scientists and breeders arm crop plants against a swathe of emerging diseases.
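
For readers curious how this kind of comparison works in practice, the sketch below groups toy gene sequences by similarity using a standard distance-based clustering. The sequences, gene names and threshold are hypothetical, and this is only a minimal illustration of the idea, not the study's actual phylogenetic pipeline across nine grass genomes.

```python
# Minimal sketch (not the study's pipeline): cluster toy NLR gene fragments by
# pairwise identity to show how genes can be grouped across grass species.
# All sequences and names below are hypothetical.
from itertools import combinations

from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical aligned sequence fragments (equal length) from different grasses.
seqs = {
    "wheat_NLR1":  "ATGGCTTCAGGTCTA",
    "barley_NLR1": "ATGGCTTCAGGACTA",
    "maize_NLR1":  "ATGACTTGAGGACTT",
    "rice_NLR1":   "ATGACTAGAGGACTT",
}
names = list(seqs)

def p_distance(a: str, b: str) -> float:
    """Fraction of aligned positions that differ (simple p-distance)."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

# Build a symmetric distance matrix, then condense it for scipy's linkage().
n = len(names)
dist = [[0.0] * n for _ in range(n)]
for (i, a), (j, b) in combinations(enumerate(names), 2):
    d = p_distance(seqs[a], seqs[b])
    dist[i][j] = dist[j][i] = d

tree = linkage(squareform(dist), method="average")   # UPGMA-style clustering
groups = fcluster(tree, t=0.2, criterion="distance")  # arbitrary cut-off
for name, g in zip(names, groups):
    print(f"{name}: cluster {g}")
```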

By looking at the genetic history of these plants, the team found several interesting groups of genes that gravitated towards forming novel fusions with plant receptors, with the greatest diversity of these fusions found in wheat. These proteins are involved in plant stress responses in general, and particularly in defense against pathogen attack.

"If we could understand better how these proteins with these additional 'integrated' domains were formed during recent evolution, then there is a good chance that we could engineer genes with specific domains to provide resistance to new types of pathogen attack," says Paul Bailey, lead author of the study who performed the phylogenetic analysis.

The research team focused primarily on bread wheat, chosen for the size and complexity of its genome, alongside the genomes of eight other grass species. Advances in genome sequencing have enabled the scientists to compare gene similarity between closely related species - wheat and barley, for example - and more distantly related species such as wheat and maize.

"We were intrigued to find that even between closely-related species, there can be significant variability between the type of fusion events that have occurred, indicating that the process is still active and ongoing in these plants," added Paul.

Plants have an immune system that helps them recognise a range of pathogens, but they have to keep up with enemy fire that is perpetually evolving and constantly finding new ways of getting around a plant's primary defenses.

However, certain plant pathogen receptors known as 'NLR' proteins have been shown to be able to recognise some of the signals associated with disease-causing agents. By acquiring sections of proteins coded by other genes, which are often the target of pathogen infection, NLRs act as an 'integrated defense decoy'.

Dr Ksenia Krasileva, who led the project, added: "When plants are able to evolve rapidly, they can respond to pathogens working their way around other defenses. Uncovering how plants stay healthy is still a challenge. This paper is a result of a successful collaboration of several genomics experts, including our group, Dr Matthew Moscou at TSL and Dr Wilfried Haerty at EI. Together, we uncovered one of the tricks plants use, which will help to generate resistant crops."

Plant pathogens are continually evolving, but in the future, the research team hope to be able to generate novel proteins with specifically integrated domains that give resistance to pathogens, particularly new threats to our crops.

Credit: 
Earlham Institute

Why US policies for dyslexia should be scrapped

Many of the current US Federal and State dyslexia laws should be scrapped as they ignore scientific evidence and privilege some poor readers at the expense of huge numbers of others, according to a leading expert in reading disability.

Professor Julian Elliott from Durham University in the UK says valuable resources are put into expensive and time-consuming tests to diagnose children - tests that are not only often highly questionable, but also do not point to forms of learning support different from what should be provided to any other poor reader.

He suggests that the priority for teaching professionals and others should be to identify reading difficulties early in any child and intervene as quickly as possible rather than pursue a questionable diagnosis of dyslexia with all the costs and time delays involved.

Professor Elliott, a former teacher of children with learning difficulties and educational (school) psychologist, and now professor of educational psychology at Durham University, will be speaking at the Everyone Reading Conference in New York City on Monday, 5 March, and at Cornell University on Thursday, 8 March.

He will argue that the current dyslexia policies in the US undermine the education and life chances of large numbers of children, particularly those from socially and economically disadvantaged backgrounds.

The Individuals with Disabilities Education Act (IDEA) includes dyslexia as a specific learning disability that can make students eligible for special education. While political pressure has proven very successful in getting this term on the statute books - at least 33 states have passed dyslexia-related legislation since 2012 - it is not at all clear what criteria can, or should, be used to make such a diagnosis, says Professor Elliott. He argues that many of the symptoms listed for dyslexia are simply factors that are more prevalent in those who struggle to learn to read rather than a feature of so-called dyslexic readers.

Professor Elliott, who is author of the book 'The Dyslexia Debate', said: "The key problem stems from popular notions, promulgated by powerful advocacy groups and vested professional interests, that not all poor readers are dyslexic and that diagnostic testing using special psychological tests is necessary for identifying the dyslexic child.

"Parents are being totally misled about the importance of such a dyslexia diagnosis. They are often led to believe that dyslexic children have different brains to other poor readers and that this can result in them demonstrating special gifts and talents. Such assertions are not supported by the facts.

"Those children who are diagnosed with dyslexia, it is wrongly believed, need special forms of intervention that differ from those appropriate for other poor readers.

"What we need instead is for teachers to look at each child's educational needs and spot problems early on, without pursuing a scientifically dubious label or diagnosis. This alternative approach would then be helpful for all children who struggle to read, not just those with the means to get a diagnosis."

In his research, Professor Elliott has shown that teaching approaches that help children deemed to be poor readers are no different from those that help children labelled dyslexic. Drawing upon his experience in the UK, he argues that myths around dyslexia persist despite overwhelming evidence against them, and that the dyslexia industry privileges poor readers with a dyslexia diagnosis at the expense of large numbers of other poor readers.

Professor Elliott said: "This mythical distinction between a child with a dyslexia diagnosis and a child who is a poor reader but not diagnosed, increasingly prevalent in many areas of the United States, undermines the need to tackle reading problems in all children irrespective of their ability to gain the dyslexic label."

Although Professor Elliott does not question the existence of the very real underlying problems that those with complex reading difficulties typically experience, he is critical of dyslexia as a term often used to describe a wide range of problems, of varying degrees of severity, in a haphazard and imprecise fashion.

Research has shown that it has proven impossible to identify a dyslexic sub-group that is scientifically justifiable, and which has value for practitioners. Symptoms found in one person leading to a diagnosis of dyslexia may well be absent in another person similarly diagnosed, according to Professor Elliott.

To illustrate his argument, in his presentations Professor Elliott will report new research findings from his team's study of school psychologists and specialist teachers. Additionally, he will highlight conceptual and operational flaws in dyslexia legislation currently passing through the New York State Senate.

Credit: 
Durham University

Rice team designs lens-free fluorescent microscope

image: FlatScope is being developed at Rice University for use as a fluorescent microscope able to capture three-dimensional data and produce images from anywhere within the field of view.

Image: 
Jeff Fitlow/Rice University

HOUSTON - (March 5, 2018) - Lenses are no longer necessary for some microscopes, according to Rice University engineers developing FlatScope, a thin fluorescent microscope whose abilities promise to surpass those of old-school devices.

A paper in Science Advances by Rice engineers Ashok Veeraraghavan, Jacob Robinson, Richard Baraniuk and their labs describes a wide-field microscope thinner than a credit card, small enough to sit on a fingertip and capable of micrometer resolution over a volume of several cubic millimeters.

FlatScope eliminates the tradeoff that hinders traditional microscopes in which arrays of lenses can either gather less light from a large field of view or gather more light from a smaller field.

The Rice team began developing the device as part of a federal initiative by the Defense Advanced Research Projects Agency as an implantable, high-resolution neural interface. But the device's potential is much greater. The researchers claim FlatScope, an advance on the labs' earlier FlatCam, could be used as an implantable endoscope, a large-area imager or a flexible microscope.

"We think of this as amping up FlatCam so it can solve even bigger problems," Baraniuk said.

Traditional fluorescent microscopes are essential tools in biology. They pick up fluorescent signals from particles inserted into cells and tissues that are illuminated with specific wavelengths of light. The technique allows scientists to probe and track biological agents with nanometer-scale resolution.

But like all traditional microscopes, telescopes and cameras, their resolution depends on the size of their lenses, which can be large and heavy and limit their use in biological applications.

The Rice team takes a different approach. It uses the same charge-coupled device (CCD) chips found in all electronic cameras to capture incoming light, but the comparisons stop there. Like the FlatCam project that inspired it, FlatScope's field of view equals the size of the CCD sensor, which can be as large or as small as required. It's flat because it replaces the array of lenses in a traditional microscope with a custom amplitude mask.

This mask, which resembles a bar code, sits directly in front of the CCD. Light that comes through the mask and hits the sensor becomes data that a computer program interprets to produce images.

The algorithm can focus on any part of the three-dimensional data the scope captures and produce images of objects smaller than a micron anywhere in the field.

That resolution is what makes the device a microscope, Robinson said. "A camera in your mobile phone or DSLR typically gets on the order of 100-micron resolution," he said. "When you take a macro photo, the resolution is about 20 to 50 microns.

"I think of a microscope as something that allows you to image things on the micron scale," he said. "That means things that are smaller than the diameter of a human hair, like cells, parts of cells or the fine structure of fibers."

Achieving that resolution required modifications to the FlatCam mask to further cut the amount of light that reaches the sensor as well as a rewrite of their software, Robinson said. "It wasn't as trivial as simply applying the FlatCam algorithm to the same techniques we used to image things that are far away," he said.

The mask is akin to the aperture in a lensed camera that focuses light onto the sensor, but it's only a few hundred micrometers from the sensor and allows only a fraction of the available light to get through, limiting the amount of data to simplify processing.

"In the case of a megapixel camera, that computational problem requires a matrix of a million times a million elements," Robinson said. "It's an incredibly big matrix. But because we break it down through this pattern of rows and columns, our matrix is just 1 million elements."

That cuts the data for each snapshot from six terabytes to a more practical 21 megabytes, which translates to short processing times. Where early versions of FlatCam required an hour or more to process an image, FlatScope captures 30 frames of 3-D data per second.
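
As a rough illustration of why a separable ("rows and columns") model keeps the computation tractable, the toy sketch below simulates a lensless measurement Y = P_L X P_R^T and recovers the scene with two small pseudo-inverses. The mask values, image sizes and noise level are invented; this is not the FlatScope reconstruction algorithm itself.

```python
# Toy separable lensless-imaging model (illustrative, not FlatScope's code):
# the scene X is measured as Y = P_L @ X @ P_R.T, so calibration stores two
# small matrices instead of one enormous transfer matrix, and a least-squares
# solve recovers the scene.
import numpy as np

rng = np.random.default_rng(0)
n = 64       # scene is n x n pixels
m = 96       # sensor is m x m pixels (m >= n helps conditioning)

P_L = rng.integers(0, 2, size=(m, n)).astype(float)   # left transfer matrix
P_R = rng.integers(0, 2, size=(m, n)).astype(float)   # right transfer matrix
X = np.zeros((n, n)); X[20:40, 28:36] = 1.0            # simple test scene

Y = P_L @ X @ P_R.T                                    # simulated sensor reading
Y += 0.01 * rng.standard_normal(Y.shape)               # a little sensor noise

# Reconstruct with two small pseudo-inverses instead of one huge inverse.
X_hat = np.linalg.pinv(P_L) @ Y @ np.linalg.pinv(P_R).T
print("relative reconstruction error:",
      np.linalg.norm(X_hat - X) / np.linalg.norm(X))
```

Storing two modest matrices in place of one giant scene-to-sensor transfer matrix is, roughly, the rows-and-columns saving Robinson describes.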

Veeraraghavan said the burgeoning internet of things may provide many applications for flat cameras and microscopes. That in turn would drive down costs. "One of the big advantages of this technology compared with traditional cameras is that because we don't need lenses, we don't need postfabrication assembly," he said. "We can imagine this rolling off a fabrication line."

But their primary targets are medical uses, from implantable scopes for the clinic to palm-sized microscopes for the battlefield. "To be able to carry a microscope in your pocket is a neat technology," Veeraraghavan said.

The researchers noted that while their current work is focused on fluorescent applications, FlatScope could also be used for bright-field, dark-field and reflected-light microscopy. They suggested an array of FlatScopes on a flexible background could be used to match the contours of a target.

Credit: 
Rice University

Scientists observe a new quantum particle with properties of ball lightning

video: This is a side view of the experimental creation of a 3-D skyrmion. The imaging method produces three regions where the spins point up (right), horizontally (center), and down (left). In the actual experiment, there is only a single condensate which contains all these different regions. Brighter color denotes a higher particle density.

Image: 
Tuomas Ollikainen

Scientists at Amherst College and Aalto University have created, for the first time, a three-dimensional skyrmion in a quantum gas. The skyrmion was predicted theoretically over 40 years ago, but only now has it been observed experimentally.

In an extremely sparse and cold quantum gas, the physicists have created knots made of the magnetic moments, or spins, of the constituent atoms. The knots exhibit many of the characteristics of ball lightning, which some scientists believe to consist of tangled streams of electric currents. The persistence of such knots could be the reason why ball lightning, a ball of plasma, lives for a surprisingly long time in comparison to a lightning strike. The new results could inspire new ways of keeping plasma intact in a stable ball in fusion reactors.

'It is remarkable that we could create the synthetic electromagnetic knot, that is, quantum ball lightning, essentially with just two counter-circulating electric currents. Thus, it may be possible that natural ball lightning could arise in a normal lightning strike,' says Dr Mikko Möttönen, leader of the theoretical effort at Aalto University.

Möttönen also recalls having witnessed a ball lightning briefly glaring in his grandparents' house. Observations of ball lightning have been reported throughout history, but physical evidence is rare.

The dynamics of the quantum gas matches that of a charged particle responding to the electromagnetic fields of a ball lightning.

'The quantum gas is cooled down to a very low temperature where it forms a Bose-Einstein condensate: all atoms in the gas end up in the state of minimum energy. The state does not behave like an ordinary gas anymore but like a single giant atom,' explains Professor David Hall, leader of the experimental effort at Amherst College.

The skyrmion is created by first polarizing the spin of each atom to point upward along an applied magnetic field. Then, the applied field is suddenly changed in such a way that a point where the field vanishes appears in the middle of the condensate. Consequently, the spins of the atoms start to rotate toward the new direction of the applied field at their respective locations. Since the magnetic field points in all possible directions near the field zero, the spins wind into a knot.
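
A quick numerical way to see why the spins end up wrapping every direction is sketched below: for a simple quadrupole-like field that vanishes at the origin, the local field direction already covers essentially the whole sphere in any neighbourhood of the zero. The field form B = (x, y, -2z) and the sampling are illustrative simplifications, not the experiment's exact configuration.

```python
# Illustrative check (not the experiment's code): for a quadrupole-like field
# B(r) = (x, y, -2z), which vanishes at the origin, the field *direction* near
# the zero spans essentially all orientations. Spins that follow the local
# field therefore cover the full sphere of directions, the ingredient that
# lets them wind into a knotted (skyrmion) texture.
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(-1.0, 1.0, size=(20000, 3))          # points around the zero
B = np.column_stack((pts[:, 0], pts[:, 1], -2.0 * pts[:, 2]))
B_hat = B / np.linalg.norm(B, axis=1, keepdims=True)    # local field directions

# Pick a few arbitrary reference directions and confirm each is closely matched
# by some local field direction (i.e., the direction map covers the sphere).
refs = np.array([[0, 0, 1], [0, 0, -1], [1, 0, 0], [-1, 1, 1]], dtype=float)
refs /= np.linalg.norm(refs, axis=1, keepdims=True)
best = (B_hat @ refs.T).max(axis=0)      # best alignment found per reference
print("closest alignment to each reference direction:", np.round(best, 3))
```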

The knotted structure of the skyrmion consists of linked loops, at each of which all the spins point to a certain fixed direction. The knot can be loosened or moved, but not untied.

'What makes this a skyrmion rather than a quantum knot is that not only does the spin twist but the quantum phase of the condensate winds repeatedly,' says Hall.

If the direction of the spin is changing in space, the velocity of the condensate responds just as would happen for a charged particle in a magnetic field. The knotted spin structure thus gives rise to a knotted artificial magnetic field that exactly matches the magnetic field in a model of ball lightning.

'More research is needed to know whether or not it is also possible to create a real ball lightning with a method of this kind. Further studies could lead to finding a solution to keep plasma together efficiently and enable more stable fusion reactors than we have now,' Möttönen explains.

Credit: 
Aalto University

Capturing the balance of nature

image: Researcher spends over a decade recording the changes in the populations of all the fish and jellyfish he encountered.

Image: 
Kyoto University / Reiji Masuda

Kyoto, Japan -- In a study spanning twelve years, researchers from Kyoto University and Ryukoku University have developed a method to calculate the fluctuating stability of a natural ecological community in Maizuru Bay.

Their findings, published in Nature, provide insight into and new methodologies for ecological and population research.

How biological communities are maintained is a big question in ecology. While current studies suggest community stability can be influenced by species diversity and interactions, these ideas have almost never been tested in natural ecosystems.

"Previous research focused on issues such as the birth-death process, like when a predator eats prey, and inside a somewhat controlled environment," explains lead author Masayuki Ushio. "Even then, it is challenging to measure the rapidly changing interactions of multiple species in nature."

A key reason for this is that natural ecosystems do not typically exhibit equilibrium dynamics, and so the most viable way to measure changes is to keep constant records of organisms in a particular location.

Fortunately, Reiji Masuda of the Field Science Education and Research Center has been doing just that in Maizuru Bay. For twelve years, he has been diving into the waters every two weeks, recording the populations of all the fish and jellyfish he encountered.

"I initially started these observations in 2002 to see what organisms lived in the area and how they change over time," Masuda describes. "When I showed my work to Masayuki Ushio and Michio Kondoh at Ryukoku University, they suggested we work together and build a model to chart the interactions between the organisms."

The model was constructed by applying a time series analysis tool -- empirical dynamic modeling, or EDM -- to the Maizuru Bay data set. EDM is designed to analyze the fluctuating dynamics of a natural ecosystem without assuming any predefined equations. The approach was then extended to analyze the ecosystem's dynamic stability.
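
The study's EDM analysis goes much further (it reconstructs interaction networks and a time-varying stability index), but its core building block, forecasting from nearest neighbours in a lag-embedded state space, can be sketched briefly. Everything below, from the logistic-map "population" to the embedding parameters, is illustrative rather than taken from the paper.

```python
# Minimal simplex-projection sketch in the spirit of empirical dynamic modeling
# (EDM): embed a single time series with lags, find nearest neighbours in the
# reconstructed state space, and forecast one step ahead from their futures.
import numpy as np

def simplex_forecast(series, E=3, tau=1, train_frac=0.5):
    x = np.asarray(series, dtype=float)
    # Time-delay embedding: rows are (x_t, x_{t-tau}, ..., x_{t-(E-1)tau}).
    idx = np.arange((E - 1) * tau, len(x) - 1)
    states = np.column_stack([x[idx - k * tau] for k in range(E)])
    targets = x[idx + 1]                        # value one step ahead

    split = int(len(idx) * train_frac)
    lib_s, lib_t = states[:split], targets[:split]     # "library" half
    pred_s, pred_t = states[split:], targets[split:]   # half to forecast

    preds = []
    for s in pred_s:
        d = np.linalg.norm(lib_s - s, axis=1)
        nn = np.argsort(d)[: E + 1]                     # E+1 nearest neighbours
        w = np.exp(-d[nn] / max(d[nn][0], 1e-12))       # distance-based weights
        preds.append(np.sum(w * lib_t[nn]) / np.sum(w))
    return np.corrcoef(preds, pred_t)[0, 1]             # forecast skill (rho)

# Illustrative nonlinear time series (logistic map), standing in for a count.
x = [0.4]
for _ in range(500):
    x.append(3.8 * x[-1] * (1.0 - x[-1]))

print("forecast skill rho:", round(simplex_forecast(x, E=3), 3))
```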

What they found was a complex web of interactions in which organisms have positive or negative effects on each other. More importantly, they showed that short-term changes in this network influence overall community dynamics.

"We could even see that things such as diversity, species interactions, and stability itself fluctuate throughout time," says Ushio. "This suggests that these properties are key to understanding the mechanisms underlying the functioning of natural ecosystems."

The team's methods can be readily applied to other fields, from neuroscience and microbiology to economics.

"This is by far one of the most comprehensive looks at how much an ecosystem can change in the natural world," concludes team leader Michio Kondoh. "I was surprised to see that community stability, which is often assumed to be constant, is in fact changing over time. The robustness of ecological communities depends on timing and seasons."

Credit: 
Kyoto University

Vertical measurements of air pollutants in urban Beijing

image: Photos taken at (a) the ground level (up), and (b) 280 m (down) on the Beijing 325 m Meteorological Tower at 8:30 am, Nov. 21st, 2014.

Image: 
The photo on the left was taken by SUN Yele; The photo on the right was taken by the camera installed on the Tower.

Severe haze episodes with surprisingly high concentrations of fine particles (PM2.5) still occur in Beijing in the fall and winter seasons, although air quality has improved in recent years. Air pollution in Beijing often shows strong vertical differences. For example, on a hazy day we can enjoy fresh air and good visibility at a mountain peak while the city below is buried in severely polluted, low-visibility air. In the urban area, we also often observe the coexistence of haze and blue sky (Figure 1).

To gain an in-depth understanding of how air pollutants evolve vertically within the urban boundary layer, a team from the State Key Laboratory of Atmospheric Boundary Layer Physics and Atmospheric Chemistry, Institute of Atmospheric Physics, CAS, used a container that travels on the Beijing 325 m Meteorological Tower to make vertically resolved measurements of the light extinction coefficient of dry fine particles, gaseous NO2, and black carbon (BC) (Figure 2), from the ground surface up to 260 m during daytime and up to 200 m at night. Simultaneously, non-refractory submicron aerosol (NR-PM1) species, including organics, sulfate, nitrate, ammonium and chloride, were measured at ground level and at 260 m on the tower with an Aerodyne High-Resolution Aerosol Mass Spectrometer (HR-AMS) and an Aerosol Chemical Speciation Monitor (ACSM), respectively. Four distinct types of vertical profiles were identified, with vertical convection (as indicated by mixing layer height), temperature inversion, and local emissions being the three major factors driving changes in the profiles.

The team found that temperature inversion, coupled with the interactions of different air masses, explains the "blue sky - haze" coexistence shown in Figure 1. The tower-based vertically resolved measurements proved to be an essential supplement to lidar measurements, which have a blind zone typically below 200 m. The findings have recently been published in Atmospheric Chemistry & Physics.

Credit: 
Institute of Atmospheric Physics, Chinese Academy of Sciences

Story tips from the Department of Energy's Oak Ridge National Laboratory, March 2018

image: ORNL's Jim Szybist works with a multi-cylinder engine at the lab's National Transportation Research Center.

Image: 
Jason Richards/Oak Ridge National Laboratory, U.S. Dept. of Energy

Ecology -- Better mercury predictions

Analyses of creek algae informed a new model that can more accurately predict the presence of the neurotoxin methylmercury in small headwater ecosystems. For about two years, Oak Ridge National Laboratory scientists studied biofilms collected during different seasons and from various locations along an East Tennessee creek bed and discovered methylmercury in tiny oxygen-deficient pockets within the biofilms' complex ecosystem. "For methylmercury to be produced, the samples had to be grown and incubated in the light to actively photosynthesize, which means oxygen is present," said ORNL's Scott Brooks. "However, methylmercury only forms in anaerobic, or oxygen-free, zones, which means there are optimal conditions for methylmercury production at small scales within the biofilms." The team also found that simply shaking the samples disrupted the biofilms' delicate ecosystem and reduced methylmercury levels. Their newly developed model, described in Environmental Science & Technology, could be applied to other water systems to predict methylmercury production. [Contact: Sara Shoemaker, (865) 576-9219; shoemakerms@ornl.gov]

Image: https://www.ornl.gov/sites/default/files/Ecology_ORNL_2.jpg

Caption: Over time, algae biofilms accumulated on glass washers affixed to a plastic pegboard submerged in East Fork Poplar Creek in Oak Ridge, Tenn. ORNL researchers further analyzed the samples in the laboratory to determine the production of methylmercury. Credit: Todd Olsen and Scott Brooks/Oak Ridge National Laboratory, U.S. Dept. of Energy

Engines -- Fueling innovation

Gasoline-powered automobiles could achieve an 8 percent or greater fuel efficiency gain through a new combustion strategy developed at Oak Ridge National Laboratory. Scientists have demonstrated a new method for reforming fuel over a catalyst, a process that chemically converts fuel into a hydrogen-rich blend. This blend allows more work to be extracted from the engine cylinders, increasing efficiency and saving fuel. "Typically, you incur a fuel penalty when reforming fuel," said ORNL's Jim Szybist. "We've created a systematic approach that addresses that issue and can be used with conventional fuels and conventional emissions controls." The team published the method in Energy & Fuels and is working at ORNL's National Transportation Research Center to demonstrate similar fuel savings at a wider range of engine operation. [Contact: Kim Askey, (865) 576-2841; askeyka@ornl.gov]

Image: https://www.ornl.gov/sites/default/files/news/images/Engines-Combustion_strategy_ORNL.jpg

Caption: ORNL's Jim Szybist works with a multi-cylinder engine at the lab's National Transportation Research Center. Credit: Jason Richards/Oak Ridge National Laboratory, U.S. Dept. of Energy

Materials -- Pure, precise nanostructures

Oak Ridge National Laboratory researchers have directly written high-purity metallic structures narrower than a cold virus--which could open nanofabrication opportunities in electronics, drug delivery, catalysis and chemical separations. At ORNL's Center for Nanophase Materials Sciences, the team rastered a beam from a helium-ion microscope through a solution to locally deposit platinum, forming a ribbon only 15 nanometers in diameter. "This is the first occurrence of direct-write nanofabrication from a liquid-phase precursor using an ion microscope," said ORNL's Olga Ovchinnikova. "With full understanding from experiment and theory, we direct-wrote precise structures with highly pure material using unique tools." The team ran calculations on ORNL's Titan supercomputer and analyzed data from experiments and simulations to understand the dynamic interactions among ions, solids and liquids essential for optimizing the process. Their results were published in the journal Nanoscale. [Contact: Dawn Levy, (865) 576-6448; levyd@ornl.gov]

Image #1: https://www.ornl.gov/sites/default/files/news/images/Materials_nanostructures_1.jpg

Caption #1: ORNL researchers married helium-ion microscopy with a liquid cell from North Carolina-based Protochips Inc., to fabricate exceedingly pure, precise platinum structures. Credit: Stephen Jesse/Oak Ridge National Laboratory, U.S. Dept. of Energy

Image #2: https://www.ornl.gov/sites/default/files/Materials_nanostructures_2.jpg

Caption #2: These structures can be made less than 15 nanometers wide (the white scale bar is 50 nanometers) and are more precise than any produced by direct-write technology. Credit: Stephen Jesse/Oak Ridge National Laboratory, U.S. Dept. of Energy

Neutrons -- Antibacterial breakdown

New insights into certain catalytic enzymes formed by bacteria to break down antibiotics may lead to the design of drugs better equipped to combat resistant bacteria. Scientists at Oak Ridge National Laboratory used neutron crystallography at the lab's Spallation Neutron Source to study the interaction between one of these enzymes, called a beta-lactamase, and an antibiotic, building on data collected with x-ray crystallography. "Antibiotics destroy bacteria by preventing cell walls from forming, but beta-lactamases bind to the drugs to stop this attack," said ORNL's Patricia Langan, coauthor of a study published in ACS Catalysis. The binding of the antibiotic subtly altered the protein structure in the beta-lactamase and helped transfer a proton from one part of the enzyme to another, the first step toward blocking the drug's effects. "With neutrons, we can gain a much deeper understanding of the mechanisms that break down antibiotics," Langan added. [Contact: Kelley Smith, (865) 576-5668; smithks@ornl.gov]

Image: https://www.ornl.gov/sites/default/files/news/images/Neutrons-Antibacterial_breakdown_2.png

Caption: Using neutrons, an ORNL research team studied the protein structure of bacteria-produced enzymes called beta-lactamases by examining one of them to better understand how resistant bacteria behave. This research could help inform the development of more effective antibiotics. Credit: Leighton Coates/Oak Ridge National Laboratory, U.S. Dept. of Energy.

Nuclear -- Simulation scale-up

Nuclear scientists at Oak Ridge National Laboratory are retooling existing software used to simulate radiation transport in small modular reactors, or SMRs, to run more efficiently on next-generation supercomputers. ORNL is working on various aspects of advanced SMR designs through simulations currently performed on the lab's Titan supercomputer. "The next generation of supercomputers will run on more sophisticated architectures based predominately on graphics processing units, or GPUs," ORNL's Steven Hamilton said. "For the radiation transport algorithms to be compatible, we are preparing now so that we can take advantage of the full capability of GPU-based systems and run simulations as efficiently as possible." The ORNL team leveraged Titan's hybrid platform system that includes GPUs to develop and test the radiation transport codes. The newly developed method, described in Annals of Nuclear Energy, will be further scaled up to run larger simulations efficiently when future machines come online. [Contact: Sara Shoemaker, (865) 576-9219; shoemakerms@ornl.gov]
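
The ORNL codes themselves are not shown here, but the sketch below conveys why Monte Carlo radiation transport maps well onto GPUs: each particle history is independent, so millions of them can be processed in parallel. The 1-D slab geometry, cross sections and scattering model are deliberately simplified, illustrative assumptions, not the lab's production physics.

```python
# Toy Monte Carlo radiation-transport kernel (not ORNL's production code):
# track many independent particle histories through a 1-D slab and tally how
# many are absorbed, reflected, or transmitted. Because each history is
# independent, this style of simulation parallelizes naturally on GPUs.
import numpy as np

def slab_transport(n_particles=200_000, thickness=2.0,
                   sigma_total=1.0, scatter_prob=0.6, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(n_particles)               # position within the slab
    direction = np.ones(n_particles)        # +1 moving right, -1 moving left
    alive = np.ones(n_particles, dtype=bool)
    absorbed = transmitted = reflected = 0

    while alive.any():
        n = alive.sum()
        step = -np.log1p(-rng.random(n)) / sigma_total   # exponential free path
        x[alive] += direction[alive] * step

        escaped_right = alive & (x > thickness)
        escaped_left = alive & (x < 0.0)
        transmitted += escaped_right.sum()
        reflected += escaped_left.sum()
        alive &= ~(escaped_right | escaped_left)

        # Collisions: scatter (randomize direction) or absorb.
        idx = np.flatnonzero(alive)
        scattered = rng.random(idx.size) < scatter_prob
        direction[idx[scattered]] = rng.choice([-1.0, 1.0], size=scattered.sum())
        absorbed += (~scattered).sum()
        alive[idx[~scattered]] = False

    total = n_particles
    return transmitted / total, reflected / total, absorbed / total

t, r, a = slab_transport()
print(f"transmitted={t:.3f} reflected={r:.3f} absorbed={a:.3f}")
```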

Image: https://www.ornl.gov/sites/default/files/news/images/Nuclear_simulation_scale-up%20R4.jpg

Caption: ORNL uses supercomputers to simulate radiation transport in small modular reactors using Monte Carlo codes. Credit: Steven Hamilton/Oak Ridge National Laboratory, U.S. Dept. of Energy

Credit: 
DOE/Oak Ridge National Laboratory

KAIST finds the principle of electric wind in plasma

image: Figure 1. This is a plasma jet image.

Image: 
KAIST

A KAIST team identified the basic principle of electric wind in plasma. This finding will contribute to developing technology in various applications of plasma, including fluid control technology.

Professor Wonho Choe from the Department of Physics and his team identified the main principle of neutral gas flow in plasma, known as 'electric wind', in collaboration with Professor Se Youn Moon's team at Chonbuk National University.

Electric wind in plasma is a well-known consequence of interactions arising from collisions between charged particles (electrons or ions) and neutral particles. It refers to the flow of neutral gas that occurs when charged particles accelerate and collide with a neutral gas.

This is a way to create air movement without mechanical moving parts, such as fan blades, and it is gaining interest as a next-generation technology to replace existing fans. However, until now there has been no experimental evidence of what causes it.

To identify the cause, the team used atmospheric pressure plasma. As a result, they succeeded in qualitatively separating the contributions of streamer propagation and space-charge drift to the electrohydrodynamic (EHD) force.

According to the team, streamer propagation has very little effect on the electric wind; rather, the space-charge drift that follows streamer propagation and collapse is its main cause.

The team also identified that electrons, instead of negatively charged ions, were key components of electric wind generation in certain plasmas.

Furthermore, electric wind with the highest speed of 4 m/s was created in a helium jet plasma, which is one fourth the speed of a typhoon. These results indicate that the study could provide basic principles to effectively control the speed of electric wind.

Professor Choe said, "These findings set a significant foundation for understanding the interactions between electrons or ions and neutral particles that occur in weakly ionized plasmas, such as atmospheric pressure plasmas. This can play an important role in expanding the field of fluid-control applications using plasmas, which is becoming of economic and commercial interest."

This research, led by Sanghoo Park, was published online in Nature Communications on January 25.

Credit: 
The Korea Advanced Institute of Science and Technology (KAIST)

Reducing a building's carbon output can also lower costs

Researchers from Concordia University's Department of Building, Civil and Environmental Engineering have found a way to significantly reduce carbon emissions produced by residential and non-residential buildings, while also cutting costs.

Heating, cooling, and powering hospitals, hotels, city halls, apartment complexes and other large buildings that share built energy systems makes for a complex and potentially costly climate-change problem.

Add to this the challenges posed by Canada's climate and size -- especially in the Far North where remote communities are located considerable distances from the power grid.

In 2014, Canadian homes and buildings contributed nearly a fifth of Canada's total greenhouse gas emissions.

"It often feels like we have to choose between our financial constraints and using more energy-efficient measures," says Mohammad Sameti, a PhD candidate in Building Engineering at Concordia.

"But what our method shows is that we can efficiently integrate a given system to positively affect both."

To reduce overall energy consumption, Sameti and Fariborz Haghighat, professor in the Department of Building, Civil, and Environmental Engineering and Tier 1 Concordia Research Chair in Energy and Environment, developed a way to optimize the integration of multiple systems across multiple buildings.

They looked at a grid of eight residential buildings with a variety of characteristics, operating costs and technical constraints to arrive at an energy-efficient and cost-effective usage pattern. The researchers used hydro-powered heat pumps and lake cooling -- which uses large bodies of naturally cold water as heat sinks -- as renewable energy sources in their simulations.

After running all possible variations, the team found that by prioritizing the reduction of carbon emissions, they could cut costs by 75 per cent while also reducing emissions by 59 per cent.

However, when they prioritized overall costs instead, it resulted in savings of just 38 per cent, but carbon emissions were much higher. "To optimize cost, we had to prioritize systems that burn fossil fuels. These technologies are cheaper to install and operate than the renewable energy models, but offer no reduction in emissions," explains Sameti.
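
The researchers' model is a detailed optimization over many technologies and constraints, but the cost-versus-emissions trade-off they describe can be illustrated with a toy weighted-sum selection. Every option, price and emission figure below is invented for the example; it is a sketch of the general technique, not the paper's method.

```python
# Toy illustration of the cost-vs-emissions trade-off (all numbers invented,
# and far simpler than the researchers' model): for a small district, pick the
# heating/cooling option that minimizes a weighted sum of annual cost and CO2,
# then sweep the weight from "cost priority" to "carbon priority".
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    cost: float   # annualized cost per building, arbitrary currency units
    co2: float    # annual emissions per building, tonnes CO2-equivalent

# Hypothetical options available to every building in the district.
OPTIONS = [
    Option("gas boiler + electric chiller", cost=100.0, co2=60.0),
    Option("hydro-powered heat pump",       cost=130.0, co2=5.0),
    Option("heat pump + lake cooling",      cost=145.0, co2=2.0),
]
N_BUILDINGS = 8

def district_choice(carbon_weight: float):
    """Pick the option minimizing (1-w)*cost + w*co2 (both roughly normalized)."""
    def score(o: Option) -> float:
        return (1.0 - carbon_weight) * o.cost / 100.0 + carbon_weight * o.co2 / 60.0
    best = min(OPTIONS, key=score)
    return best, N_BUILDINGS * best.cost, N_BUILDINGS * best.co2

for w in (0.0, 0.5, 1.0):           # 0 = pure cost priority, 1 = pure carbon priority
    opt, cost, co2 = district_choice(w)
    print(f"w={w:.1f}: {opt.name:32s} cost={cost:7.0f}  co2={co2:5.0f} t")
```

Sweeping the weight shows how a carbon-first objective and a cost-first objective can land on quite different system choices, which is the pattern the Concordia team reports at much larger scale.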

"Renewable energy sources used in the optimal simulation create a net-zero energy usage by the network, removing the need to rely on traditional heating and cooling technologies with higher emissions, and draw less power from the grid."

For Canada's northern communities, optimizing energy usage in this manner offers the chance to integrate technologies better suited to their remote locations far from the power grid and fossil-fuel supplies.

The researchers' findings were published in December by the journal Applied Energy.

The virtual model tested by Haghighat and Sameti considered multiple renewable and non-renewable energy sources.

They also had to consider issues within the grid -- for example, the age of buildings or how their energy use can change at different times or during different seasons.

"Because of the complexity of the problem and the large number of decision variables involved, we needed to run all possible variables," said Haghighat.

They demonstrated that a significant reduction in carbon emissions is possible without changing all systems in all buildings in a grid -- a process that has to happen slowly with periodic investment in new equipment.

As a result, their methodology can be applied as changes are made to a system over time.

This research aims to further lower both carbon emissions and overall costs by optimal integration and sizing of energy storage systems (both thermal and electrical) into the community. The ultimate goal will be the successful optimization of a net-zero energy district (nZED).

To make widespread adoption of their methods a reality, Sameti and Haghighat are hard at work expanding their application to ever more complex networks.

Credit: 
Concordia University

Discovery shows wine grapes gasping for breath

image: This is a miniature oxygen probe measuring oxygen in a Shiraz grape.

Image: 
University of Adelaide

University of Adelaide researchers have discovered how grapes "breathe", and that shortage of oxygen leads to cell death in the grape.

The discovery raises many questions about potentially significant impacts on grape and wine quality, flavour, and vine management, and may lead to new ways of selecting varieties for warming climates.

"In 2008 we discovered the phenomenon of cell death in grapes, which can be implicated where there are problems with ripening. We've since been trying to establish what causes cell death," says Professor Steve Tyerman, Chair of Viticulture at the University of Adelaide's Waite campus.

"Although there were hints that oxygen was involved, until now we've not known of the role of oxygen and how it enters the berry."

Professor Tyerman and PhD student Zeyu Xiao from the University's Australian Research Council (ARC) Training Centre for Innovative Wine Production have identified that during ripening, grapes suffer internal oxygen shortage. The research was in collaboration with Dr Victor Sadras, South Australian Research and Development Institute (SARDI), and Dr Suzy Rogiers, NSW Department of Primary Industries, Wagga Wagga.

Published in the Journal of Experimental Botany, the researchers describe how grape berries suffer internal oxygen shortage during ripening. With the use of a miniature oxygen measuring probe - the first time this has been done in grapes - they compared oxygen profiles across the flesh inside grapes of Chardonnay, Shiraz and Ruby Seedless table grape.

They found that the level of oxygen shortage closely correlated with cell death within the grapes. Respiration measurements indicated that this would be made worse by high temperatures during ripening - expected to happen more frequently with global warming.

"By manipulating oxygen supply we discovered that small pores on the surface of the berry stem were vital for oxygen supply, and if they were blocked this caused increased cell death within the berry of Chardonnay, essentially suffocating the berry. We also used micro X-ray computed tomography (CT) to show that air canals connect the inside of the berry with the small pores on the berry stem," says Mr Xiao.

"Shiraz has a much smaller area of these oxygen pores on the berry stem which probably accounts for its greater sensitivity to temperature and higher degree of cell death within the berry."

Professor Vladimir Jiranek, Director of the University of Adelaide's ARC Training Centre for Innovative Wine Production, says: "This breakthrough on how grapes breathe will provide the basis for further research into berry quality and cultivar selection for adapting viticulture to a warming climate."

Credit: 
University of Adelaide

New statistics reveal the shape of plastic surgery

image: After a 4% decline in 2016, there was a dramatic spike in breast reduction surgeries in 2017, increasing by 11%.

Image: 
The American Society of Plastic Surgeons

New data released by the American Society of Plastic Surgeons (ASPS) shows continued growth in cosmetic procedures over the last year. According to the annual plastic surgery procedural statistics, there were 17.5 million surgical and minimally-invasive cosmetic procedures performed in the United States in 2017, a 2 percent increase over 2016.

The statistics also reveal Americans are turning to new and innovative ways to shape their bodies, as minimally-invasive cosmetic procedures have increased nearly 200% since 2000.

Top 5 Cosmetic Surgical and Minimally-Invasive Procedures

Minimally-invasive cosmetic procedures grew at a slightly higher rate than surgical procedures in 2017. While three of the top-five surgical procedures focused on the body, the top minimally-invasive procedures focused on the face.

Of the nearly 1.8 million cosmetic surgical procedures performed in 2017, the top 5 were:

Breast augmentation (300,378 procedures, up 3 percent from 2016)

Liposuction (246,354 procedures, up 5 percent from 2016)

Nose reshaping (218,924 procedures, down 2 percent from 2016)

Eyelid surgery (209,571 procedures, approximately the same as 2016)

Tummy tuck (129,753 procedures, up 2 percent from 2016)

Among the 15.7 million cosmetic minimally-invasive procedures performed in 2017, the top 5 were:

Botulinum Toxin Type A (7.23 million procedures, up 2 percent from 2016)

Soft Tissue Fillers (2.69 million procedures, up 3 percent from 2016)

Chemical Peel (1.37 million procedures, up 1 percent from 2016)

Laser hair removal (1.1 million procedures, down 2 percent from 2016)

Microdermabrasion (740,287 procedures, down 4 percent from 2016)

Spike in Breast Reductions

After a 4% decline in 2016, there was a dramatic spike in breast reduction surgeries in 2017, increasing by 11%. Breast reduction, also known as reduction mammaplasty, is a procedure to remove excess breast fat, glandular tissue and skin to achieve a breast size in proportion with your body.

"Breast reductions are consistently reported as one of the highest patient satisfaction procedures because it positively affects a woman's quality of life. It addresses both functional and aesthetic concerns," said ASPS President Jeffrey E. Janis, MD.

A Comeback Story For Tummy Tucks

Tummy tucks, which dropped from the top five most popular cosmetic surgical procedures in 2016, have rebounded their way back into the top five in 2017. There were over 2,000 more tummy tuck procedures in 2017 than there were in 2016.

"An improved abdominal contour is something that many of us strive for, but for some patients, that may not be attainable through diet and exercise alone," said Janis. "Age, pregnancy and significant weight changes can impact both the skin and underlying muscle. Tummy tucks performed by a board-certified plastic surgeon remove excess fat and skin and, in most cases, restore weakened or separated muscles to create an improved abdominal profile."

Body Sculpting and Non-Invasive Fat Procedures Boom

More people are choosing to shape different parts of their bodies using ultrasound, radio frequency, infrared light, vacuum massage and injectable medication to reduce fat cells. Non-invasive procedures to eliminate fat and tighten the skin are gaining popularity, with the fastest growing procedure - cellulite treatments - up nearly 20% over last year.

"Unwanted fat is something that affects so many Americans," said Janis. "Plastic surgeons are able to give patients more options than ever before for fat elimination or redistribution. Patients appreciate having options, especially if they can act as maintenance steps while they decide if getting something more extensive down the line will be right for them."

Current non-invasive fat reduction and skin tightening procedures continue to gain popularity:

Non-invasive fat reduction procedures that use special technology to "freeze" away fat without surgery increased 7%

Non-surgical cellulite treatments that use lasers to eliminate fat increased 19% (up 55% since 2000)

Non-invasive skin tightening procedures that target fat and tighten sagging areas increased 9%

Credit: 
MediaSource

Johns Hopkins researchers invent new technology for cancer immunotherapy

Johns Hopkins researchers have invented a new class of cancer immunotherapy drugs that are more effective at harnessing the power of the immune system to fight cancer. This new approach, which was reported in Nature Communications, results in a significant decrease of tumor growth, even against cancers that do not respond to existing immunotherapy.

"The immune system is naturally able to detect and eliminate tumor cells. However, virtually all cancers -- including the most common cancers, from lung, breast and colon cancers to melanomas and lymphomas -- evolve to counteract and defeat such immune surveillance by co-opting and amplifying natural mechanisms of immune suppression," says Atul Bedi, M.D., M.B.A., an associate professor of otolaryngology -- head and neck surgery at the Johns Hopkins University School of Medicine, and senior author of the study.

A major way tumors evade the immune system is via regulatory T cells (Tregs), a subset of immune cells that turn off the immune system's ability to attack tumor cells. Tumors are frequently infiltrated by Tregs, and this is strongly correlated with poor outcome in multiple cancer types.

Many tumors produce high levels of a protein that promotes the development of Tregs. Bedi's team reasoned that since Tregs in the tumor shut down immune responses against tumor cells, turning off Tregs may help immunotherapy work better.

"This is especially challenging because Tregs are not only induced by the TGF? (transforming growth factor-beta) protein made by tumor cells, but make their own TGF? to maintain their identity and function in the tumor," says Bedi. Tregs also make cytotoxic T-lymphocyte associated protein 4 (CTLA-4), which prevents anti-tumor immune cells from acting.

To address this problem, the researchers invented a new class of immunotherapy drugs they called Y-traps. Each Y-trap molecule is an antibody shaped like a Y and fused to a molecular "trap" that captures other molecules nearby, rendering them useless.

The researchers first designed a Y-trap that targets CTLA-4 and traps TGFβ. This Y-trap disables both CTLA-4 and TGFβ, which allows anti-tumor immune cells to fight the tumor and turns down Treg cells.

To test the Y-traps, the team transplanted human cancer cells into mice engineered to have human immune cells. The researchers found that their Y-trap eliminated Treg cells in tumors and slowed the growth of tumors that failed to respond to ipilimumab, a current immunotherapy drug that targets the CTLA-4 protein.

"Tregs have long been a thorn in the side of cancer immunotherapy," says Bedi. "We've finally found a way to overcome this hurdle with this CTLA-4-targeted Y-trap."

Antibodies to another immune checkpoint protein, PD-1, or its ligand (PD-L1), are a central focus of current cancer immunotherapy. While they work in some patients, they don't work in the vast majority of patients.

The research team then designed a Y-trap targeting PD-L1 and trapping TGFβ. In tests with the same engineered mice, this Y-trap worked better than the PD-L1-targeting drugs atezolizumab and avelumab alone, again slowing the growth of tumors that had not previously responded to those drugs.

"These first-in-class Y-traps are just the beginning. We have already invented a whole family of these multifunctional molecules based on the Y-trap technology. Since mechanisms of immune dysfunction are shared across many types of cancer, this approach could have broad impact for improving cancer immunotherapy," says Bedi. "Y-traps could also provide a therapeutic strategy against tumors that resist current immune checkpoint inhibitors."

"This approach appears to be an innovative strategy, and an exciting technical accomplishment to target multiple suppressive mechanisms in the tumor microenvironment," says Robert Ferris, M.D., Ph.D., professor of oncology and director of the Hillman Cancer Center at the University of Pittsburgh. Ferris was not connected with the study. "I look forward to seeing its translation into the clinic."

Bedi envisions using Y-traps not only for treatment of advanced, metastatic cancers, but also as a neoadjuvant therapy to create a "vaccine" effect -- that is, giving them to patients before surgery to prevent recurrence of the disease.

Credit: 
Johns Hopkins Medicine

Researchers use recycled carbon fiber to improve permeable pavement

image: Water runs through Washington State University pervious pavement.

Image: 
Washington State University

PULLMAN, Wash. - A Washington State University research team is solving a high-tech waste problem while addressing the environmental challenge of stormwater run-off.

The researchers have shown they can greatly strengthen permeable pavements by adding waste carbon fiber composite material. Their recycling method, described in the March issue of the Journal of Materials in Civil Engineering, doesn't require using much energy or chemicals -- a critical factor for recycling waste materials.

Traditional vs. pervious

Unlike the impermeable pavement that is used for most roads and parking lots, pervious concrete allows rainwater to freely drain and seep into the ground underneath. Because of increasing concerns about flooding in urban areas and requirements for controlling stormwater run-off, several cities have tried using pervious concrete in parking lots and low-traffic streets. But because it is highly porous, it is not as durable as the traditional concrete that is used on major roads.

Recycling carbon fiber

Carbon fiber composites, meanwhile, have become increasingly popular in numerous industries. Super light and strong, the material is used in everything from airplane wings to wind turbines and cars. While the market is growing about 10 percent per year, however, industries have not figured out a way to easily recycle their waste, which is as much as 30 percent of the material used in production.

Led by Karl Englund, associate research professor, and Somayeh Nassiri, assistant professor in the Department of Civil and Environmental Engineering, the researchers added carbon fiber composite scrap that they received from Boeing manufacturing facilities to their pervious concrete mix. They used mechanical milling to refine the composite pieces to the ideal sizes and shapes. The added material greatly increased both the durability and strength of pervious concrete.

"In terms of bending strength, we got really good results -- as high as traditional concrete, and it still drains really quickly," said Nassiri.

Milling vs. heat or chemicals

The researchers used inexpensive milling techniques instead of heat or chemicals to create a reinforcing element from the waste carbon fiber composites. They maintained and made use of the original strength of the composites by keeping them in their cured composite form. Their mix also required using a lot of the composite material, which would be ideal for waste producers.

"You're already taking waste -- you can't add a bunch of money to garbage and get a product," said Englund. "The key is to minimize the energy and to keep costs down."

The composite materials were dispersed throughout the pavement mix to provide uniform strength.

Testing and mainstreaming

While they have shown the material works at the laboratory scale, the researchers are beginning to conduct real-world tests on pavement applications. They are also working with industry to begin developing a supply chain.

"In the lab this works to increase permeable pavement's durability and strength," said Nassiri. "The next step is to find out how to make it mainstream and widespread."

The research for this project was made possible through a partnership with the Boeing Company.

Credit: 
Washington State University

NASA space laser completes 2,000-mile road trip

image: The ATLAS team carefully lowers the instrument onto a platform, held up by wire-rope coils, in a specially designed transporter truck.

Image: 
NASA/Desiree Stover

Once in orbit after it launches this fall, NASA's ICESat-2 satellite will travel at speeds faster than 15,000 miles per hour. Last week, the satellite's instrument began its journey toward space riding a truck from Maryland to Arizona, never exceeding 65 mph.

ICESat-2, or the Ice, Cloud and land Elevation Satellite-2, is slated to launch in September to measure the height of Earth's surface, particularly the changing polar ice. To do that, it uses a laser instrument called the Advanced Topographic Laser Altimeter System, or ATLAS, that precisely times how long it takes light particles to bounce off Earth and return to the satellite.
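
The real processing chain also corrects for pointing, orbit, tides and the atmosphere, but the basic idea of turning a photon's round-trip time into a surface height reduces to a one-line calculation, sketched below with an illustrative orbital altitude.

```python
# Back-of-the-envelope photon altimetry (illustrative only; actual ICESat-2
# processing also accounts for pointing, orbit, tides and atmospheric delays):
# a photon's round-trip time gives the range to the surface, and subtracting
# that range from the satellite's altitude gives a surface elevation.
C = 299_792_458.0              # speed of light, m/s
ORBIT_ALTITUDE_M = 500_000.0   # roughly 500 km, an illustrative value

def surface_elevation(round_trip_seconds: float) -> float:
    range_m = C * round_trip_seconds / 2.0    # one-way distance to the surface
    return ORBIT_ALTITUDE_M - range_m         # height above the reference level

# A photon returning after ~3.335 milliseconds corresponds to a surface roughly
# 100 m above the reference level in this toy setup.
print(f"{surface_elevation(3.335e-3):.1f} m")
```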

ATLAS, which was designed, built and tested at NASA's Goddard Space Flight Center in Greenbelt, Maryland, arrived in Gilbert, Arizona, at Orbital ATK's facility on Feb. 23, where it will be joined with the spacecraft structure. To deliver the instrument safely to the spacecraft for assembly and testing, the ATLAS team developed special procedures for packing, transporting and monitoring the sensitive hardware.

"There was a lot of care and feeding that went with ATLAS along the road," said Kathy Strickler, ATLAS integration and test lead.

The trip followed a successful series of tests, designed to ensure the ATLAS instrument will function in the harsh environment of space. After the instrument passed those tests, including some in a thermal vacuum chamber, engineers inspected ATLAS to make sure it was clean and in the correct travel configuration. Then, they attached probes to the instrument that would check for vibrations as well as temperature and humidity.

"These probes tracked what ATLAS actually sensed when going over road bumps, and what ATLAS felt as far as temperature and humidity," said Jeffrey Twum, the ATLAS transport lead.

The team then wrapped the instrument - about the size of a Smart Car - in two layers of anti-electrostatic discharge film, to prevent any shocks en route. With its protections in place, a crane lifted ATLAS into a transporter container. The team bolted it to a platform supported by a series of wire-rope coils used to soften the ride, and the cover of the transporter was fastened shut, sealing up the cargo.

The truck carrying ATLAS left the NASA Goddard campus outside Washington around 3 a.m. on President's Day - timed to avoid the worst traffic on Washington's busy Beltway.

Planning the drive itself was a challenge, Twum noted. The convoy - a scout car, the transport truck, a trail vehicle and other support cars - can't drive through state capitals or big cities during rush hour. The convoy can't drive at night without special approvals. It requires permits for some stretches of roads, and is routed to avoid certain construction zones.

The lead car, driving a quarter-mile ahead of the truck, looked out for accidents or debris in the road - anything that could impact the transporter. The trailing SUV helped the truck switch lanes when necessary and watched out for other drivers. The support vehicle carried quality assurance personnel and technicians, who kept an eye on the environment within the truck, monitored any bumps or jostling, and controlled the air flow through the trailer so that no dust particles settled on the ATLAS instrument. They stayed at hotels with big parking lots along the way, taking shifts to monitor the instrument overnight.

The 2,000-mile trip took four and a half days.

The ATLAS instrument is now at Orbital ATK, where engineers will attach it to the spacecraft and conduct additional testing. Then, the complete satellite will be repacked and trucked to its last stop before low-Earth orbit: Vandenberg Air Force Base in California.

Credit: 
NASA/Goddard Space Flight Center

The fine-tuning of two-dimensional materials

image: In situ rhenium doping of monolayer MoS2.

Image: 
Donna Deng/Penn State

Teams of researchers led by Penn State have reached a new understanding of why synthetic 2-D materials often perform orders of magnitude worse than predicted, and have searched for ways to improve these materials' performance in future electronics, photonics, and memory storage applications.

Two-dimensional materials are films only an atom or two thick. Researchers make 2-D materials by the exfoliation method -- peeling a slice of material off a larger bulk material -- or by condensing a gas precursor onto a substrate. The former method provides higher quality materials, but is not useful for making devices. The second method is well established in industrial applications, but yields low performance 2-D films.

The researchers demonstrated, for the first time, why 2-D materials grown by the chemical vapor deposition method perform poorly compared to their theoretical predictions. They report their results in a recent issue of Scientific Reports.

"We grew molybdenum disulfide, a very promising 2-D material, on a sapphire substrate," said Kehao Zhang, a doctoral candidate of Joshua Robinson, associate professor of materials science and engineering, Penn State. "Sapphire itself is aluminum oxide. When the aluminum is the top layer of the substrate, it likes to give up its electrons to the film. This heavy negative doping -- electrons have negative charge -- limits both the intensity and carrier lifetime for photoluminescence, two important properties for all optoelectronic applications, such as photovoltaics and photosensors."

Once they determined that the aluminum was giving up electrons to the film, they used a sapphire substrate that was cut in such a way as to expose the oxygen rather than the aluminum on the surface. This enhanced the photoluminescence intensity and the carrier lifetime by 100 times.

In related work, a second team of researchers led by the same Penn State group used doping engineering that substitutes foreign atoms into the crystal lattice of the film in order to change or improve the properties of the material. They reported their work this week in Advanced Functional Materials.

"People have tried substitution doping before, but because the interaction of the sapphire substrate screened the effects of the doping, they couldn't deconvolute the impact of the doping," said Zhang, who was also the lead author on the second paper.

Using the oxygen-terminated substrate surface from the first paper, the team removed the screening effect from the substrate and doped the molybdenum disulfide 2-D film with rhenium atoms.

"We deconvoluted the rhenium doping effects on the material," said Zhang. "With this substrate we can go as high as 1 atomic percent, the highest doping concentration ever reported. An unexpected benefit is that doping the rhenium into the lattice passivates 25 percent of the sulfur vacancies, and sulfur vacancies are a long-standing problem with 2-D materials."

The doping solves two problems: It makes the material more conductive for applications like transistors and sensors, and at the same time improves the quality of the materials by passivating the defects called sulfur vacancies. The team predicts that higher rhenium doping could completely eliminate the effects of sulfur vacancies.

"The goal of my entire work is to push this material to technologically relevant levels, which means making it industrially applicable," Zhang said.

Credit: 
Penn State