Tech

NASA examines tropical storm Mangga in infrared light

image: On May 22 at 3:10 a.m. EDT (0710 UTC), the MODIS instrument aboard NASA's Aqua satellite gathered temperature information about Tropical Storm Mangga's cloud tops. MODIS found one small area of powerful thunderstorms (red) where temperatures were as cold as or colder than minus 70 degrees Fahrenheit (minus 56.6 degrees Celsius).

Image: 
NASA/NRL

NASA's Aqua satellite used infrared light to provide forecasters with a look at the temperatures of the cloud tops in Tropical Storm Mangga.

Mangga, formerly known as 27S, is moving through the Southern Indian Ocean. Mangga was approaching the Cocos (Keeling) Islands, where a tropical cyclone warning was in effect on May 22.

NASA's Aqua satellite used infrared light to analyze the strength of storms in Mangga. Infrared data provides temperature information, and the strongest thunderstorms that reach high into the atmosphere have the coldest cloud top temperatures. On May 22 at 3:10 a.m. EDT (0710 UTC), the Moderate Resolution Imaging Spectroradiometer, or MODIS, instrument aboard NASA's Aqua satellite gathered temperature information about Tropical Storm Mangga's cloud tops. MODIS found one area of powerful thunderstorms where temperatures were as cold as or colder than minus 70 degrees Fahrenheit (minus 56.6 degrees Celsius). Cloud top temperatures that cold indicate strong storms with the potential to generate heavy rainfall.

Cloud tops of storms surrounding that area were warmer, indicating those storms were weaker and fragmented.

At 5 a.m. EDT (0900 UTC) on May 22, Tropical Storm Mangga was located near latitude 11.1 degrees south and longitude 94.2 degrees east, about 1,324 nautical miles west-northwest of Learmonth, Western Australia. Mangga was moving to the southeast and had maximum sustained winds near 35 knots (40 mph/65 kph).

Mangga is forecast to strengthen to 45 knots (52 mph/83 kph), but become extra-tropical before making landfall in southwestern Australia on Sunday, May 24, between Perth and Learmonth.

Typhoons and hurricanes are the most powerful weather events on Earth. NASA's expertise in space and scientific exploration contributes to essential services provided to the American people by other federal agencies, such as hurricane weather forecasting.

Credit: 
NASA/Goddard Space Flight Center

'Time is vision' after a stroke

A person who has a stroke that causes vision loss is often told there is nothing they can do to improve or regain the vision they have lost.

New research from the University of Rochester, published in the journal Brain, may offer hope to stroke patients in regaining vision.

The Rochester team found that survivors of occipital strokes--strokes that occur in the occipital lobe of the brain and affect the ability to see--may retain some visual capabilities immediately after the stroke, but these abilities diminish and eventually disappear permanently after approximately six months. By capitalizing on this initial preserved vision, early vision training interventions can help stroke patients recover more of their lost vision than if training is administered after six months.

"One of our key findings, which has never been reported before, is that an occipital stroke that damages the visual cortex causes gradual degeneration of visual structures all the way back to the eyes," says Krystel Huxlin, the James V. Aquavella, MD Professor in Ophthalmology at the University of Rochester's Flaum Eye Institute.

The Rochester research team--including Elizabeth Saionz, a PhD candidate in Huxlin's lab and the first author of the paper; Duje Tadin, professor and chair of the Department of Brain and Cognitive Sciences; and Michael Melnick, a postdoctoral associate in Tadin and Huxlin's labs--additionally discovered that early intervention in the form of visual training appears to stop the gradual loss of visual processing that stroke victims may experience.

Vision stroke rehabilitation remains a developing field, and previous studies and trials of experimental therapies have focused on patients with chronic vision loss--that is, patients who are more than six months post-stroke.

"Right now, the 'standard of care' for vision stroke patients is that they don't receive any targeted therapy to restore vision," Saionz says. "They might be offered therapy to help maximize use of their remaining vision or learn how to navigate the world with their new limited vision, but there are no treatments offered that can give them back any of the vision that they lost."

The new study compared chronic patients--those who were more than six months post-stroke--with early subacute patients, who started training within the first three months after their stroke.

The researchers trained both groups of stroke patients using a computer-based device Huxlin developed. The training is like physical therapy for the visual system and involves a set of exercises that stimulates undamaged portions of the visual cortical system to use visual information. With repeated stimulation, these undamaged parts of the brain can learn to more effectively process visual information that is not filtered by the damaged primary visual cortex, partially restoring conscious visual sensations.

The researchers discovered that the subacute patients who underwent such vision training recovered global motion discrimination--the ability to determine the direction of motion in a noisy environment--as well as luminance detection--the ability to detect a spot of light--faster and much more efficiently than the chronic patients.

Overall, the group's findings suggest that individuals may maintain visual abilities early after a stroke, indicating they have preserved some sensory information processing that may temporarily circumvent the permanently damaged regions of the brain. Early visual training may therefore be critical both to prevent vision from degrading and to enhance restoration of any preserved perceptual abilities.

"For the first time, we can now conclusively say that just as for sensorimotor stroke, 'time is vision' after an occipital stroke," Huxlin says.

Credit: 
University of Rochester

Indigenous collaboration and leadership key to managing sea otter population recovery

image: A group photo from the 2014 Coastal Voices workshop.

Image: 
Ilja Herb

A new study highlights the need to engage Indigenous communities in managing sea otter population recovery to improve coexistence between humans and this challenging predator.

The sea otters' recovery along the northwest coast of North America presents a challenge for coastal communities because both otters and humans like to eat shellfish, such as sea urchins, crabs, clams and abalone. Expanding populations of sea otters and their arrival in new areas are heavily impacting First Nations and Tribes that rely on harvesting shellfish.

SFU lead author Jenn Burt says the study looked beyond the challenges to identify solutions going forward. "We documented Indigenous peoples' perspectives, which illuminated key strategies to help improve sea otter management and overall coexistence with sea otters."

Most research focuses on how sea otter recovery greatly reduces shellfish abundance or expands kelp forests, rather than on how Indigenous communities are impacted, or how they are adapting to the returning sea otters' threat to their food security, cultural traditions, and livelihoods.

Recognizing that Indigenous perspectives were largely absent from dialogues about sea otter recovery and management, SFU researchers reached out to initiate the Coastal Voices collaboration.

Coastal Voices is a partnership with Indigenous leaders and knowledge holders representing 19 First Nations and Tribes from Alaska to British Columbia.

Based on information revealed in workshops, interviews, and multiple community surveys, SFU researchers and collaborating Indigenous leaders found that human-otter coexistence can be enabled by strengthening Indigenous governance authority and establishing locally designed, adaptive co-management plans for sea otters.

The study, published this week in People and Nature, also suggests that navigating sea otter recovery can be improved by incorporating Indigenous knowledge into sea otter management plans and by building networks and forums for community discussions about sea otter and marine resource management.

"Our people actively managed a balanced relationship with sea otters for millennia," says co-author and Haida matriarch Kii'iljuus (Barbara Wilson), a recent SFU alumna.

"Our work with Coastal Voices and this study helps show how those rights and knowledge need to be recognized and be part of contemporary sea otter management."

Anne Salomon, a professor in SFU's School of Resource and Environmental Management, co-authored the study and co-led the Coastal Voices research partnership.

"This research reveals that enhancing Indigenous people's ability to coexist with sea otters will require a transformation in the current governance of fisheries and marine spaces in Canada, if we are to navigate towards a system that is more ecologically sustainable and socially just," says Salomon.

Despite challenges, the authors say transformation is possible. They found that adaptive governance and Indigenous co-management of marine mammals exist in other coastal regions in northern Canada and the U.S. They suggest that increasing Indigenous leadership and Canadian government commitments to Reconciliation may provide opportunities for new approaches and more collaborative marine resource management.

Credit: 
Simon Fraser University

Next-generation solar cells pass strict international tests

image: Professor Anita Ho-Baillie with an earlier prototype perovskite solar cell.

Image: 
UNSW

Australian scientists have for the first time produced a new generation of experimental solar energy cells that pass strict International Electrotechnical Commission testing standards for heat and humidity.

The research findings, an important step towards commercial viability of perovskite solar cells, are published today in the journal Science.

Solar energy systems are now widespread in both industry and domestic housing. Most current systems rely on silicon to convert sunlight into useful energy.

However, the energy conversion rate of silicon in solar panels is close to reaching its natural limits. So, scientists have been exploring new materials that can be stacked on top of silicon in order to improve energy conversion rates. One of the most promising materials to date is a metal halide perovskite, which may even outperform silicon on its own.

"Perovskites are a really promising prospect for solar energy systems," said Professor Anita Ho-Baillie, the inaugural John Hooke Chair of Nanoscience at the University of Sydney. "They are very inexpensive, 500 times thinner than silicon and are therefore flexible and ultra-lightweight. They also have tremendous energy enabling properties and high solar conversion rates."

In experimental form, the past 10 years have seen the performance of perovskite cells improve from low levels to being able to convert 25.2 percent of energy from the Sun into electricity, comparable to silicon-cell conversion rates, which took 40 years to achieve.

However, unprotected perovskite cells do not have the durability of silicon-based cells, so they are not yet commercially viable.

"Perovskite cells will need to stack up against the current commercial standards. That's what is so exciting about our research. We have shown that we can drastically improve their thermal stability," Professor Ho-Baillie said.

The scientists did this by suppressing the decomposition of the perovskite cells using a simple, low-cost polymer-glass blanket.

The work was led by Professor Ho-Baillie, who joined the University of Sydney Nano Institute this year. Lead author Dr Lei Shi conducted the experimental work in Ho-Baillie's research group in the School of Photovoltaic and Renewable Energy Engineering at the University of New South Wales, where Professor Ho-Baillie remains an adjunct professor.

Under continual exposure to the Sun and other elements, solar panels experience extremes of heat and humidity. Experiments have shown that under such stress, unprotected perovskite cells become unstable, releasing gas from within their structures.

"Understanding this process, called 'outgassing', is a central part of our work to develop this technology and to improve its durability," Professor Ho-Baillie said.

"I have always been interested in exploring how perovskite solar cells could be incorporated into thermal insulated windows, such as vacuum glazing. So, we need to know the outgassing properties of these materials."

Low-cost solution

For the first time, the research team used gas chromatography-mass spectrometry (GC-MS) to identify the signature volatile products and decomposition pathways of the thermally stressed hybrid perovskites commonly used in high-performance cells. Using this method, they found that a low-cost polymer-glass stack with a pressure-tight seal was effective in suppressing the perovskite 'outgassing', the process that leads to its decomposition.

When put to strict international testing standards, the cells the team was working on outperformed expectations.

"Another exciting outcome of our research is that we are able to stabilise perovskite cells under the harsh International Electrotechnical Commission standard environmental testing conditions. Not only did the cells pass the thermal cycling tests, they exceeded the demanding requirements of damp-heat and humidity-freeze tests as well," Professor Ho-Baillie said.

These tests help determine whether solar cell modules can withstand the effects of outdoor operating conditions by exposing them to repeated temperature cycling between minus 40 and 85 degrees Celsius, as well as exposure to 85 percent relative humidity.

Specifically, the perovskite solar cells survived more than 1,800 hours of the IEC "Damp Heat" test and 75 cycles of the "Humidity Freeze" test, exceeding the requirements of the IEC 61215:2016 standard for the first time.

"We expect this work will contribute to advances for stabilising perovskite solar cells, increasing their commercialisation prospects," Professor Ho-Baillie said.

Credit: 
University of Sydney

New map reveals global scope of groundwater arsenic risk

Up to 220 million people worldwide, approximately 94% of them in Asia, could be at risk of drinking well water containing harmful levels of arsenic, a tasteless, odorless and naturally occurring poison. The global scope of this persistent public health issue is revealed in a new study, in which researchers present the most accurate and detailed global prediction map of groundwater arsenic concentrations to date. It reveals previously unidentified areas of potential arsenic contamination, including parts of Central Asia and broad areas of the Arctic and sub-Arctic.

Trace amounts of arsenic occur in virtually all rocks and sediments, but rarely at concentrations high enough to cause adverse health effects. Nevertheless, arsenic is toxic; at high levels, it causes a wide range of maladies, including neurological disorders and cancer. Because dissolved arsenic can accumulate in aquifers, drinking contaminated groundwater is a major source of exposure. Consequently, the World Health Organization's (WHO) guideline concentration for arsenic in drinking water is 10 micrograms per liter.

While the severe public health risks of arsenic contamination are well recognized, arsenic is generally not included in the standard suite of tested water quality parameters. And due to incomplete and unreliable records and spotty testing, risk assessments are often fraught with uncertainty.

Joel Podgorski and Michael Berg compiled data from 80 groundwater arsenic studies worldwide and used machine learning to model global arsenic risk. The resulting map revealed global groundwater contamination hazards, including in regions with few or no reported measurements. According to the results, the highest-risk regions include areas of Asia and South America.

"Disparities in coverage of regulatory requirements in the U.S. have left over a million rural Americans unknowingly exposed to arsenic with a high proportion of socio-economically and behaviorally vulnerable groups," writes Yan Zheng in a related Perspective.
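The study's mapping approach can be sketched in highly simplified form: a classifier is trained on environmental covariates to predict the probability that groundwater exceeds the WHO guideline of 10 micrograms per liter, then applied to locations with no measurements. The sketch below is purely illustrative: it uses synthetic data, two made-up covariates, and a from-scratch logistic regression, whereas the published work used a random forest over many real geospatial datasets.

```python
import math
import random

# Illustrative sketch only: a from-scratch logistic classifier predicting
# whether a well exceeds the WHO arsenic guideline (10 micrograms/L) from
# two invented environmental covariates. The published map used a random
# forest over real geospatial predictors; nothing here reproduces it.

random.seed(0)

def synth_well():
    """Synthetic (slope, aridity) covariates and an exceedance label."""
    slope = random.random()      # 0 = flat basin, 1 = steep terrain
    aridity = random.random()    # 0 = humid, 1 = arid
    # Made-up rule: flat, arid basins are more likely to exceed 10 ug/L.
    logit = 3.0 * aridity - 4.0 * slope + 0.5
    label = 1 if random.random() < 1 / (1 + math.exp(-logit)) else 0
    return (slope, aridity), label

data = [synth_well() for _ in range(2000)]

# Fit logistic regression by plain batch gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(300):
    gw, gb = [0.0, 0.0], 0.0
    for (s, a), y in data:
        p = 1 / (1 + math.exp(-(w[0] * s + w[1] * a + b)))
        gw[0] += (p - y) * s
        gw[1] += (p - y) * a
        gb += p - y
    n = len(data)
    w[0] -= lr * gw[0] / n
    w[1] -= lr * gw[1] / n
    b -= lr * gb / n

def hazard(slope, aridity):
    """Predicted probability that a well exceeds 10 micrograms/L."""
    return 1 / (1 + math.exp(-(w[0] * slope + w[1] * aridity + b)))

# A flat, arid cell should score as higher-hazard than a steep, humid one.
print(hazard(0.1, 0.9), hazard(0.9, 0.1))
```

The real value of such a model, as in the study, is that once trained it can assign a hazard probability to every grid cell on a map, including regions where no wells have ever been tested.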

Credit: 
American Association for the Advancement of Science (AAAS)

Mental ill health 'substantial health concern' among police, finds international study

Mental health issues among police officers are a "substantial health concern," with around 1 in 4 potentially drinking at hazardous levels and around 1 in 7 meeting the criteria for post-traumatic stress disorder or depression, finds a pooled data analysis of the available international evidence, published online in Occupational & Environmental Medicine.

The high prevalence of mental health issues among the police emphasises the need for effective treatment and monitoring programmes as well as extra cash to match the preventive efforts offered to other high risk groups, conclude the researchers.

Published research suggests that first responders run a higher risk of mental health issues than the general public. But it's not clear how common mental health issues are among police officers, or what the risk factors for these might be.

This is despite the fact that the nature of their work means that the police are uniquely exposed to extreme violence and death while often running the gauntlet of public distrust and disparagement, say the researchers.

To try and plug this knowledge gap, the researchers trawled 16 research databases for relevant studies published between 1980 and October 2019.

They found 67 studies that met their inclusion criteria: involving at least 100 active police professionals and using validated measures to assess specific aspects of mental ill health. Study quality was rated high for 46% of the studies and moderate for 54%.

The studies included a total of 272,463 police officers from 24 countries and covered post-traumatic stress disorder (PTSD), depression, substance misuse, anxiety disorder, and suicidal thoughts (ideation).

Most of the studies came from North America (46%), Europe (28%), and Australia (10%) and primarily featured male officers with an average age of 39, carrying out general duties.

Pooled analysis of the data indicated that the estimated prevalence of mental health issues among police officers was substantial, and more than double the rate reported in several previously published studies.

Around one in four (just under 26%) police officers screened positive for hazardous drinking, while one in seven met the criteria for PTSD (14%) or depression (14.5%).

Around one in 10 met the criteria for anxiety disorder (9.5%) and suicidal thoughts (8.5%), while one in 20 (5%) would be considered to be drinking at harmful levels or to be alcohol dependent.
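Pooled prevalence estimates like those above come from meta-analysis of proportions across studies. The toy sketch below shows the basic mechanics using a simple inverse-variance (fixed-effect) pool; the study counts are invented for illustration, and the published review pooled 67 studies with more sophisticated random-effects methods.

```python
import math

# Toy fixed-effect meta-analysis of proportions. The counts below are
# invented; the actual review pooled 67 studies with random-effects models.
studies = [
    # (officers screening positive, officers surveyed)
    (120, 800),
    (45, 350),
    (260, 1500),
    (30, 250),
]

weight_sum = 0.0
weighted_p = 0.0
for cases, n in studies:
    p = cases / n
    var = p * (1 - p) / n   # binomial variance of a study's proportion
    w = 1 / var             # inverse-variance weight: precise studies count more
    weight_sum += w
    weighted_p += w * p

pooled = weighted_p / weight_sum
se = math.sqrt(1 / weight_sum)          # standard error of the pooled estimate
ci = (pooled - 1.96 * se, pooled + 1.96 * se)

print(f"pooled prevalence = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

A random-effects model, as typically used when study designs vary this widely, additionally estimates between-study heterogeneity and widens the confidence interval accordingly.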

Low levels of peer support, higher levels of job stress, and poor (avoidant) coping strategies were all strong risk factors for mental health issues, the data analysis suggested. Female sex was also a consistent risk factor for poorer mental health.

The researchers acknowledge that the study methods and designs varied widely, and that many of the included studies were observational and relied on subjective symptom reporting.

Nevertheless, the findings prompt them to deduce: "Police officers show a substantial burden of mental health problems, emphasising the need for effective interventions and monitoring programmes."

Otherwise, "psychological difficulties will remain a substantial health concern among police," they conclude.

A major problem, however, is that in the absence of good evidence no one can agree on what these interventions should be, they point out.

"Further research into interventions that address stress and peer support in the police is needed, taking into account risk differences between genders and cultures," they say, adding: "The results support increased funding initiatives for police wellbeing to match preventative efforts currently offered in other high-risk populations."

Credit: 
BMJ Group

Surrey reveals its implantable biosensor that operates without batteries

Researchers from the University of Surrey have revealed their new biodegradable motion sensor - paving the way for implanted nanotechnology that could help future sports professionals better monitor their movements to aid rapid improvements, or help caregivers remotely monitor people living with dementia.

In a paper published by Nano Energy, a team from Surrey's Advanced Technology Institute (ATI), in partnership with Kyung Hee University in South Korea, detail how they developed a nano-biomedical motion sensor which can be paired with AI systems to recognise movements of distinct body parts.

The ATI's technology builds on its previous work around triboelectric nanogenerators (TENG), where researchers used the technology to harness human movements and generate small amounts of electrical energy. Combining the two means self-powered sensors are possible without the need for chemical or wired power sources.

In their new research, the team from the ATI developed a flexible, biodegradable and long-lasting TENG from silk cocoon waste. They used a new alcohol treatment technique, which leads to greater durability for the device, even under harsh or humid environments.

Dr. Bhaskar Dudem, project lead and Research Fellow at the ATI, said: "We are excited to show the world the immense potential of our durable, silk film based nanogenerator. Its ability to work in severe environments while being able to generate electricity and monitor human movements positions our TENG in a class of its own when it comes to the technology."

Professor Ravi Silva, Director of the ATI, said: "We are proud of Dr Dudem's work which is helping the ATI lead the way in developing wearable, flexible, and biocompatible TENGs that efficiently harvest environmental energies. If we are to live in a future where autonomous sensing and detecting of pathogens is important, the ability to create both self-powered and wireless biosensors linked to AI is a significant boost."

Credit: 
University of Surrey

A replaceable, more efficient filter for N95 masks

image: A replaceable nanoporous membrane, illustrated above, attached to an N95 mask filters out particles the size of SARS-CoV-2 (purple circles), allowing only clean air (blue circles) through.

Image: 
ACS Nano 2020, DOI: 10.1021/acsnano.0c03976

Since the outbreak of COVID-19, there's been a worldwide shortage of face masks -- particularly, the N95 ones worn by health care workers. Although these coverings provide the highest level of protection currently available, they have limitations. Now, researchers reporting in ACS Nano have developed a membrane that can be attached to a regular N95 mask and replaced when needed. The filter has a smaller pore size than normal N95 masks, potentially blocking more virus particles.

N95 masks filter about 85% of particles smaller than 300 nm, according to published research. SARS-CoV-2 (the coronavirus that causes COVID-19) is in the size range of 65-125 nm, so some virus particles could slip through these coverings. Also, because of shortages, many health care workers have had to wear the same N95 mask repeatedly, even though they are intended for a single use. To help overcome these problems, Muhammad Mustafa Hussain and colleagues wanted to develop a membrane that more efficiently filters particles the size of SARS-CoV-2 and could be replaced on an N95 mask after every use.

To make the membrane, the researchers first developed a silicon-based, porous template using lithography and chemical etching. They placed the template over a polyimide film and used a process called reactive ion etching to make pores in the membrane, with sizes ranging from 5 to 55 nm. Then they peeled off the membrane, which could be attached to an N95 mask. To ensure that the nanoporous membrane was breathable, the researchers measured the airflow rate through the pores. They found that for pores smaller than 60 nm (in other words, smaller than SARS-CoV-2), the pores needed to be placed a maximum of 330 nm from each other to achieve good breathability. The hydrophobic membrane also cleans itself: droplets slide off it, preventing the pores from becoming clogged with viruses and other particles.
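The trade-off the team measured, between pore size and pore spacing, can be illustrated with simple geometry. Assuming, purely for illustration, circular pores of 60 nm diameter arranged on a square grid at the 330 nm maximum spacing quoted above (the lattice arrangement is this sketch's assumption, not the paper's), the open-area fraction works out to only a few percent:

```python
import math

# Illustrative geometry only: open-area fraction of a membrane with
# circular pores on a square lattice. Pore diameter and pitch echo the
# figures quoted in the article (sub-60 nm pores, up to 330 nm spacing);
# the square-lattice arrangement is an assumption made for this sketch.
pore_diameter_nm = 60.0
pitch_nm = 330.0   # center-to-center spacing between neighboring pores

pore_area = math.pi * (pore_diameter_nm / 2) ** 2   # area of one pore
cell_area = pitch_nm ** 2                           # one lattice cell
open_fraction = pore_area / cell_area

print(f"open area fraction: {open_fraction:.3%}")
```

Because airflow scales with the total number of pores across the membrane, even a small open fraction can remain breathable; the article's point is that spacing pores farther apart than about 330 nm reduced that total enough to hurt breathability in their tests.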

Credit: 
American Chemical Society

Measuring blood damage

image: This diagram depicts the way conductivity will change as blood cells break. The yellow dots represent electrons. The red circles represent blood cells. Viewing the graphic from left to right, one can see that when more blood cells are present, fewer electrons are able to get across. As blood cells break, there are fewer barriers and the blood becomes more conductive, making it easier for electrons to move from one side to the other.

Image: 
Graphic courtesy of Tyler Van Buren

According to the National Kidney Foundation, more than 37 million people in the United States are living with kidney disease.

The kidneys play an important role in the body, from removing waste products to filtering the blood. For people with kidney disease, dialysis can help the body perform these essential functions when the kidneys aren't working at full capacity.

However, red blood cells sometimes rupture when blood is sent through faulty equipment that is supposed to clean the blood, such as a dialysis machine. This is called hemolysis. Hemolysis also can occur during blood work when blood is drawn too quickly through a needle, leading to defective laboratory samples.

There is no reliable indicator that red blood cells are being damaged in a clinical setting until an individual begins showing symptoms, such as fever, weakness, dizziness or confusion.

University of Delaware mechanical engineer Tyler Van Buren and collaborating colleagues at Princeton University have developed a method to monitor blood damage in real-time.

"Our goal was to find a method that could detect red blood cell damage without the need for lab sample testing," said Van Buren, an assistant professor of mechanical engineering with expertise in fluid dynamics.

The researchers recently reported their technique in Scientific Reports, a Nature publication.

Detecting blood cell damage

In the body, red blood cells float in plasma alongside white blood cells and platelets. The plasma is naturally conductive and is efficient at passing an electric charge. Red blood cells are chock-full of hemoglobin, an oxygen-transporting protein, that also is conductive.

This hemoglobin is typically insulated from the body by the cell lining. But as red blood cells rupture, hemoglobin is released into the bloodstream, causing the blood to become more conductive.

"Think of the blood like a river and red blood cells like water balloons in that river," said Van Buren, who joined UD in 2019. "If you have electrons (negatively charged particles) waiting to cross the river, it is more difficult when there are a lot of water balloons present. This is because the rubber is insulating, so the blood will be less conductive. As the water balloons (or blood cells) break, there are fewer barriers and the blood becomes more conductive, making it easier for electrons to move from one side to the other."
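The water-balloon analogy maps onto a classical result for suspensions. Maxwell's mixture model, which is not cited in the article and is used here only as an illustration, predicts that the effective conductivity of a conductive fluid drops as the volume fraction of insulating spheres (intact red cells) rises, and recovers as those spheres rupture:

```python
# Maxwell's mixture model for a conductive fluid (plasma) containing
# perfectly insulating spherical inclusions (intact red cells). This model
# is the editor's illustration, not the method described in the article.
def effective_conductivity(sigma_plasma, cell_volume_fraction):
    """Effective conductivity of a suspension of insulating spheres."""
    phi = cell_volume_fraction
    return sigma_plasma * 2 * (1 - phi) / (2 + phi)

sigma_plasma = 1.5  # S/m; an assumed, textbook-order value for plasma

healthy = effective_conductivity(sigma_plasma, 0.45)    # normal hematocrit
hemolyzed = effective_conductivity(sigma_plasma, 0.20)  # many cells ruptured

print(f"intact cells: {healthy:.2f} S/m, after rupture: {hemolyzed:.2f} S/m")
```

This captures only the barrier-removal effect Van Buren describes; in real hemolysis, the released conductive hemoglobin raises conductivity further still.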

In dialysis, a patient's blood is removed from the body, cleaned, and then recirculated into the body. The researchers developed a simple experiment to see if they could measure the blood's electrical resistance outside of the body.

To test their technique, the researchers circulated healthy blood through the laboratory system and gradually introduced mechanically damaged blood to see if it would change the conductive nature of the fluid in the system.

It did. The researchers saw a direct correlation between the conductivity of the fluid in the system and the amount of damaged blood included in the sample.

While this issue of damaged blood is very rare, the research team's method does introduce one potential way to indirectly monitor blood damage in the body during dialysis. The researchers theorize that if clinicians were able to monitor the resistance of a patient's blood going into a dialysis machine and coming out, and they saw a major change in resistance -- or conductivity -- there is good reason to believe the blood is being damaged.

"We are not doctors, we're mechanical engineers," said Van Buren. "This technique would need a lot more vetting before being applied in a clinical setting."

For example, Van Buren said the method wouldn't necessarily work across patient populations because an individual's blood conductivity is just that, individual.

In the future, Van Buren said it would be interesting to evaluate whether conductivity also could be used in place of lab sampling for applications outside of dialysis. For example, this might be useful in research aimed at understanding how blood cells may be damaged, both inside and outside of the body, and possible methods for prevention.

He also is curious whether this method could be used to evaluate and identify compromised blood samples on-site, saving time and money for hospitals or diagnostic laboratories, while eliminating the need for patients to make multiple trips to have blood drawn if there is a problem.

Credit: 
University of Delaware

New Army 3-D printing study shows promise for predictive maintenance

image: Army researchers study the performance of 3-D-printed metal parts and how they degrade as part of ongoing research in vehicle technology. The CCDC Army Research Laboratory at Aberdeen Proving Ground, Maryland, prints metal parts from powder.

Image: 
Dave McNally

ABERDEEN PROVING GROUND, Md. -- Army researchers have discovered a way to monitor the performance of 3-D printed parts, which tend to have imperfections that affect performance in ways traditionally-machined parts do not.

A new study published recently in the International Journal of Advanced Manufacturing Technology showed that the Army could detect and monitor the wear and tear of 3-D printed maraging steel through sensor measurement. These types of measurements help Soldiers maintain readiness because they help predict when parts will degrade or fail and need replacement.

"3-D printed parts display certain attributes, due to the manufacturing process itself, which, unchecked, may cause these parts to degrade in manners not observed in traditionally-machined parts," said Dr. Jaret C. Riddick, director of the Vehicle Technology Directorate at the U.S. Army's Combat Capabilities Development Command's Army Research Laboratory. "Because of this, it's commonly understood that the use of these parts, in current cases, is meant to be a stop-gap to fill a critical need just as we have seen with 3-D printing during the COVID-19 response."

He said the laboratory's study points to scientific discovery that ensures readiness in increasingly contested environments, where the immediate need for replacement parts places constraints on the time it takes to deliver them from far away. In these cases, Soldiers would opt for a stop-gap to continue the mission rather than having to abort it.

This study was led by a team of researchers from the laboratory, the National Institute of Standards and Technology, CCDC Aviation and Missile Center and Johns Hopkins University, who likened cues from the material's performance to a vehicle odometer reading that signals a need for an oil change.

"The strain or eddy current sensor would supply a measurement and let you know the part needs to be replaced," said Dr. Todd C. Henry, a mechanical engineer at the laboratory who co-authored the study.

Henry wants to develop a sensor-based tool for measuring the unique performance of each 3-D printed part, acknowledging that each one is different.

"If I took a batch of paper clips and started bending them back and forth, they'd break from fatigue damage at different intervals depending on the internal imperfections in the steel," Henry said. "Every real-world material and structure has imperfections that make it unique in terms of performance. So if the batch of paper clips takes 21 to 30 cycles to break, what we would do today is throw the batch away after 15 cycles, to be safe."

He said the imperfections in 3-D printed parts are typically attributed to voids and geometric variance between the computer model and the print. Sensor technology he's developing offers a way to track individual parts, predict failure points and replace them a few cycles before they break.

"In order to create a high-trust situation, you take little risk, such as throwing the paper clip away after 15 cycles even though the lowest lifetime in your test batch was 21. If you take more risk and set the throw-away limit at 22 cycles, the paper clip may break on someone sometime, but you will save money."
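The batch-level rule Henry describes -- retire every part a safe margin of cycles before the weakest tested part failed -- can be sketched in a few lines. This is an illustrative sketch using the hypothetical paper-clip numbers from his example, not code or data from the study:

```python
def retire_at(test_lifetimes, margin):
    """Retire every part in a batch `margin` cycles before the weakest
    tested part failed. `test_lifetimes` are the cycles-to-failure
    observed in a destructive test batch."""
    return min(test_lifetimes) - margin

# Hypothetical batch from Henry's example: paper clips broke at 21-30 cycles.
batch = [21, 24, 27, 30]

# Low-risk rule from the article: throw the whole batch away at 15 cycles,
# a margin of 6 below the weakest observed failure at 21 cycles.
print(retire_at(batch, margin=6))  # 15
```

Raising the limit to 22 cycles, as in Henry's riskier scenario, saves money but exceeds the weakest observed lifetime. Per-part sensor monitoring aims to replace this one-size-fits-all batch threshold with an individual estimate for each part.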

The research team conducted an experimental validation study assessing the real-time fatigue behavior of metallic, additively manufactured maraging steel structures.

Army researchers are applying these findings in new studies of 3-D printed stainless steel parts, using machine-learning techniques instead of sensors to characterize the life of parts, Henry said.

"With 3-D printing, you might not be able to replace a part with the exact same material," he said. "There is a cost and time benefit with 3-D printing that perhaps warrants using it anyway. Imagine a situation where you always chose the strongest material but there was another material that was cheaper and easier to get however you need to prove that this other material can be depended on."

This study is as much about understanding the specific performance of a 3-D printed material as it is about our ability to monitor and detect that performance and the material's degradation, Henry said.

Credit: 
U.S. Army Research Laboratory

Capturing the coordinated dance between electrons and nuclei in a light-excited molecule

image: A new study shows that electrons scattering off pyridine molecules in two different ways, as shown by the striped orange cone and the red coil, could be separated, allowing researchers to simultaneously observe how the molecule's nuclei and electrons respond to flashes of light. The study was done with SLAC's "electron camera," MeV-UED.

Image: 
Greg Stewart/SLAC National Accelerator Laboratory

Using a high-speed "electron camera" at the Department of Energy's SLAC National Accelerator Laboratory, scientists have simultaneously captured the movements of electrons and nuclei in a molecule after it was excited with light. This marks the first time this has been done with ultrafast electron diffraction, which scatters a powerful beam of electrons off materials to pick up tiny molecular motions.

"In this research, we show that with ultrafast electron diffraction, it's possible to follow electronic and nuclear changes while naturally disentangling the two components," says Todd Martinez, a Stanford chemistry professor and Stanford PULSE Institute researcher involved in the experiment. "This is the first time that we've been able to directly see both the detailed positions of the atoms and the electronic information at the same time."

The technique could allow researchers to get a more accurate picture of how molecules behave while measuring aspects of electronic behaviors that are at the heart of quantum chemistry simulations, providing a new foundation for future theoretical and computational methods. The team published their findings today in Science.

Skeletons and glue

In previous research, SLAC's instrument for ultrafast electron diffraction, MeV-UED, allowed researchers to create high-definition "movies" of molecules at a crossroads and structural changes that occur when ring-shaped molecules break open in response to light. But until now, the instrument was not sensitive to electronic changes in molecules.

"In the past, we were able to track atomic motions as they happened," says lead author Jie Yang, a scientist at SLAC's Accelerator Directorate and the Stanford PULSE Institute. "But if you look closer, you'll see that the nuclei and electrons that make up atoms also have specific roles to play. The nuclei make up the skeleton of the molecule while the electrons are the glue that holds the skeleton together."

Freezing ultrafast motions

In these experiments, a team led by researchers from SLAC and Stanford University was studying pyridine, which belongs to a class of ring-shaped molecules that are central to light-driven processes such as UV-induced DNA damage and repair, photosynthesis and solar energy conversion. Because molecules absorb light almost instantaneously, these reactions are extremely fast and difficult to study. Ultra-high-speed cameras like MeV-UED can "freeze" motions occurring within femtoseconds, or millionths of a billionth of a second, to allow researchers to follow changes as they occur.

First, the researchers flashed laser light into a gas of pyridine molecules. Next, they blasted the excited molecules with a short pulse of high-energy electrons, generating snapshots of their rapidly rearranging electrons and atomic nuclei that can be strung together into a stop-motion movie of the light-induced structural changes in the sample.

A clean separation

The team found that elastic scattering signals, produced when electrons diffract off a pyridine molecule without absorbing energy, encoded information about the nuclear behavior of the molecules, while inelastic scattering signals, produced when electrons exchange energy with the molecule, contained information about electronic changes. Electrons from these two types of scattering emerged at different angles, allowing researchers to cleanly separate the two signals and directly observe what the molecule's electrons and nuclei were doing at the same time.

"Both of these observations agree almost precisely with a simulation that is designed to take into account all possible reaction channels," says co-author Xiaolei Zhu, who was a postdoctoral fellow at Stanford at the time of this experiment. "This provides us with an exceptionally clear view of the interplay between electronic and nuclear changes."

Complementary techniques

The scientists believe this method will supplement the range of structural information collected through X-ray diffraction and other techniques at instruments such as SLAC's Linac Coherent Light Source (LCLS) X-ray laser, which is able to measure precise details of the chemical dynamics on the shortest timescales, as recently reported for another light-induced chemical reaction.

"We're seeing that MeV-UED is becoming more and more of a tool that complements other techniques," says co-author and SLAC scientist Thomas Wolf. "The fact that we can get electronic and nuclear structures in the same data set, measured together yet observed separately, will provide new opportunities to combine what we learn with knowledge from other experiments."

'A new way of looking at things'

In the future, this technique could allow scientists to follow ultrafast photochemical processes where the timing of electronic and nuclear changes is crucial to the outcome of the reaction.

"This really opens up a new way of looking at things with ultrafast electron diffraction," says co-author Xijie Wang, director of the MeV-UED instrument. "We're always trying to find out how the electrons and the nuclei actually interact to make these processes so fast. This technique allows us to distinguish which comes first - the change to the electrons or the change in the nuclei. Once you get a complete picture of how these changes play out, you can start to predict and control photochemical reactions."

Credit: 
DOE/SLAC National Accelerator Laboratory

Towable sensor free-falls to measure vertical slices of ocean conditions

The motion of the ocean is often thought of in horizontal terms, for instance in the powerful currents that sweep around the planet, or the waves that ride in and out along a coastline. But there is also plenty of vertical motion, particularly in the open seas, where water from the deep can rise up, bringing nutrients to the upper ocean, while surface waters sink, sending dead organisms, along with oxygen and carbon, to the deep interior.

Oceanographers use instruments to characterize the vertical mixing of the ocean's waters and the biological communities that live there. But these tools are limited in their ability to capture small-scale features, such as the up- and down-welling of water and organisms over a small, kilometer-wide ocean region. Such features are essential for understanding the makeup of marine life that exists in a given volume of the ocean (such as in a fishery), as well as the amount of carbon that the ocean can absorb and sequester away.

Now researchers at MIT and the Woods Hole Oceanographic Institution (WHOI) have engineered a lightweight instrument that measures both physical and biological features of the vertical ocean over small, kilometer-wide patches. The "ocean profiler," named EcoCTD, is about the size of a waist-high model rocket and can be dropped off the back of a moving ship. As it free-falls through the water, its sensors measure physical features, such as temperature and salinity, as well as biological properties, such as the optical scattering of chlorophyll, the green pigment of phytoplankton.

"With EcoCTD, we can see small-scale areas of fast vertical motion, where nutrients could be supplied to the surface, and where chlorophyll is carried downward, which tells you this could also be a carbon pathway. That's something you would otherwise miss with existing technology," says Mara Freilich, a graduate student in MIT's Department of Earth, Atmospheric, and Planetary Sciences and the MIT-WHOI Joint Program in Oceanography/Applied Ocean Sciences and Engineering.

Freilich and her colleagues have published their results in the Journal of Atmospheric and Oceanic Technology. The paper's co-authors are J. Thomas Farrar, Benjamin Hodges, Tom Lanagan, and Amala Mahadevan of WHOI, and Andrew Baron of Dynamic System Analysis, in Nova Scotia. The lead author is Mathieu Dever of WHOI and RBR, a developer of ocean sensors based in Ottawa.

Ocean synergy

Oceanographers use a number of methods to measure the physical properties of the ocean. Some of the more powerful, high-resolution instruments used are known as CTDs, for their ability to measure the ocean's conductivity, temperature, and depth. CTDs are typically bulky, as they contain multiple sensors as well as components that collect water and biological samples. Conventional CTDs require a ship to stop as scientists lower the instrument into the water, sometimes via a crane system. The ship has to stay put as the instrument collects measurements and water samples, and can only get back underway after the instrument is hauled back onboard.

Physical oceanographers who do not study ocean biology, and therefore do not need to collect water samples, can sometimes use "UCTDs" -- underway versions of CTDs, without the bulky water sampling components, that can be towed as a ship is underway. These instruments can sample quickly since they do not require a crane or a ship to stop as they are dropped.

Freilich and her team looked to design a version of a UCTD that could also incorporate biological sensors, all in a small, lightweight, towable package, that would also keep the ship moving on course as it gathered its vertical measurements.

"It seemed there could be straightforward synergy between these existing instruments, to design an instrument that captures physical and biological information, and could do this underway as well," Freilich says.

"Reaching the dark ocean"

The core of the EcoCTD is the RBR Concerto Logger, a sensor that measures the temperature of the water, as well as the conductivity, which is a proxy for the ocean's salinity. The profiler also includes a lead collar that provides enough weight to enable the instrument to free-fall through the water at about 3 meters per second -- a rate that takes the instrument down to about 500 meters below the surface in about two minutes.

"At 500 meters, we're reaching the upper twilight zone," Freilich says. "The euphotic zone is where there's enough light in the ocean for photosynthesis, and that's at about 100 to 200 meters in most places. So we're reaching the dark ocean."

Another sensor, the EcoPuck, distinguishes the EcoCTD from other UCTDs in that it measures the ocean's biological properties. Specifically, it is a small, puck-shaped bio-optical sensor that emits two wavelengths of light -- red and blue. The sensor captures any change in this light as it scatters back and as chlorophyll-containing phytoplankton fluoresce in response. If the red light received resembles a certain wavelength characteristic of chlorophyll, scientists can deduce the presence of phytoplankton at a given depth. Variations in the red and blue light scattered back to the sensor can indicate other matter in the water, such as sediments or dead cells -- a measure of the amount of carbon at various depths.

The EcoCTD includes another sensor not found in other UCTDs -- the Rinko III Do, which measures the oxygen concentration in the water. This gives scientists an estimate of how much oxygen is being taken up by any microbial communities living at a given depth in a parcel of water.

Finally, the entire instrument is encased in an aluminum tube and designed to attach via a long line to a winch at the back of a ship. As the ship moves, a team can drop the instrument overboard and use the winch to pay out the line at a rate such that the instrument drops straight down, even as the ship moves away. After about two minutes, once it has reached a depth of about 500 meters, the team cranks the winch to pull the instrument back up, at a rate that lets it catch up to the ship within about 12 minutes. The crew can then drop the instrument again, this time at some distance from the last drop-off point.

"The nice thing is, by the time we go to the next cast, we're 500 meters away from where we were the first time, so we're exactly where we want to sample next," Freilich says.
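The cast cycle described above can be checked with some back-of-the-envelope arithmetic. The figures below are those quoted in the article (3 meters per second free fall, roughly 500 meters depth, about 12 minutes for recovery); the ship speed is a derived assumption chosen to space consecutive casts about 500 meters apart, not a number stated in the text:

```python
# Timing of one EcoCTD cast, from the figures quoted in the article.
fall_speed = 3.0       # free-fall rate, m/s
depth = 500.0          # target cast depth, m
recovery = 12 * 60.0   # winch haul-back time, s

fall = depth / fall_speed   # ~167 s descent to depth
cycle = fall + recovery     # ~887 s for one full down-and-up cycle

# Assumed ship speed that would put the next cast ~500 m along track.
ship_speed = 500.0 / cycle  # ~0.56 m/s, roughly 1.1 knots

print(round(fall), round(cycle), round(ship_speed, 2))
```

Under these assumptions, a survey ship moving at a little over one knot would arrive at the next 500-meter station just as the instrument is recovered, which is consistent with Freilich's description of back-to-back casts.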

They tested the EcoCTD on two cruises in 2018 and 2019, one in the Mediterranean and the other in the Atlantic, and in both cases were able to collect both physical and biological data at a higher resolution than existing CTDs.

"The EcoCTD is capturing these ocean characteristics at a gold-standard quality with much more convenience and versatility," Freilich says.

The team will further refine the design, and hopes that this high-resolution, easily deployable, and more efficient alternative may be adopted both by scientists monitoring the ocean's small-scale responses to climate change and by fisheries that want to track a region's biological productivity.

Credit: 
Massachusetts Institute of Technology

Low-severity fires enhance long-term carbon retention of peatlands

image: A prescribed burn at Pocosin Lakes National Wildlife Refuge.

Image: 
Curt Richardson, Duke University

DURHAM, N.C. -- High-intensity fires can destroy peat bogs and cause them to emit huge amounts of their stored carbon into the atmosphere as greenhouse gases, but a new Duke University study finds low-severity fires spark the opposite outcome.

The smaller fires help protect the stored carbon and enhance the peatlands' long-term storage of it.

The flash heating of moist peat during less severe surface fires chemically alters the exterior of clumped soil particles and "essentially creates a crust that makes it difficult for microbes to reach the organic matter inside," said Neal Flanagan, visiting assistant professor at the Duke Wetland Center and Duke's Nicholas School of the Environment.

This reaction -- which Flanagan calls "the crème brûlée effect" -- shields the fire-affected peat from decay. Over time, this protective barrier helps slow the rate at which a peatland's stored carbon is released back into the environment as climate-warming carbon dioxide and methane, even during periods of extreme drought.

By documenting this effect on peatland soils from Minnesota to Peru, "this study demonstrates the vital and nuanced, but still largely overlooked, role fire plays in preserving peat across a wide latitudinal gradient, from the hemi-boreal zone to the tropics," said Curtis J. Richardson, director of the Duke Wetland Center.

"This is the first time any study has been able to show that," Richardson said, "and it has important implications for the beneficial use of low-severity fire in managing peatlands, especially at a time of increasing wildfires and droughts."

The researchers published their peer-reviewed findings May 10 in the journal Global Change Biology.

Peatlands are wetlands that cover only 3% of Earth's land but store one-third of the planet's total soil carbon. Left undisturbed, they can lock away carbon in their organic soil for millennia due to natural antimicrobial compounds called phenolics and aromatics that earlier studies by the Duke team have shown can prevent even drier peat from decaying. If a smoldering, high-intensity fire or other major disturbance destroys this natural protection, however, they can quickly turn from carbon sinks to carbon sources.

To conduct the new study, Flanagan and his colleagues at the Duke Wetland Center monitored a U.S. Fish and Wildlife Service prescribed burn of a peatland pocosin, or shrub-covered wetland bog, at Pocosin Lakes National Wildlife Refuge in eastern North Carolina in 2015. Using field sensors, they measured the changing intensity of the fire over its duration and its effects on soil moisture, surface temperatures and plant cover. They also performed chemical analyses of soil organic matter samples collected before and after the fire.

They then replicated the intensity and duration of the North Carolina fire, which briefly reached temperatures of 850 degrees Fahrenheit, in controlled laboratory tests on soil from peatlands in Minnesota, Florida and the Amazon basin of Peru, and analyzed the burned samples using X-ray photoelectron spectroscopy and Fourier transform infrared spectroscopy.

The analysis showed that the low-severity fires increased the degree of carbon condensation and aromatization in the soil samples, particularly those collected from the peatlands' surface. In other words, the researchers saw the "crème brûlée effect" in samples from each of the latitudes.

Long-term laboratory incubations of the burnt samples showed lower cumulative CO2 emissions from the peat over incubation periods of one to three years after the tests.

"Initially, there was some loss of carbon, but long-term you easily offset that because there's also reduced respiration by the microbes that promote decay, so the peat is decomposing at a much slower rate," Flanagan said.

Globally, peatlands contain approximately 560 gigatons of stored carbon. That's the same amount that is stored in all forests and nearly as much as the 597 gigatons found in the atmosphere.

"Improving the way we manage and preserve peatlands is critical given their importance in Earth's carbon budget and the way climate change is altering natural fire regimes worldwide," Richardson said. "This study reminds us that fire is not just a destructive anomaly in peatlands; it can also be a beneficial part of their ecology that has a positive influence on their carbon accretion."

Flanagan and Richardson conducted the study with fellow Duke Wetland Center researchers Hongjun Wang and Scott Winton. Winton also holds appointments at ETH Zurich's Institute of Biogeochemistry and Pollutant Dynamics and the Swiss Federal Institute of Aquatic Science and Technology.

Credit: 
Duke University

CRISPR a tool for conservation, not just gene editing

image: Longfin smelt can be difficult to differentiate from endangered Delta smelt. Here, a longfin smelt is swabbed for genetic identification through a CRISPR tool called SHERLOCK.

Image: 
Alisha Goodbla/UC Davis

The gene-editing technology CRISPR has been used for a variety of agricultural and public health purposes -- from growing disease-resistant crops to, more recently, a diagnostic test for the virus that causes COVID-19.

Now a study involving fish that look nearly identical to the endangered Delta smelt finds that CRISPR can be a conservation and resource management tool, as well. The researchers think its ability to rapidly detect and differentiate among species could revolutionize environmental monitoring.

The study, published in the journal Molecular Ecology Resources, was led by scientists at the University of California, Davis, and the California Department of Water Resources, in collaboration with the Broad Institute of MIT and Harvard.

As a proof of concept, it found that the CRISPR-based detection platform SHERLOCK (Specific High-sensitivity Enzymatic Reporter Unlocking) was able to genetically distinguish threatened fish species from similar-looking nonnative species in nearly real time, with no need to extract DNA.

"CRISPR can do a lot more than edit genomes," said co-author Andrea Schreier, an adjunct assistant professor in the UC Davis animal science department. "It can be used for some really cool ecological applications, and we're just now exploring that."

WHEN GETTING IT WRONG IS A BIG DEAL

The scientists focused on three fish species of management concern in the San Francisco Estuary: the U.S. threatened and California endangered Delta smelt, the California threatened longfin smelt and the nonnative wakasagi. These three species are notoriously difficult to visually identify, particularly in their younger stages.

Hundreds of thousands of Delta smelt once lived in the Sacramento-San Joaquin Delta before the population crashed in the 1980s. Only a few thousand are estimated to remain in the wild.

"When you're trying to identify an endangered species, getting it wrong is a big deal," said lead author Melinda Baerwald, a project scientist at UC Davis at the time the study was conceived and currently an environmental program manager with California Department of Water Resources.

For example, state and federal water pumping projects have to reduce water exports if enough endangered species, like Delta smelt or winter-run chinook salmon, get sucked into the pumps. Rapid identification makes real-time decision making about water operations feasible.

FROM HOURS TO MINUTES

Typically to accurately identify the species, researchers rub a swab over the fish to collect a mucus sample or take a fin clip for a tissue sample. Then they drive or ship it to a lab for a genetic identification test and await the results. Not counting travel time, that can take, at best, about four hours.

SHERLOCK shortens this process from hours to minutes. Researchers can identify the species within about 20 minutes, at remote locations, noninvasively, with no specialized lab equipment. Instead, they use either a handheld fluorescence reader or a flow strip that works much like a pregnancy test -- a band on the strip shows if the target species is present.

"Anyone working anywhere could use this tool to quickly come up with a species identification," Schreier said.

OTHER CRYPTIC CRITTERS

While the three fish species were the only animals tested for this study, the researchers expect the method could be used for other species, though more research is needed to confirm. If so, this sort of onsite, real-time capability may be useful for confirming species at crime scenes, in the animal trade at border crossings, for monitoring poaching, and for other animal and human health applications.

"There are a lot of cryptic species we can't accurately identify with our naked eye," Baerwald said. "Our partners at MIT are really interested in pathogen detection for humans. We're interested in pathogen detection for animals as well as using the tool for other conservation issues."

Credit: 
University of California - Davis

Study unveils many ways carcinogens trigger development of breast cancer

Newton, Mass. (May 21, 2020) - In the most comprehensive review to date of how breast cancer develops, scientists have created a detailed map that describes the many ways in which environmental chemicals can trigger the disease. Using ionizing radiation as a model, the researchers identified key mechanisms within cells that when disrupted cause breast cancer. Because the findings can be generalized to other environmental carcinogens, they could help regulators identify chemicals that increase breast cancer risk.

"We know exposure to toxic chemicals can play an important role in the development of breast cancer," says Ruthann Rudel, an environmental toxicologist at Silent Spring Institute and one of the study's co-authors. "Yet, when regulators try to evaluate whether a chemical is harmful or not, the tests they use do not capture the effects on the breast. This gap in testing means potential breast carcinogens are being given the green light for use in our consumer products."

Breast cancer is the most common invasive cancer in women, with incidence rates highest in North America and Europe and rising globally. Because only 5 to 10 percent of breast cancers are due to high-risk inherited mutations, such as those in the BRCA1 and BRCA2 genes, scientists say a better understanding of how environmental factors contribute to the disease is needed to prevent future breast cancers and lower incidence rates.

Toward that end, researchers at Silent Spring looked at ionizing radiation--an established risk factor for breast cancer. People can be exposed to ionizing radiation from many sources, including X-rays, CT scans and radiation treatment. The effects of radiation on breast cancer have been extensively studied, based in large part on studies of survivors of the atomic bombings in Hiroshima and Nagasaki and women who were exposed to medical radiation as adolescents.

Reporting in the journal Archives of Toxicology, Rudel and co-author Jessica Helm reviewed 467 studies to identify the sequence of biological changes that occur in breast cells and tissue from the time of radiation exposure to the formation of a tumor. They then created a map of these sequential changes, revealing multiple interconnected pathways by which ionizing radiation leads to breast cancer.

The researchers created the map using a framework called an Adverse Outcome Pathway (AOP). AOPs were designed by the Organisation for Economic Co-operation and Development (OECD) as a way to represent how complex diseases develop, and to help regulators, chemical manufacturers, and drug companies predict how chemicals might affect diseases early in the research process.

"It turns out, not surprisingly, that breast cancer is a lot more complex than how it's conveyed in traditional cancer models," says Rudel. In traditional models, ionizing radiation triggers breast cancer solely through DNA damage. The new model by Silent Spring integrates recent findings in cancer biology showing that radiation, in addition to damaging DNA, also increases the production of molecules called reactive oxygen and nitrogen species. These molecules wreak havoc inside cells, causing inflammation, altering DNA, and disrupting other important biological activities.

"This study is important and highlights the need for a holistic consideration of mechanistic evidence when identifying potential carcinogens," says Kathryn Guyton, a senior toxicologist at the International Agency for Research on Cancer. "In reality there are multiple key characteristics of carcinogens. Increasingly, we are appreciating that human carcinogens may exhibit different combinations of these key characteristics."

The Silent Spring team also found that the biological changes that lead to breast cancer are highly influenced by reproductive hormones, such as estrogen and progesterone. Reproductive hormones stimulate the proliferation of cells within the breast, so chemicals that similarly encourage cell proliferation could make the breast more susceptible to tumors. "Critical periods of development, such as during puberty or pregnancy when the breast undergoes important changes, are times when the breast is especially vulnerable," says Rudel.

To address gaps in chemical safety testing, the Silent Spring researchers identified a series of tests regulators could use to find chemicals that disrupt the pathways outlined in their new model. Chemicals that disrupt these pathways would be considered potential breast carcinogens, thereby discouraging their use in products.

"This study is an invaluable contribution to the field and a real wake-up call for regulators," says Linda Birnbaum, former director of the National Institute for Environmental Health Sciences. "By holding on to an oversimplified model of how chemicals cause cancer, regulators have been missing critical information, potentially allowing toxic chemicals to enter our products, our air, and our water."

The AOP project is part of Silent Spring Institute's Safer Chemicals Program, which is developing new cost-effective ways of screening chemicals for their effects on the breast. Knowledge generated by this effort will help government agencies regulate chemicals more effectively and assist companies in developing safer products.

Credit: 
Silent Spring Institute