Tech

Indoor precautions essential to stem airborne COVID-19

image: Tiny virus droplets travel on airflow

Image: 
QUT: Chantal Labbe

World-leading air quality and health expert QUT Professor Lidia Morawska and Professor Junji Cao of the Chinese Academy of Sciences, in an article published this week in Environment International, have called on health bodies to initiate research into the airborne transmission of COVID-19 as it is happening.

"National health bodies responsible for controlling the pandemic are hampered by not acknowledging the research evidence of airborne transmission of viable virus in droplets, which was gathered after the 2003 SARS outbreak," Professor Morawska said.

"Now is the ideal time to conduct research into how viruses can travel on the airflow, because there are many similarities between the coronavirus that caused SARS and the COVID-19 coronavirus and therefore it is highly likely that COVID-19 spreads by air.

"Analysis of the initial pattern of COVID-19 spread in China reveals multiple cases of non-contact transmission, especially in areas outside Wuhan.

"On numerous cruise ships where thousands of people onboard were infected, many of the infections occurred after passengers had to isolate in their cabins even though hand hygiene was implemented.

"Therefore, the ventilation system could have spread the airborne virus between the cabins.

"We know that COVID-19's predecessor, SARS-CoV-1, did spread through the air in the 2003 outbreak. Several studies have retrospectively explained this pathway of transmission in Hong Kong's Prince of Wales Hospital as well as in healthcare facilities in Toronto, Canada.

"A WHO review (2009) of the evidence found viral diseases can be transmitted across distances in indoor environments by aerosol or airborne infection and can result in large clusters of infection in a short period."

Professor Morawska said authorities needed to put public health precautions in place to lower airborne transmission by:

increased ventilation of indoor spaces

use of natural ventilation

avoiding air recirculation

avoiding staying in another person's direct air flow

minimizing the number of people sharing the same environment

providing adequate ventilation in nursing homes, hospitals, shops, offices, schools, restaurants and cruise ships.

Professor Morawska said virus droplets' liquid content started to evaporate immediately after being exhaled, and some became so small that they could travel on air currents rather than fall to the ground as larger droplets do.

"Such small droplets can carry their viral content metres, even tens of metres, away from the infected person."

Professor Morawska said it was difficult to directly detect viruses travelling in the air because it took knowledge of the air flow from an infected person and a long sampling period to collect enough copies of the viruses.

"Air transmission research should be undertaken now and its likelihood as a means of spread should be taken seriously with due precautions taken now.

"We have already lost valuable time by ignoring this method of spread and we should act on the presumption that COVID-19 is spreading on the air."

Credit: 
Queensland University of Technology

What is an individual? Information Theory may provide the answer

image: Coral polyps on Molasses Reef. Florida Keys National Marine Sanctuary

Image: 
Brent Deuel/NOAA Photo Library

It's almost impossible to imagine biology without individuals -- individual organisms, individual cells, and individual genes, for example. But what about a worker ant that never reproduces, and could never survive apart from the colony? Are the trillions of microorganisms in our microbiomes, which vastly outnumber our human cells, part of our individuality?

"Despite the near-universal assumption of individuality in biology, there is little agreement about what individuals are and few rigorous quantitative methods for their identification," write the authors of new work published in the journal Theory in Biosciences. The problem, they note in the paper, is analogous to identifying a figure from its background in a Gestalt psychology drawing. Like the famous image of two faces outlining a vase, an individual life form and its environment create a whole that is greater than the sum of its parts.

One way to solve the puzzle comes from information theory. Instead of focusing on anatomical traits, like cell walls, authors David Krakauer, Nils Bertschinger, Eckehard Olbrich, Jessica Flack, and Nihat Ay look to structured information flows between a system and its environment. "Individuals," they argue, "are best thought of in terms of dynamical processes and not as stationary objects."

The individual as a verb: what processes produce distinct identity? Flack stresses that this lens allows for individuality to be "continuous rather than binary, nested, and possible at any level." An information theory of individuality (or ITI) describes emergent agency at different scales and with distributed communication networks (e.g., both ants and colonies at once).

The authors use a model that suggests three kinds of individual, each corresponding to a different blend of self-regulation and environmental influence. Decomposing information like this gives a gradient: it ranges from environmentally-scaffolded forms like whirlpools, to partially-scaffolded colonial forms like coral reefs and spider webs, to organismal individuals that are sculpted by their environment but strongly self-organizing.

Each is a strategy for propagating information forward through time--meaning, Flack adds, "individuality is about temporal uncertainty reduction." Replication here emerges as just one of many strategies for individuals to order information in their future. To Flack, this "leaves us free to ask what role replication plays in temporal uncertainty reduction through the creation of individuals," a question close to asking why we find life in the first place.

Perhaps the biggest implication of this work is in how it places the observer at the center of evolutionary theory. "Just as in quantum mechanics," Krakauer says, "where the state of a system depends on measurement, the measurements we call natural selection dictate the preferred form of the individual. Who does the measuring? What we find depends on what the observer is capable of seeing."

Credit: 
Santa Fe Institute

Lung-heart super sensor on a chip tinier than a ladybug

image: A square black dot with mammoth abilities to record lung and heart data.

Image: 
Georgia Tech / Ayazi lab

During a stroll, a woman's breathing becomes a slight bit shallower, and a monitor in her clothing alerts her to get a telemedicine check-up. A new study details how a sensor chip smaller than a ladybug records multiple lung and heart signals along with body movements and could enable such a future socially distanced health monitor.

The core mechanism of the chip developed by researchers at the Georgia Institute of Technology involves two finely manufactured layers of silicon, which overlay each other separated by a gap of 270 nanometers - about 0.005 times the width of a human hair. They carry a minute voltage.

Vibrations from bodily motions and sounds flex part of the chip, making the voltage fluctuate and thus creating readable electronic outputs. In human testing, the chip has recorded a variety of signals from the mechanical workings of the lungs and the heart with clarity, signals that often escape meaningful detection by current medical technology.
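The article does not describe the chip's actual readout circuit, but the mechanism it does describe (a biased pair of electrodes whose 270-nanometer gap is modulated by vibration) can be sketched as a simple parallel-plate capacitor. The electrode area and bias voltage below are illustrative assumptions, not Georgia Tech's design:

```python
# Illustrative sketch of the transduction principle only: a parallel-plate
# capacitor whose 270 nm electrode gap is modulated by vibration. The
# electrode area and bias voltage are assumptions, not the chip's real design.
EPS0 = 8.854e-12          # vacuum permittivity, F/m
AREA = (2e-3) ** 2        # assume the full 2 mm x 2 mm chip as electrode area, m^2
GAP = 270e-9              # nominal electrode gap from the article, m
BIAS = 1.0                # assumed bias voltage, V

def capacitance(gap_m: float) -> float:
    """Parallel-plate capacitance for a given electrode gap."""
    return EPS0 * AREA / gap_m

def signal_charge(delta_gap_m: float) -> float:
    """Charge change induced by a small gap deflection at fixed bias."""
    return BIAS * (capacitance(GAP + delta_gap_m) - capacitance(GAP))

nominal = capacitance(GAP)   # about 131 pF at the nominal 270 nm gap
dq = signal_charge(1e-9)     # even a 1 nm deflection shifts a measurable charge
```

Because capacitance scales inversely with the gap, the nanometer-scale deflections caused by heart sounds produce proportionally large, readable charge changes, which is one plausible reason such a thin gap matters.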

"Right now, medicine looks to EKGs (electrocardiograms) for information on the heart, but EKGs only measure electrical impulses. The heart is a mechanical system with muscles pumping and valves opening and shutting, and it sends out a signature of sounds and motions, which an EKG does not detect. EKGs also say nothing about lung function," said Farrokh Ayazi, Ken Byers Professor in Georgia Tech's School of Electrical and Computer Engineering.

Stethoscope-accelerometer combo

The chip, which acts as an advanced electronic stethoscope and accelerometer in one, is aptly called an accelerometer contact microphone. It detects vibrations that enter the chip from inside the body while keeping out distracting noise from outside the body's core, such as airborne sounds.

"If it rubs on my skin or shirt, it doesn't hear the friction, but the device is very sensitive to sounds coming at it from inside the body, so it picks up useful vibrations even through clothing," Ayazi said.

The detection bandwidth is enormous - from broad, sweeping motions to inaudibly high-pitched tones. Thus, the sensor chip records all at once fine details of the heartbeat, pulse waves traversing the body's tissues, respiration rates, and lung sounds. It even tracks the wearer's physical activities such as walking.

The signals are recorded in sync, potentially offering the big picture of a patient's heart and lung health. For the study, the researchers successfully recorded a "gallop," a faint third sound after the "lub-dub" of the heartbeat. Gallops are normally elusive clues of heart failure.

The researchers published their results in the journal npj Digital Medicine on February 12, 2020. The research was funded by the Georgia Research Alliance, the Defense Advanced Research Projects Agency (DARPA), the National Science Foundation, and the National Institutes of Health. Study coauthor Divya Gupta, M.D., a cardiologist at Emory University, collaborated in testing the chip on human participants.

Hermetically sealed vacuum

Medical research has tried to make better use of the body's mechanical signals for decades but recording some - like waves traversing multiple tissues - has proven inconsistent, while others - like gallops - have relied upon clinician skills influenced by human error. The new chip produces high-resolution, quantified data that future research could match to pathologies in order to identify them.

"We are working already to collect significantly more data matched with pathologies. We envision algorithms in the future that may enable a broad array of clinical readings," Ayazi said.

Though the chip's main engineering principle is simple, making it work and then making it manufacturable took Ayazi's lab ten years, mainly because of the Lilliputian scale of the gap between the silicon layers, that is, the electrodes. If the 2-millimeter by 2-millimeter sensor chip were expanded to the size of a football field, that air gap would be about an inch wide.

"That very thin gap separating the two electrodes cannot have any contact, not even by forces in the air in between the layers, so the whole sensor is hermetically sealed inside a vacuum cavity," Ayazi said. "This makes for that ultralow signal noise and breadth of bandwidth that are unique."

Detects through clothing

The researchers used a manufacturing process developed in Ayazi's lab called the HARPSS+ platform (High Aspect Ratio Poly and Single Crystalline Silicon) for mass production, running off hand-sized sheets that were then cut into the tiny sensor chips. HARPSS+ is the first reported mass manufacturing process that achieves such consistently thin gaps, and it has enabled high-throughput manufacturing of many such advanced MEMS, or microelectromechanical systems.

The experimental device is currently battery-powered and uses a second chip called a signal-conditioning circuit to translate the sensor chip's signals into patterned read-outs.

Three sensors or more could be inserted into a chest band that would triangulate health signals to locate their sources. Someday a device may pinpoint an emerging heart valve flaw by turbulence it produces in the bloodstream or identify a cancerous lesion by faint crackling sounds in a lung.

Credit: 
Georgia Institute of Technology

Could shrinking a key component help make autonomous cars affordable?

Engineers and business leaders have been working on autonomous cars for years, but there's one big obstacle to making them cheap enough to become commonplace: They've needed a way to cut the cost of lidar, the technology that enables robotic navigation systems to spot and avoid pedestrians and other hazards along the roadway by bouncing light waves off these potential obstacles.

Today's lidars use complex mechanical parts to send the flashlight-sized infrared lasers spinning around like the old-fashioned, bubblegum lights atop police cars -- at a cost of $8,000 to $30,000.

But now a team led by electrical engineer Jelena Vuckovic is working on shrinking the mechanical and electronic components in a rooftop lidar down to a single silicon chip that she thinks could be mass produced for as little as a few hundred dollars.

The project grows out of years of research by Vuckovic's lab to find a practical way to take advantage of a simple fact: Much like sunlight shines through glass, silicon is transparent to the infrared laser light used by lidar (short for light detection and ranging).

In a study published in Nature Photonics, the researchers describe how they structured the silicon in a way that used its infrared transparency to control, focus and harness the power of photons, the quirky particles that constitute light beams.

The team used a process called inverse design that Vuckovic's lab has pioneered over the past decade. Inverse design relies on a powerful algorithm that drafts a blueprint for the actual photonic circuits that perform specific functions -- in this case, shooting a laser beam out ahead of a car to locate objects in the road and routing the reflected light back to a detector. Based on the delay between when the light pulse is sent forward and when the beam reflects back to the detector, lidars measure the distance between car and objects.
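The delay-to-distance relationship described above is simple to sketch: the pulse covers the out-and-back path at the speed of light, so the one-way range is half the round trip.

```python
# Time-of-flight ranging, the principle every lidar relies on.
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_delay(round_trip_s: float) -> float:
    """Distance to a target from the round-trip time of a lidar pulse.

    The pulse travels out and back, so the one-way range is half
    the distance light covers in the measured delay.
    """
    return C * round_trip_s / 2.0

# A target 30 m ahead returns the pulse after roughly 200 nanoseconds:
delay = 2 * 30.0 / C
assert abs(range_from_delay(delay) - 30.0) < 1e-9
```

The nanosecond-scale delays involved are one reason the timing electronics, not just the beam steering, matter for a chip-scale design.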

It took Vuckovic's team two years to create the circuit layout for the lidar-on-a-chip prototype they built in the Stanford nanofabrication facility. Postdoctoral scholar Ki Youl Yang and PhD student Jinhie Skarda played key roles in that process, with crucial theoretical insights from City University of New York physicist Andrea Alù and CUNY postdoctoral scholar Michele Cotrufo.

Building this range-finding mechanism on a chip is just the first -- though essential -- step toward creating inexpensive lidars. The researchers are now working on the next milestone, ensuring that the laser beam can sweep in a circle without using expensive mechanical parts. Vuckovic estimates her lab is about three years away from building a prototype that would be ready for a road test.

"We are on a trajectory to build a lidar-on-a-chip that is cheap enough to help create a mass market for autonomous cars," Vuckovic said.

Credit: 
Stanford University School of Engineering

Skin that computes

As our body's largest and most prominent organ, the skin also provides one of our most fundamental connections to the world around us. From the moment we're born, it is intimately involved in every physical interaction we have.

Though scientists have studied the sense of touch, or haptics, for more than a century, many aspects of how it works remain a mystery.

"The sense of touch is not fully understood, even though it is at the heart of our ability to interact with the world," said UC Santa Barbara haptics researcher Yon Visell. "Anything we do with our hands -- picking up a glass, signing our name or finding keys in our bag -- none of that is possible without the sense of touch. Yet we don't fully understand the nature of the sensations captured by the skin or how they are processed in order to enable perception and action."

We have better models for how our other senses, such as vision and hearing, work, but our understanding of how the sense of touch works is much less complete, he added.

To help fill that gap, Visell and his research team, including Yitian Shao and collaborator Vincent Hayward at the Sorbonne, have been studying the physics of touch sensation -- how touching an object gives rise to signals in the skin that shape what we feel. In a study published in the journal Science Advances, the group reveals how the intrinsic elasticity of the skin aids tactile sensing. Remarkably, they show that far from being a simple sensing material, the skin can also aid the processing of tactile information.

To understand this significant but little-known aspect of touch, Visell thinks it is helpful to think about how the eye, our visual organ, processes optical information.

"Human vision relies on the optics of the eye to focus light into an image on the retina," he said. "The retina contains light-sensitive receptors that translate this image into information that our brain uses to decompose and interpret what we're looking at."

An analogous process unfolds when we touch a surface with our skin, Visell continued. Similar to the structures such as the cornea and iris that capture and focus light onto the retina, the skin's elasticity distributes tactile signals to sensory receptors throughout the skin.

Building on previous work, which used an array of tiny accelerometers worn on the hand to sense and catalog the spatial patterns of vibrations generated by actions such as tapping, sliding or grasping, the researchers here employed a similar approach to capture spatial patterns of vibration that are generated as the hand feels the environment.

"We used a custom device consisting of 30 three-axis sensors gently bonded to the skin," explained lead author Shao. "And then we asked each participant in our experiments to perform many different touch interactions with their hands." The research team collected a dataset of nearly 5,000 such interactions and analyzed that data to interpret how the touch-produced vibration patterns, transmitted throughout the hand by the elastic coupling within the skin itself, shaped the information content of the tactile signals.

The team then analyzed these patterns in order to clarify how the transmission of vibrations in the hand shaped information in the tactile signals. "We used a mathematical model in which high-dimensional signals felt throughout the hand were represented as combinations of a small number of primitive patterns," Shao explained. The primitive patterns provided a compact lexicon, or dictionary, that compressed the size of the information in the signals, enabling them to be encoded more efficiently.
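The article does not name the decomposition the authors used, but the idea of representing high-dimensional signals as combinations of a small number of primitive patterns can be illustrated with a stand-in technique, a truncated SVD on synthetic data. The sensor counts echo the study (30 three-axis sensors, so 90 channels) and the choice of 12 primitives echoes "a dozen or fewer"; everything else is invented for illustration:

```python
import numpy as np

# Illustrative sketch, not the authors' actual pipeline: express many
# 90-dimensional vibration recordings as mixtures of 12 primitive patterns,
# extracted here with a truncated SVD.
rng = np.random.default_rng(0)
n_sensors, n_recordings, n_primitives = 90, 500, 12

# Synthetic data: recordings that really are mixtures of 12 hidden patterns,
# plus a little measurement noise.
true_patterns = rng.standard_normal((n_sensors, n_primitives))
weights = rng.standard_normal((n_primitives, n_recordings))
signals = true_patterns @ weights + 0.01 * rng.standard_normal((n_sensors, n_recordings))

# Truncated SVD: keep the 12 strongest components as the "lexicon".
U, s, Vt = np.linalg.svd(signals, full_matrices=False)
lexicon = U[:, :n_primitives]                          # primitive patterns
codes = np.diag(s[:n_primitives]) @ Vt[:n_primitives]  # compact per-recording codes

reconstruction = lexicon @ codes
error = np.linalg.norm(signals - reconstruction) / np.linalg.norm(signals)
# The 12-number code reconstructs each 90-channel recording almost exactly,
# which is the sense in which the lexicon compresses the tactile signals.
```

The compression works only because the recordings share underlying structure; the striking claim of the study is that the hand's own elasticity imposes exactly that kind of shared structure on real tactile signals.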

This analysis generated a dozen or fewer primitive wave patterns -- vibrations of the skin throughout the hand that could be used to capture information in the tactile signals felt by the hand. The striking feature of these primitive vibration patterns, Visell said, is that they automatically reflected the structure of the hand and the physics of wave transmission in the skin.

"Elasticity plays this very basic function in the skin of engaging thousands of sensory receptors for touch in the skin, even when contact occurs at a small skin area," he explained. "This allows us to use far more sensory resources than would otherwise be available to interpret what it is that we're touching." The remarkable finding of their research is that this process also makes it possible to more efficiently capture information in the tactile signals, Visell said. Information processing of this kind is normally considered to be performed by the brain, rather than the skin.

The role played by mechanical transmission in the skin is in some respects similar to the role of the mechanics of the inner ear in hearing, Visell said. In 1961, von Bekesy received the Nobel Prize for his work showing how the mechanics of the inner ear facilitate auditory processing. By spreading sounds with different frequency content to different sensory receptors in the ear, they aid the encoding of sounds by the auditory system. The team's work suggests that similar processes may underlie the sense of touch.

These findings, according to the researchers, not only contribute to our understanding of the brain, but may also suggest new approaches for the engineering of future prosthetic limbs for amputees that might be endowed with skin-like elastic materials. Similar methods also could one day be used to improve tactile sensing by next-generation robots.

Credit: 
University of California - Santa Barbara

Acoustic growth factor patterning

image: Journal brings together scientific and medical experts in the fields of biomedical engineering, material science, molecular and cellular biology, and genetic engineering.

Image: 
Mary Ann Liebert Inc., publishers

New Rochelle, NY, April 15, 2020--For optimally engineered tissues, it is important that biological cues are delivered with appropriate timing and to specific locations. To aid in this endeavor, researchers have combined acoustic droplet ejection (ADE) technology with 3D printing to establish stringently controlled growth factor patterning in porous constructs. Their work is reported in Tissue Engineering, a peer-reviewed journal from Mary Ann Liebert, Inc., publishers. The article is available to read for free on the Tissue Engineering website through May 14, 2020.

In "Acoustic Patterning of Growth Factor for 3D Tissue Engineering," Yunzhi Peter Yang, PhD, Stanford University, and coauthors report the establishment of this new combination method. The authors started with biodegradable polycaprolactone (PCL) constructs, which were terminated with succinimide as test material or with sodium hydroxide or fibrin as controls, and then used ADE to deposit bone morphogenetic protein 2 (BMP-2) in precise locations throughout the lattice. C2C12 mouse myoblast cells were incubated with the constructs, and the osteogenic differentiation of these cells was monitored by alkaline phosphatase (AP) staining. The PCL-succinimide samples showed AP staining mirroring the patterned BMP-2 spots, thus demonstrating the potential of ADE growth factor patterning in tissue engineering.

"This novel combination of 3D printing and ADE technologies addresses the crucial need for high spatial resolution of growth factor presentation in tissue engineered constructs," says Tissue Engineering Co-Editor-in-Chief Antonios G. Mikos, PhD, Louis Calder Professor at Rice University, Houston, TX. "While the use of this technology demonstrated great success in osteogenic differentiation, these techniques will also be tremendously beneficial to many other tissue types as well."

Credit: 
Mary Ann Liebert, Inc./Genetic Engineering News

Goals, opportunities, guides for advancing soft tissue and soft materials research

image: Christopher Barney, left, and Prof. Al Crosby perform a cavitation experiment in Crosby's materials science lab at UMass Amherst

Image: 
UMass Amherst

AMHERST, Mass. - A type of damage in soft materials and tissue called cavitation is one of the least-studied phenomena in physics, materials science and biology, say expert observers. But strong evidence suggesting that cavitation occurs in the brain during sudden impact leading to traumatic brain injury (TBI) has accelerated interest recently, say materials scientist Alfred Crosby at the University of Massachusetts Amherst and his team.

Crosby is the senior author of a new "Perspectives" paper this week in Proceedings of the National Academy of Sciences. The researchers intend it to spark fresh discussion and drive collaboration among new communities of biologists, chemists, materials scientists, physicists and others to advance knowledge. They define high-priority goals and point out new opportunities in the field of how matter deforms and flows with cavitation.

Crosby says, "We're breaking down barriers that separate different scientific fields to spur progress in understanding cavitation - how it causes difficult-to-diagnose injuries or unseen failure in soft materials."

He and Ph.D. students Christopher Barney and Carey Dougan, co-first authors of the paper, worked with chemical engineer Shelly Peyton, mechanical engineer Jae-Hwang Lee and polymer scientist Greg Tew at UMass Amherst. Others on the "CAVITATE" team are chemical engineer Rob Riggleman at the University of Pennsylvania and mechanical engineer Shengqiang Cai at the University of California, San Diego. Support is from a $2.6 million grant from the U.S. Office of Naval Research.

"While the world of cavitation seems to be historically the realm of engineers and physicists, there are growing opportunities for synthetic chemistry to contribute to the field," the authors state. "The chemistry community will significantly aid both the mechanics and biology communities in understanding the physical principles of cavitation as well as using them to advantage in chemical reactions."

Studied mainly in fluids for many years, cavitation is the creation and collapse of bubbles in liquids, Crosby explains. When bubbles collapse they force liquid into a smaller area, causing a pressure wave and increased temperature, which lead to damage. In a pump, cavitation can erode metal parts over time, for example. Cavitation inside artificial heart valves can damage not only the parts but the blood, he says. Microcavitation in the brain as a result of high-impact blows or being near an explosion are factors in TBI.

Crosby says the team's perspective paper explores not only how cavitation damage can be prevented but also how cavitation can be used as a unique tool for understanding soft tissues. For example, new methods use cavitation to study how properties like strength evolve in tissues. Co-first author Barney says the researchers hope to spur new research and development in medicine, chemistry, biology and mechanics, and to open up new applications.

Crosby invented a new experimental tool called cavitation rheology for measuring the local mechanical properties of soft matter. He says, "We hope this will lead to advances in medical devices for diagnosing disease, novel devices for protective gear and new sustainable approaches for cleaning materials." 

Co-first author Dougan adds, "While cavitation is often thought of as something to be avoided, we aim to use it to benefit medicine and the development of new treatments." For example, cavitation rheology can be used to measure the strength of interfaces within the brain, which is difficult to achieve with any other method, she notes. Specifically for TBI, the authors outline techniques for biologists to establish cavitation rheology as a tool for characterizing mechanical responses of soft biological tissues.

Credit: 
University of Massachusetts Amherst

New boron material of high hardness created by plasma chemical vapor deposition

image: This is Yogesh Vohra.

Image: 
UAB

BIRMINGHAM, Ala. - Yogesh Vohra, Ph.D., uses microwave-plasma chemical vapor deposition to create thin crystal films of never-before-seen materials. This effort seeks materials that approach a diamond in hardness and are able to survive extreme pressure, temperature and corrosive environments. The search for new materials is motivated by the desire to overcome limitations of diamond, which tends to oxidize at temperatures higher than 600 degrees Celsius and also chemically reacts with ferrous metals.

Vohra, a professor and university scholar in the University of Alabama at Birmingham Department of Physics, now reports, in the journal Scientific Reports, synthesis of a novel boron-rich boron-carbide material. This film, grown on a 1-inch wafer of silicon, is chemically stable, has 37 percent the hardness of cubic diamond and acts as an insulator.

Equally important, experimental testing of the new material -- including X-ray diffraction and measurement of the material's hardness and Young's modulus -- agrees closely with predicted values computed by the UAB team of researchers led by Cheng-Chien Chen, Ph.D., assistant professor of physics at UAB. The predicted values come from first-principles analysis, which uses supercomputer-driven density functional theory calculations of positively charged nuclei and negatively charged electrons. Thus, Vohra, Chen and colleagues have both made a novel boron-carbon compound and have shown the predictive power of first principles analysis to foretell the properties of these materials.

The new material has the chemical formula B50C2, which means 50 atoms of boron and two atoms of carbon in each subunit of the crystal structure. The crucial issue is where the two carbon atoms are placed in each crystal subunit; insertion of the carbons at other sites leads to materials that are unstable and metallic. The precise placement of carbons is achieved by varying growth conditions.

The current B50C2 material was grown in a microwave plasma chemical vapor deposition system using hydrogen as the carrier gas and diborane -- 90 percent hydrogen gas, 10 percent B2H6 and parts per million carbon -- as the reactive gas. Samples were grown at a low pressure equivalent to the atmospheric pressure 15 miles above Earth. The substrate temperature was about 750 degrees Celsius.

"Boron-rich boron-carbide materials synthesis by chemical vapor deposition methods continues to be relatively unexplored and a challenging endeavor," Vohra said. "The challenge is to find the correct set of conditions that are favorable for growth of the desired phase."

"Our present studies provide validation of the density functional theory in predicting stable crystal structure and providing a metastable synthesis pathway for boron-rich boron-carbide material for applications under extreme conditions of pressure, temperature and corrosive environments."

Credit: 
University of Alabama at Birmingham

NASA observes rainfall from tornado-spawning storms in the southern US

video: This animation shows rainfall estimates for the region from April 11 to 13, derived from NASA's Integrated Multi-satellite Retrievals for GPM (IMERG) data product, along with NOAA tornado reports (red triangles). The animation showed several storms that dropped more than 0.6 inches/16 mm of rainfall (light green) per hour.

Image: 
Jason West (KBR / NASA GSFC)

For two days in mid-April, severe storms raced through the southern U.S. and NASA created an animation using satellite data to show the movement and strength of those storms.

From Sunday, April 12 into Monday, April 13, 2020, a series of powerful thunderstorms developed across the southern U.S., bringing heavy rainfall and spawning several destructive tornadoes.

At NASA's Goddard Space Flight Center in Greenbelt, Md., an animation was created that shows rainfall estimates for the region from April 11 to 13, derived from NASA's Integrated Multi-satellite Retrievals for GPM (IMERG) data product. GPM, the Global Precipitation Measurement mission, is a constellation of satellites that provides the data for NASA's IMERG. The animation showed several storms that dropped more than 0.6 inches (16 mm) of rainfall per hour.

On April 12, two sets of storms with those rainfall rates almost blanketed the state of Arkansas and then moved into the Tennessee Valley. Storms also generated tornadoes as they moved through Alabama, Mississippi, northern Georgia, and the Carolinas. The tornadoes in the animation were confirmed with reports from the National Oceanic and Atmospheric Administration or NOAA.

What is IMERG?

The near-realtime rain estimate comes from NASA's Integrated Multi-satellitE Retrievals for GPM (IMERG) algorithm, which combines observations from a fleet of satellites in near-realtime to provide near-global estimates of precipitation every 30 minutes. By combining NASA precipitation estimates with other data sources, we can gain a greater understanding of the major storms that affect our planet.
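The "morphing" step described below can be illustrated with a toy sketch, not the IMERG algorithm itself: between two satellite overpasses, advect each observed rain field along an assumed steering wind and blend the two estimates in time. The grid, wind shift, and blending scheme here are all simplifying assumptions:

```python
import numpy as np

# Toy illustration of morphing: estimate the rain field between two satellite
# overpasses by advecting each observation along the steering wind and
# blending them in time. (Real IMERG is far more sophisticated.)
def morph(field_t0, field_t1, shift_cells, frac):
    """Estimate the rain field a fraction `frac` of the way from t0 to t1.

    field_t0, field_t1 : 2-D rain-rate grids observed at the two overpasses
    shift_cells        : grid cells the wind moves rain over the full interval
    frac               : 0.0 at the first overpass, 1.0 at the second
    """
    fwd = np.roll(field_t0, int(round(shift_cells * frac)), axis=1)
    back = np.roll(field_t1, -int(round(shift_cells * (1 - frac))), axis=1)
    return (1 - frac) * fwd + frac * back

t0 = np.zeros((4, 12)); t0[:, 2] = 16.0   # rain band at column 2 (mm/h)
t1 = np.roll(t0, 6, axis=1)               # same band, 6 cells downwind later
halfway = morph(t0, t1, shift_cells=6, frac=0.5)
# The morphed estimate places the band midway (column 5) instead of
# smearing rain across both observed positions.
```

Simple time interpolation without the advection step would incorrectly show weak rain at both the old and new positions; moving the field with the wind is what lets the estimate track a storm between overpasses.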

IMERG fills in the "blanks" between weather observation stations. IMERG satellite-based rain estimates can be compared to those from a National Weather Service ground radar. Such good detection of large rain features in real time would be impossible if the IMERG algorithm merely reported the precipitation observed by the periodic overflights of various agencies' satellites. Instead, what the IMERG algorithm does is "morph" high-quality satellite observations along the direction of the steering winds to deliver information about rain at times and places where such satellite overflights did not occur. Information morphing is particularly important over the majority of the world's surface that lacks ground-radar coverage.

For more information about NASA's Precipitation Measurement missions, visit: https://gpm.nasa.gov

Credit: 
NASA/Goddard Space Flight Center

Questionnaire survey identifies potential separation-related problems in cats

image: Domestic cat at home, looking outside

Image: 
Felipe Santos

The first questionnaire survey to identify possible separation-related problems in cats found 13.5 percent of all sampled cats displayed potential issues during their owner's absence, according to a study published April 15, 2020 in the open-access journal PLOS ONE by Daiana de Souza Machado, from the Universidade Federal de Juiz de Fora, Brazil, and colleagues.

Though many studies have been conducted on owner separation problems in dogs, little work has been done to assess potential separation-related problems in cats. Despite the common belief that cats are happy being left alone for long periods of time, recent studies in cats and their owners suggest that pet cats are social and develop bonds with their owners.

In order to assess separation-related problems in cats, de Souza Machado and colleagues developed a questionnaire for use with cat owners. Based on surveys used in similar studies with dogs, the questionnaire asked owners to provide basic information on each cat; describe whether their cat displayed certain behaviors when the owner was absent; and describe themselves, their interactions with their cats, and the cats' living environment. The questionnaire was given to 130 owners of adult cats living in the city of Juiz de Fora in Minas Gerais, Brazil, for a total of 223 completed questionnaires (one per cat).

After assessing and categorizing responses for each category, the authors statistically analyzed their results. The data showed 13.5 percent of the sampled cats (30 out of 223) met at least one of the criteria for separation-related problems, with destructive behavior the most frequently reported (present in 20 of the 30 cats). The other behaviors or mental states identified were: excessive vocalization (19 out of 30 cats), inappropriate urination (18 cats), depression-apathy (16 cats), aggressiveness (11 cats), agitation-anxiety (11 cats) and inappropriate defecation (7 cats). The data also showed these cats were associated with households with no female residents or with at least two female residents, and/or households with owners aged 18 to 35 years, as well as with not having access to toys (P=0.04) and/or having no other animal in the house (P=0.04).
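The headline proportion is easy to verify from the counts reported above; the short check below uses only figures from the release.

```python
# Reproducing the headline figures from the counts reported above.
flagged, total = 30, 223
print(round(100 * flagged / total, 1))    # 13.5 -- percent of sampled cats

# Behaviors among the 30 flagged cats; counts sum past 30 because a
# single cat can meet several of the criteria.
behaviors = {
    "destructive behavior": 20, "excessive vocalization": 19,
    "inappropriate urination": 18, "depression-apathy": 16,
    "aggressiveness": 11, "agitation-anxiety": 11,
    "inappropriate defecation": 7,
}
print(max(behaviors, key=behaviors.get))  # destructive behavior
```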

This questionnaire still requires further validation based on direct observation of cat behavior. It's also limited by a reliance on owners being able to accurately interpret and report their cats' actions in their absence (for instance, scratching on surfaces is normal in cats, though some owners may consider it destructive).

Although there's more work to be done elucidating the relationship between humans and pet cats, this questionnaire can act as a starting point for future research, in addition to indicating certain environmental factors (like toys) that could help cats with separation issues.

The authors add: "This study provides information about behavioral signs consistent with separation-related problems (SRP) in a sampled population of domestic cats, as well as about the management practices used by their owners. The questionnaire identified that about 13% of cats may have signs consistent with SRP according to their owners' reports, and therefore, it could be a promising tool for future research investigating SRP in cats."

Credit: 
PLOS

Breathing heavy wildfire smoke may increase risk of out-of-hospital cardiac arrest

DALLAS, April 15, 2020 -- Exposure to heavy smoke during recent California wildfires raised the risk of out-of-hospital cardiac arrests up to 70%, according to new research published today in the Journal of the American Heart Association, the open access journal of the American Heart Association.

Cardiac arrest occurs when the heart abruptly stops beating properly and can no longer pump blood to vital organs throughout the body. While often referred to interchangeably, cardiac arrest is not the same as a heart attack. A heart attack is when blood flow to the heart is blocked, and sudden cardiac arrest is when the heart malfunctions and suddenly stops beating unexpectedly. A heart attack is a "circulation" problem and sudden cardiac arrest is an "electrical" problem. Out-of-hospital cardiac arrests (OHCAs) are especially dangerous because they can lead to death within minutes if no one performs CPR or uses a defibrillator to shock the heart back into a normal rhythm.

The natural cycle of large-scale wildfires is accelerating and exposing both rural and urban communities to wildfire smoke, according to the study. While adverse respiratory effects associated with wildfire smoke are well established, cardiovascular effects are less clear.

"In recent decades, we experienced a significant increase in large-scale wildfires, therefore, more people are being exposed to wildfire smoke. In order to respond properly, it is important for us to understand the health impacts of wildfire smoke exposure," said study author Ana G. Rappold, Ph.D., a research scientist at the U.S. Environmental Protection Agency's Center for Public Health and Environmental Assessment in the Office of Research and Development.

Researchers examined cardiac arrests in 14 wildfire-affected counties in California between 2015 and 2017, using information submitted to the Cardiac Arrest Registry to Enhance Survival (CARES), a health registry established by the Centers for Disease Control and Prevention. Smoke density exposure was rated as light, medium or heavy according to mapping data from the National Oceanic and Atmospheric Administration. The researchers compared smoke exposure on the day of the OHCA to the exposure on the same day of the week in the three prior weeks. They also compared the exposure one, two and three days before the OHCA to the exposure on the corresponding days in the three weeks prior to the cardiac arrest.
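The comparison described above is a case-crossover design: each person serves as their own control, with exposure on the event day compared against matched referent days. A minimal sketch of picking those referent days (the date is hypothetical, chosen only for illustration):

```python
from datetime import date, timedelta

def referent_days(event_day, weeks=3):
    """Matched referents for a case-crossover comparison: the same
    weekday in each of the preceding `weeks` weeks."""
    return [event_day - timedelta(weeks=w) for w in range(1, weeks + 1)]

# A hypothetical out-of-hospital cardiac arrest on Thursday, Oct. 12, 2017:
event = date(2017, 10, 12)
refs = referent_days(event)
print([d.isoformat() for d in refs])
# ['2017-10-05', '2017-09-28', '2017-09-21'] -- all Thursdays
```

Matching on weekday controls for weekly patterns in behavior, so any difference in smoke exposure between the event day and its referents is more plausibly linked to the arrest itself.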

The analysis found that the risk of out-of-hospital cardiac arrests:

increased on days rated as heavy smoke density and for several days afterwards, with the highest risk (70% higher than on days with no smoke) on the second day after smoke exposure;

increased among both men and women and in people age 35 and older exposed to heavy smoke; and

increased in communities with lower socioeconomic status (20% or more people living below the poverty line) with both medium and heavy smoke exposure.

"Particulate matter from smoke that is inhaled can penetrate deeply into the lungs, and very small particles may cross into the bloodstream. These particles can create an inflammatory reaction in the lungs and throughout the body. The body's defense system may react to activate the fight-or-flight system, increasing heart rate, constricting blood vessels and increasing blood pressure. These changes can lead to disturbances in the heart's normal rhythm, blockages in blood vessels and other effects creating conditions that could lead to cardiac arrest," Rappold said.

Although the researchers had no information about the actions taken by individuals, the increased risk they found in people living in lower-income communities might reflect less access to strategies to reduce exposure. Previous studies have shown that there are more respiratory problems in lower-income communities and worsening congestive heart failure in response to wildfire smoke exposure.

"People in a higher socioeconomic status group who have pre-existing cardiopulmonary conditions may be better able to take effective action to decrease exposure, such as staying indoors, using portable air filters or using effective respirator masks. They may also be more likely to live in homes with air conditioning and efficient air filtration," Rappold said.

"While other studies have found that older adults are more affected, we also observed elevated effects among middle-aged adults (aged 35-64). It is possible that this population may not be aware of their risk and may not have flexibility to discontinue activities that involve exertion and exposure during wildfire smoke episodes," concluded Rappold.

To reduce exposure to wildfire smoke, researchers advise people to stay indoors with doors and windows closed, to use high-efficiency air filters in air conditioning systems, avoid exertion, and consider seeking shelter elsewhere if the home does not have an air conditioner and it is too warm to stay inside.

The small sample size limited the researchers' ability to determine how the risks of smoke exposure might differ among people of different ages and genders. Individuals with personal health questions or concerns should consult with their doctor, researchers said.

Credit: 
American Heart Association

How does sugar drive consumption? Scientists discover gut-brain sugar sensor in mice

image: Sugar-sensing neurons (pink) in cNST brain region of a mouse.

Image: 
Hwei-Ee Tan/Zuker lab/Columbia's Zuckerman Institute

NEW YORK -- Artificial sweeteners have never fully succeeded in impersonating sugar. Now, a Columbia study in mice has identified a brain mechanism that may explain why.

In a scientific first, researchers have shown that the brain responds not only when sugar touches the tongue but also when it enters the gut. Their discovery of this specialized gut-brain circuit offers new insight into the way the brain and body evolved to seek out sugar. And because artificial sweeteners do not activate this circuit, the study also offers compelling evidence as to why these sweeteners are never quite as satisfying as the real thing.

The findings, reported today in Nature, stand to have a substantial positive impact on public health. Excess sugar intake has been linked to obesity-related conditions such as diabetes, which affects more than 500 million people worldwide. By laying the foundation for new ways to modify this gut-brain circuit, this research offers promising new paths to reducing sugar overconsumption.

"When we drink diet soda, or use sweetener in coffee, it may taste similar but our brains can tell the difference," said Hwei-Ee Tan, the paper's co-first author who completed his doctoral research in the lab of Charles Zuker, PhD, at Columbia's Zuckerman Institute. "The discovery of this specialized gut-brain circuit that responds to sugar -- and sugar alone -- could pave the way for sweeteners that don't just trick our tongue but also our brain."

Today's study was led by Dr. Zuker and builds upon decades of work by him and his lab to map the brain's taste system.

When the tongue encounters a taste -- sweet, bitter, salty, sour or umami -- specialized cells on the tongue called taste receptors send hardwired signals to the brain. Artificial sweeteners, such as NutraSweet and Stevia, work by co-opting these hardwired signals. They switch on sweet taste receptors to fool the brain into thinking that sugar has landed on the tongue.

When sweet taste receptors were deleted in mice, which should have eliminated the animals' desire for anything sweet, the animals still displayed a preference for sugar. The research team's goal was to find out why and how, and to uncover the neural basis for our insatiable desire for sugar.

The Columbia team focused on a brain area called the caudal nucleus of the solitary tract, or cNST. The cNST is tucked inside the brain stem, the most primitive area of the brain.

"We discovered that the cNST lit up with activity when we bypassed the sweet taste receptors on the animals' tongues and delivered sugar directly to the gut," said Alexander Sisti, PhD, the paper's co-first author who also completed his doctoral research in the Zuker lab. "Something was transmitting a signal, indicating the presence of sugar, from the gut to the brain."

The research team then turned their attention to the vagus nerve, a well-known conduit between the brain and the body's internal organs.

In a series of experiments in mice, the scientists developed techniques to monitor the real-time activity of cells in the vagus nerve. The team observed how these cells' activity changed when sugar was delivered into the animals' gut.

"By recording brain-cell activity in the vagus nerve, we pinpointed a cluster of cells in the vagus nerve that respond to sugar," said Dr. Sisti. "We saw, for the first time, sugar-sensing via this direct pathway from the gut to the brain."

Further experiments revealed the circuit in greater detail. Inhibiting a specific sugar-transporting protein in the gut eliminated the animals' neural response to sugar, showing that this protein, called SGLT-1, is a key sensor transmitting the presence of sugar from the gut to the brain via what is known as the gut-brain axis.

In a key prediction from this study, the team then showed that silencing this gut-brain circuit completely abolishes the animals' craving and preference for sugar, demonstrating how controlling the function of this circuit can dramatically impact the animal's desire for sugar. In one of their final sets of experiments, the researchers also switched on the brain cells that normally respond to sugar signals from the gut. This time, however, they activated these cells every time the animal consumed a sugar-free Kool-Aid drink, in essence hijacking this circuit. Remarkably, says Dr. Zuker, the animals acted as if they were getting real sugar, in effect fooling the brain into responding as if they were consuming it.

Taken together, these findings demonstrate the existence of two complementary, yet independent, systems for sensing energy-rich sugar, one getting input from the tongue, the other from the gut.

"These findings could spur the development of more effective strategies to meaningfully curtail our unquenchable drive for sugar, from modulating various components of this circuit to potentially sugar substitutes that more closely mimic the way sugar acts on the brain," said Tan.

Going forward, the team plans to link the gut-brain axis to other brain circuits, including those involved in reward, feeding, and emotions.

Credit: 
The Zuckerman Institute at Columbia University

Journey to the center of the Earth

image: Different layers within the Earth have differing compositions and densities.

Image: 
© 2020 Kelvinsong - CC BY-SA 3.0

In an effort to investigate conditions found at the Earth's molten outer core, researchers successfully determined the density of liquid iron and the speed of sound through it at extremely high pressures. They achieved this using a highly specialized diamond anvil, which compresses samples, together with sophisticated X-ray measurements. Their findings confirm the molten outer core is less dense than liquid iron and also put a value on the discrepancy.

Jules Verne's 1864 novel Journey to the Center of the Earth takes explorers on an imaginative trip down to the Earth's core, where they find a gargantuan hollow cavern hosting a prehistoric environment, dinosaurs included. They get there by climbing down through an Icelandic volcano. It sounds fun, but needless to say, it's a far cry from reality, where researchers explore the inner Earth with a range of techniques and instruments from the comparative safety of the Earth's surface.

Seismic instruments, which measure how earthquake waves travel through the planet, are pivotal for mapping the larger structural arrangements within the Earth, and thanks to them it has long been known that at the heart of the Earth lies a solid core surrounded by a less dense liquid outer core. Now, for the first time, experiments and simulations have shown researchers details about this outer core that were previously unobtainable. And some of those details are fascinating.

"Recreating conditions found at the center of the Earth up here on the surface is not easy," remarked Project Assistant Professor Yasuhiro Kuwayama from the Department of Earth and Planetary Science. "We used a diamond anvil to compress a sample of liquid iron subject to intense heat. But more than just creating the conditions, we needed to maintain them long enough to take our measurements. This was the real challenge."

It is harder to measure the density of a liquid sample than a solid one, as it takes the apparatus longer to do so. But with a unique experimental setup crafted over two decades, centered around the diamond anvil, Kuwayama and his team maintained the extreme conditions long enough to collect the data they required. They used a highly focused X-ray source from the SPring-8 synchrotron in Japan to probe the sample and measure its density.

"We found the density of liquid iron such as you'd find in the outer core to be about 10 tons per cubic meter at a pressure of 116 gigapascals, and the temperature to be 4,350 Kelvin," explained Kuwayama. "For reference, 273 Kelvin is the freezing point of water. So this sample is roughly 16 times hotter than ice water, and 10 times denser than water."
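A quick back-of-the-envelope check puts the reported figures in everyday units; these conversions are the editor's arithmetic, not from the paper.

```python
# Back-of-the-envelope conversions of the reported outer-core conditions.
density = 10_000              # kg/m^3, i.e. 10 tons per cubic meter
water = 1_000                 # kg/m^3, density of water
print(density / water)        # 10.0 -- ten times denser than water

pressure = 116e9              # Pa (the reported 116 gigapascals)
atmosphere = 101_325          # Pa, one standard atmosphere
print(round(pressure / atmosphere / 1e6, 1))  # 1.1 -- over a million atmospheres
```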

When compared to this new measurement, the Earth's outer core appears to be about 8 percent less dense than pure liquid iron. The suggestion is that there are additional lighter elements in the molten outer core which are currently unidentified. This research could aid others in their quest to reveal more secrets from deep within the Earth.

"It's important to investigate these things to understand more, not only about the Earth's core, but about the composition, and thus behavior, of other planets as well," concluded Kuwayama. "It's important to note that it was not just elaborate equipment that helped us find this new information, but also meticulous mathematical modeling and analytical methods. We were pleasantly surprised by how effective this approach was and hope it can lead to a greater understanding of the world beneath our feet."

Credit: 
University of Tokyo

Shedding light on dark traps

image: Researchers from the OIST Femtosecond Spectroscopy Unit conduct experiments in the laser lab.

Image: 
OIST/Togo

A multi-institutional collaboration, co-led by scientists at the University of Cambridge and Okinawa Institute of Science and Technology Graduate University (OIST), has discovered the source of efficiency-limiting defects in potential materials for next generation solar cells and flexible LEDs.

In the last decade, perovskites - a diverse range of materials with a specific crystal structure - have emerged as promising alternatives to silicon solar cells, as they are cheaper and greener to manufacture, whilst achieving a comparable level of efficiency.

However, perovskites still show significant performance losses and instabilities. Most research to date has focused on ways to remove these losses, but their actual physical causes remain unknown.

Now, in a paper published today in Nature, researchers from Dr. Sam Stranks's group at Cambridge University's Department of Chemical Engineering and Biotechnology and Cavendish Laboratory, and Professor Keshav Dani's Femtosecond Spectroscopy Unit at OIST in Japan, identify the source of the problem. Their discovery could streamline efforts to increase the efficiency of perovskites, bringing them closer to mass-market production.

When light hits a perovskite solar cell or when electricity passes through a perovskite LED, electrons are excited and can jump to a higher energy state. The negatively-charged electrons leave behind spaces, called holes, which then have a relatively positive charge. Both excited electrons and holes can move through the perovskite material, and therefore act as charge carriers.

But in perovskites, a certain type of defect called a "deep trap" can occur, where energized charge carriers can get stuck. The trapped electrons and holes recombine, losing their energy to heat rather than converting it into useful electricity or light, which significantly reduces the efficiency and stability of solar panels and LEDs.

Until now, very little was known about the cause of these traps, in part because they appear to behave rather differently to traps in traditional solar cell materials.

In 2015, Dr. Stranks's group published a Science paper looking at the luminescence of perovskites, which reveals how good they are at absorbing or emitting light. "We found that the material was very heterogeneous; you had quite large regions that were bright and luminescent, and other regions that were really dark," said Dr. Stranks. "These dark regions correspond to power losses in solar cells or LEDs. But what was causing the power loss was always a mystery, especially because perovskites are otherwise so defect-tolerant."

Due to limitations of standard imaging techniques, the group couldn't tell if the darker areas were caused by one, large trap site, or many smaller traps, making it difficult to establish why they were forming only in certain regions.

Then, in 2017, Prof. Dani's group at OIST published a paper in Nature Nanotechnology, in which they made a movie of how electrons behave in semiconductors after absorbing light. "You can learn a lot from being able to see how charges move in a material or device after shining light. For example, you can see where they might be getting trapped," said Prof. Dani. "However, these charges are hard to visualize as they move very fast -- on the timescale of a millionth of a billionth of a second; and over very short distances -- on the length scale of a billionth of a meter."

On hearing of Prof. Dani's work, Dr. Stranks reached out to see if they could work together to address the problem of visualizing the dark regions in perovskites.

The team at OIST used a technique called photoemission electron microscopy (PEEM) for the first time on perovskites, where they probed the material with ultraviolet light and formed an image from the emitted electrons.

When they looked at the material, they found that the dark regions contained traps, around 10-100 nanometers in length, which were clusters of smaller atomic-sized trap sites. These trap clusters were spread unevenly throughout the perovskite material, explaining the heterogeneous luminescence seen in Dr. Stranks's earlier research.

Intriguingly, when the researchers overlaid images of the trap sites onto images that showed the crystal grains of the perovskite material, they found that the trap clusters only formed at specific places, at the boundaries between certain grains.

To understand why this only occurred at certain grain boundaries, the groups worked with Professor Paul Midgley's team from Cambridge University's Department of Materials Science and Metallurgy, who used a technique called scanning electron diffraction to create detailed images of the perovskite crystal structure. Prof. Midgley's team made use of the electron microscopy setup at the ePSIC facility at the Diamond Light Source Synchrotron, which has specialized equipment for imaging beam-sensitive materials, like perovskites.

"Because these materials are super beam-sensitive, typical techniques that you would use to probe local crystal structure on these length scales can actually quite quickly change the material as you're looking at it," explained Tiarnan Doherty, a PhD student in Stranks's group and co-lead author of the study. "Instead, we were able to use very low exposure doses and therefore prevent damage.

"From the work at OIST, we knew where the trap clusters were located, and at ePSIC, we scanned around that same area to see the local structure. We were able to quickly pinpoint unexpected variations in the crystal structure around the trap sites."

The group discovered that the trap clusters only formed at junctions where an area of the material with slightly distorted structure met an area with pristine structure.

"In perovskites, we have these regular mosaic grains of material and most of the grains are nice and pristine - the structure we would expect," said Dr. Stranks. "But every now and again, you get a grain that's slightly distorted and the chemistry of that grain is inhomogeneous. What was really interesting, and which initially confused us, was that it's not the distorted grain that's the trap but where that grain meets a pristine grain; it's at that junction that the traps form."

With this understanding of the nature of the traps, the team at OIST also used the custom-built PEEM instrumentation to visualize the dynamics of the charge carrier trapping process happening in the perovskite material. "This was possible as one of the unique features of our PEEM setup is that it can image ultrafast processes - as short as femtoseconds," explained Andrew Winchester, a PhD student in Prof. Dani's Unit, and co-lead author of this study. "We found that the trapping process was dominated by charge carriers diffusing to the trap clusters."
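The diffusion-limited trapping the PEEM movies revealed can be caricatured with a random walk. This is purely schematic, with invented parameters, and is not the team's model: carriers hop randomly until they wander into a trap cluster, so carriers that start near a cluster are lost far more often.

```python
import random

def fraction_trapped(steps, trap_sites, walkers=2000, seed=1):
    """Schematic 1-D random walk: fraction of charge carriers that
    diffuse into a trap cluster within a fixed number of hops."""
    rng = random.Random(seed)   # fixed seed for a reproducible sketch
    trapped = 0
    for _ in range(walkers):
        x = 0
        for _ in range(steps):
            x += rng.choice((-1, 1))
            if x in trap_sites:  # carrier reaches a trap cluster and recombines
                trapped += 1
                break
    return trapped / walkers

# Carriers starting near a trap cluster are lost far more often than
# carriers starting far from one -- diffusion-limited trapping.
near = fraction_trapped(steps=50, trap_sites={3, -3})
far = fraction_trapped(steps=50, trap_sites={15, -15})
print(near > far)  # True
```

This is why Prof. Dani's suggestion below, rearranging trap clusters rather than removing them, could matter: in a diffusion-limited picture, losses depend on how far carriers must travel to reach a trap.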

These discoveries represent a major breakthrough in the quest to bring perovskites to the solar energy market.

"We still don't know exactly why the traps are clustering there, but we now know that they do form there, and only there," said Dr. Stranks. "That's exciting because it means we now know what to target to bring up the performances of perovskites. We need to target those inhomogeneous phases or get rid of these junctions in some way."

"The fact that charge carriers must first diffuse to the traps could also suggest other strategies to improve these devices," said Prof. Dani. "Maybe we could alter or control the arrangement of the trap clusters, without necessarily changing their average number, such that charge carriers are less likely to reach these defect sites."

The teams' research focused on one particular perovskite structure. The scientists will now be investigating whether the cause of these trapping clusters is universal across all perovskite materials.

"Most of the progress in device performance has been trial and error and so far, this has been quite an inefficient process," said Dr. Stranks. "To date, it really hasn't been driven by knowing a specific cause and systematically targeting that. This is one of the first breakthroughs that will help us to use the fundamental science to engineer more efficient devices."

Credit: 
Okinawa Institute of Science and Technology (OIST) Graduate University

T2K insight into the origin of the universe

image: Kamioka Observatory, ICRR (Institute for Cosmic Ray Research), The University of Tokyo.

Image: 
Kamioka Observatory, ICRR (Institute for Cosmic Ray Research), The University of Tokyo

Lancaster physicists working on the T2K major international experiment in Japan are closing in on the mystery of why there is so much matter in the Universe, and so little antimatter.

The Big Bang should have created equal amounts of matter and antimatter in the early Universe but instead the Universe is made of matter. One of the greatest challenges in physics is to determine what happened to the antimatter, or why we see an asymmetry between matter and antimatter.

Tokai to Kamioka (T2K) researchers have revealed in the journal Nature that almost half of the possible parameter values that determine matter-antimatter asymmetry in the Universe have been ruled out.

Dr Laura Kormos, Senior Lecturer in Physics at Lancaster University, head of Lancaster's neutrino physics group and researcher at T2K, said: "Our data continue to suggest that Nature prefers almost the maximal value of asymmetry for this process. It would be just like Mother Nature to have these seemingly insignificant, difficult to study, tiny particles be the driver for the existence of the universe."

The T2K experiment studies neutrinos, one of the fundamental particles that make up the Universe and one of the least well understood. Yet every second, trillions of neutrinos from the Sun pass through your body. These tiny particles, produced copiously within the Sun and other stars, come in three varieties, or flavours, and may spontaneously change, or oscillate, from one to another.

Each flavour of neutrino has an associated antineutrino. If these flavour changes, or oscillations, differ between neutrinos and antineutrinos, that could help to explain the observed dominance of matter over antimatter in our Universe, a question that has puzzled scientists for a century.

For most phenomena, the laws of physics provide a symmetric description of the behaviour of matter and antimatter. However, this symmetry must have been broken soon after the Big Bang to explain the observed Universe, which is composed of matter with little antimatter.

A necessary condition is the violation of the so-called Charge-Parity (CP) symmetry. Until now, there has not been enough observed CP symmetry violation to explain the existence of our Universe.

T2K is searching for a new source of CP symmetry violation in neutrino oscillations that would manifest itself as a difference in the measured oscillation probability for neutrinos and antineutrinos.

The parameter governing the matter/antimatter symmetry breaking in neutrino oscillation, called the δcp phase, can take a value from -180º to 180º. For the first time, T2K has disfavoured almost half of the possible values at the 99.7% (3σ) confidence level, and is starting to reveal a basic property of neutrinos that has not been measured until now.
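The quoted "99.7% (3σ)" confidence level is the familiar three-sigma fraction of a normal distribution, which can be checked directly with the error function:

```python
from math import erf, sqrt

def within_sigma(n):
    """Fraction of a Gaussian lying within +/- n standard deviations."""
    return erf(n / sqrt(2))

print(round(within_sigma(3) * 100, 1))  # 99.7 -- the 3-sigma level quoted
```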

Dr Helen O'Keeffe, Senior Lecturer in Physics at Lancaster University and researcher at T2K, said: "This result will help shape future stages of T2K and the development of next-generation experiments. It is a very exciting outcome from many years of work."

This is an important step on the way to knowing whether or not neutrinos and antineutrinos behave differently.

Credit: 
Lancaster University