Tech

All disease models are 'wrong,' but scientists are working to fix that

An international team of researchers has developed a new mathematical tool that could help scientists to deliver more accurate predictions of how diseases, including COVID-19, spread through towns and cities around the world.

Rebecca Morrison, an assistant professor of computer science at the University of Colorado Boulder, led the research. For years, she has run a repair shop of sorts for mathematical models--those strings of equations and assumptions that scientists use to better understand the world around them, from the trajectory of climate change to how chemicals burn up in an explosion.

As Morrison put it, "My work starts when models start to fail."

She and her colleagues recently set their sights on a new challenge: epidemiological models. What can researchers do, in other words, when their forecasts for the spread of infectious diseases don't match reality?

In a study published today in the journal Chaos, Morrison and Brazilian mathematician Americo Cunha turned to the 2016 outbreak of the Zika virus as a test case. They report that a new kind of tool called an "embedded discrepancy operator" might be able to help scientists fix models that fall short of their goals--effectively aligning model results with real-world data.

Morrison is quick to point out that her group's findings are specific to Zika. But the team is already trying to adapt their methods to help researchers get ahead of a second virus, COVID-19.

"I don't think this tool is going to solve any epidemiologic crisis on its own," Morrison said. "But I hope it will be another tool in the arsenal of epidemiologists and modelers moving forward."

When models fail

The study highlights a common issue that modelers face.

"There are very few situations where a model perfectly corresponds with reality. By definition, models are simplified from reality," Morrison said. "In some way or another, all models are wrong."

Cunha, an assistant professor at Rio de Janeiro State University, and his colleagues ran up against that very problem several years ago. They were trying to adapt a common type of disease model--called a Susceptible-Exposed-Infected-Recovered (SEIR) model--to recreate the Zika virus outbreak from start to finish. In 2015 and 2016, this pathogen ran rampant through Brazil and other parts of the world, causing thousands of cases of severe birth defects in infants.

The problem: No matter what the researchers tried, their results didn't match the recorded number of Zika cases, in some cases miscalculating the number of infected people by tens of thousands.

Such a shortfall isn't uncommon, Cunha said.

"The actions you take today will affect the course of the disease," he said. "But you won't see the results of that action for a week or even a month. This feedback effect is extremely difficult to capture in a model."

Rather than abandon the project, Cunha and Morrison teamed up to see if they could fix the model. Specifically, they asked: If the model wasn't replicating real-world data, could they use that data to fashion a better model?

Enter the embedded discrepancy operator. You can picture this tool, which Morrison first developed to study the physics of combustion, as a sort of spy that sits within the guts of a model. When researchers feed data to the tool, it sees and responds to the information, then rewrites the model's underlying equations to better match reality.

"Sometimes, we don't know the correct equations to use in a model," Cunha said. "The idea behind this tool is to add a correction to our equations."

The method worked. After letting their operator do its thing, Morrison and Cunha discovered that they had nearly eliminated the gap between the model's results and public health records.

Being honest

The team isn't stopping at Zika. Morrison and Cunha are already working to deploy their same strategy to try to improve models of the coronavirus pandemic.

Morrison doubts that any disease model will ever be 100% accurate. But, she said, these tools are still invaluable for informing public health decisions--especially if modelers are up front about what their results can or can't tell you about a disease.

"This epidemic has revealed how difficult it is to model a real system," Morrison said. "But I hope that people don't take that to mean that we shouldn't trust our scientists."

Credit: 
University of Colorado at Boulder

Heating could be the best way to disinfect N95 masks for reuse

Since the outbreak of the COVID-19 pandemic, N95 face masks have been in short supply. Health care workers, in particular, desperately need these masks to protect themselves from the respiratory droplets of infected patients. But because of the shortage, many have to wear the same mask repeatedly. Now, researchers reporting in ACS Nano have tested several methods for disinfecting N95 materials, finding that heating them preserves their filtration efficiency for 50 cycles of disinfection.

N95 masks contain a layer of "meltblown" polypropylene fibers that form a porous, breathable network. To help capture smaller particles that could slip through the holes, the fibers are electrostatically charged. The U.S. Centers for Disease Control and Prevention has recommended several methods for disinfecting N95 masks, such as heating, ultraviolet (UV) radiation and bleach treatment, but so far they have not been tested extensively, especially for multiple rounds of disinfection. Yi Cui and colleagues wanted to compare five of the methods that could reasonably be used within a hospital setting to see how mask materials hold up to repeated disinfections.

In this study, instead of analyzing N95 masks -- which should be reserved for health care workers -- the researchers examined pieces of the meltblown fabric used to make these masks. They treated the material with a particular disinfectant and compared its ability to filter aerosol particles (resembling respiratory droplets, but lacking coronavirus) before and after disinfection. The team found that spraying the fabric with an ethanol or chlorine bleach solution drastically reduced the filtration efficiency after only one treatment, from about 96% to 56% (ethanol) or 73% (bleach). A single steam treatment maintained filtration, but five steam treatments led to a sharp decline in efficiency. UV radiation allowed up to 20 cycles of disinfection; however, administering the exact dose of UV that kills the virus without damaging mask materials could be problematic, the researchers note. The best disinfection method appeared to be heating. For example, heating at 185 F for 20 minutes allowed the fabric to be treated 50 times without loss of filtration efficiency. But frequently donning and removing N95 masks could affect fit, which also impacts performance, the researchers point out.

Credit: 
American Chemical Society

Veterans battle homelessness long after discharge from the military

image: Cumulative percentage of homeless VA service users after military discharge from 2000-2018.

Image: 
Jack Tsai, PhD, Dorota Szymkowiak, PhD, and Robert H. Pietrzak, PhD, MPH

Ann Arbor, May 5, 2020 - According to a new study in the American Journal of Preventive Medicine, published by Elsevier, homelessness among US military veterans rarely occurs immediately after military discharge, but instead takes years to manifest, with risk increasing over subsequent years. The study shows that this delayed, "sleeper" effect is evident among veterans who served before the Persian Gulf War era, as well as among more recent groups from the post-9/11 conflicts in Afghanistan and Iraq.

"The study points to the long-life cycle leading to homelessness among veterans. It often takes years for problems stemming from military service to build up before a veteran becomes homeless," explained Jack Tsai, who holds a doctorate in clinical psychology and is a clinical psychologist and health services researcher. "The team and I found that the risk increases exponentially over time in the period 5-15 years post-military discharge."

Dr. Tsai is research director for the US Department of Veterans Affairs, National Center on Homelessness Among Veterans, Tampa, Florida, USA. He is also affiliated with the School of Public Health, University of Texas Health Science Center at Houston, San Antonio, TX, and the Department of Psychiatry, Yale University School of Medicine, New Haven, CT.

Data from two nationally representative samples were analyzed, including the records of 275,775 homeless veterans who used Department of Veterans Affairs (VA) services from 2000-2019, as well as a 2018 population-based community survey of 115 veterans with a history of homelessness. The average time between discharge and homelessness was found to be 5.5 years in the VA sample and 9.9 years in the survey sample.

Significant factors associated with longer discharge-to-homelessness periods include service in the Vietnam War, younger age at military discharge, income, and chronic medical and psychiatric conditions (e.g., depression and alcohol abuse). The findings suggest that some medical and psychiatric conditions take time to develop and do not quickly lead to homelessness but follow a more chronic course that, if untreated, can eventually lead to homelessness.

Deployments to the post-9/11 conflicts in Iraq and Afghanistan were significantly associated with shorter duration between discharge and homelessness, a phenomenon that is accelerating. Among homeless VA service users discharged from 2000 to 2008, it took 10 years or more for 10 percent of them to become homeless; among those discharged from 2009 to 2014, more than 10 percent were homeless seven years after discharge. This finding confirms previous research that veterans returning from Iraq and Afghanistan experience considerable difficulties with social adjustment.

"Understanding what happens to people after they leave the military and at what point they become homeless is important for policymakers, service providers, veterans, and their family members in order to prevent new generations of veterans from becoming homelessness. Those who end up homeless have very low quality of life, and developing strategic early interventions at various stages after military discharge can mitigate that risk," noted Dr. Tsai.

Primary and secondary prevention focused on chronic health conditions and social adjustment are crucial to preventing homelessness among these veterans. Further research is needed on how best to deploy resources and develop innovative interventions to prevent homelessness among veterans. The study also points to the effect of intersecting macro socioeconomic issues, such as the lack of affordable housing, unemployment, and barriers facing particular subgroups, such as women veterans with children and veterans with cognitive impairments.

Credit: 
Elsevier

Malaria risk is highest in early evening, study finds

image: The researchers found that mosquitoes are more likely to transmit malaria in the early evening, when people are exposed, than at midnight, when people are protected by bed nets, or in the morning.

Image: 
Eunho Suh, Penn State

UNIVERSITY PARK, Pa. -- Wide-scale use of insecticide-treated bed nets has led to substantial declines in the global incidence of malaria in recent years. As a result, mosquitoes have been shifting their biting times to earlier in the evening and later in the morning. In a new study, an international team of researchers has found that mosquitoes are more likely to transmit malaria in the early evening, when people are exposed, than at midnight, when people are protected by bed nets, or in the morning. The findings may have implications for malaria prevention initiatives.

"Wide-scale use of insecticide-treated bed nets has led to substantial declines in the global burden of malaria in recent years; however, evidence from a number of locations suggests that mosquitoes might be changing their biting behavior in order to avoid contact with these nets," said Matthew Thomas, professor and Huck Scholar in Ecological Entomology, Penn State. "This so-called 'behavioral resistance' could have enormous implications for public health because if more mosquitoes feed in the evening or the morning, the protective efficacy of nets could be reduced."

The team conducted a series of laboratory studies to examine whether timing of feeding affects a mosquito's ability to become infectious with the malaria parasite. They presented the two most important malaria mosquitoes -- Anopheles stephensi and Anopheles gambiae -- with infected blood meals at different times of day and under different temperature conditions and monitored them to determine their "vector competence" -- the ability to successfully acquire malaria parasites and become infectious. The results appear today (May 4) in Nature Ecology and Evolution.

The researchers found that the time of day of feeding did not affect vector competence when the temperature was maintained at a constant 80°F. However, when mosquitoes were maintained under conditions representing more realistic temperature variation -- ranging from a few degrees above and below 80°F -- there was significant variation in vector competence, with approximately 88% of evening biters, 65% of midnight biters and 13% of morning biters testing positive for parasites in Anopheles stephensi mosquitoes. For Anopheles gambiae, 55% of evening biters, 26% of midnight biters and 0.8% of morning biters were positive for parasites.

"Warm temperatures can inhibit parasite establishment, so the longer the time before mosquitoes are exposed to warm daytime temperatures, the better the chances that the mosquito becomes infected," said Eunho Suh, postdoctoral scholar, Penn State. "Mosquitoes feeding in the morning have only 4 hours before temperatures become too hot for the parasite to be transmitted, while those that feed in the evening have 16 hours of cooler temperatures."

Thomas added that many studies have investigated malaria infection in mosquitoes in lab settings, but this work has tended to ignore the possible influence of environmental factors such as time of day and temperature variation.

"It is really striking that when you add this ecological complexity, plus or minus six hours in the time of feeding can transform a mosquito from being extremely susceptible to infection by malaria to becoming almost completely refractory," he said.

Next, the researchers created a mathematical model to explore the potential public health implications of a change in mosquito infectivity driven by the timing of mosquito bites. The model results support their laboratory findings.

"There is major concern that shifts in patterns of mosquito feeding could reduce the effectiveness of bed nets, which are our most important tool in the fight against malaria," said Thomas. "Key next steps are to extend the work to field systems to evaluate the robustness of the findings in the real world."

Credit: 
Penn State

Intensive farming increases risk of epidemics, warn scientists

video: Evangelos Mourkas, first author of the paper, talks about the research.

Image: 
University of Bath

Overuse of antibiotics, high animal numbers and low genetic diversity caused by intensive farming techniques increase the likelihood of pathogens becoming a major public health risk, according to new research led by UK scientists.

An international team of researchers led by the Universities of Bath and Sheffield investigated the evolution of Campylobacter jejuni, a bacterium carried by cattle that is the leading cause of gastroenteritis in high-income countries.

Campylobacter facts:

Causes bloody diarrhoea in humans

Transferred to humans from eating contaminated meat and poultry

Although not as dangerous as typhoid, cholera or E.coli, it causes serious illness in patients with underlying health issues and can cause lasting damage.

Around 1 in 7 people suffer from an infection at some point in their life

Causes three times more cases than E.coli, Salmonella and listeria combined

Carried in the faeces of chickens, pigs, cattle and wild animals

Campylobacter is estimated to be present in the faeces of 20% of cattle worldwide

The bug is very resistant to antibiotics due to their use in farming

The researchers, publishing in the prestigious journal Proceedings of the National Academy of Sciences, studied the genetic evolution of the pathogen and found that cattle-specific strains of the bacterium emerged at the same time as a dramatic rise in cattle numbers in the 20th Century.

The authors of the study suggest that changes in cattle diet, anatomy and physiology triggered gene transfer between general and cattle-specific strains with significant gene gain and loss. This helped the bacterium to cross the species barrier and infect humans, triggering a major public health problem.

Combined with the increased global movement of animals, intensive farming practices have provided the perfect environment for the bacterium to spread globally through trade networks.

Professor Sam Sheppard from the Milner Centre for Evolution at the University of Bath, said: "There are an estimated 1.5 billion cattle on Earth, each producing around 30 kg of manure each day; if roughly 20 per cent of these are carrying Campylobacter, that amounts to a huge potential public health risk.

"Over the past few decades, there have been several viruses and pathogenic bacteria that have switched species from wild animals to humans: HIV started in monkeys; H5N1 came from birds; now Covid-19 is suspected to have come from bats.

"Our work shows that environmental change and increased contact with farm animals has caused bacterial infections to cross over to humans too.

"I think this is a wake-up call to be more responsible about farming methods, so we can reduce the risk of outbreaks of problematic pathogens in the future."

Professor Dave Kelly from the Department of Molecular Biology and Biotechnology at the University of Sheffield said: "Human pathogens carried in animals are an increasing threat and our findings highlight how their adaptability can allow them to switch hosts and exploit intensive farming practices."

Credit: 
University of Bath

Study: Climate change has been influencing where tropical cyclones rage

image: This graphic depicts the global pattern of where the frequency of tropical cyclones has increased and where it has decreased from 1980 to 2018. New NOAA research shows that while the annual average number of tropical cyclones has remained at 86, climate change has influenced where the frequency of tropical cyclones has increased or decreased.

Image: 
NOAA

While the global average number of tropical cyclones each year has not budged from 86 over the last four decades, climate change has been influencing the locations of where these deadly storms occur, according to new NOAA-led research published in Proceedings of the National Academy of Sciences.

New research indicates that the number of tropical cyclones has been rising since 1980 in the North Atlantic and Central Pacific, while storms have been declining in the western Pacific and in the southern Indian Ocean.

"We show for the first time that this observed geographic pattern cannot be explained only by natural variability," said Hiroyuki Murakami, a climate researcher at NOAA's Geophysical Fluid Dynamics Laboratory and lead author.

Murakami used climate models to determine that greenhouse gases, manmade aerosols including particulate pollution, and volcanic eruptions were influencing where tropical cyclones were hitting.

3 forces influence where storms are hitting

Greenhouse gases are warming the upper atmosphere and the ocean. Together, these effects create a more stable atmosphere, with less chance that convecting air currents will help spawn and build up tropical cyclones.

Particulate pollution and other aerosols help create clouds and reflect sunlight away from the earth, causing cooling, Murakami said. The decline in particulate pollution due to pollution control measures may increase the warming of the ocean by allowing more sunlight to be absorbed by the ocean.

The decline in manmade aerosols is one reason for the active tropical cyclone seasons in the North Atlantic over the last 40 years, Murakami said. However, toward the end of this century, tropical cyclones in the North Atlantic are projected to decrease due to the "calming" effect of greenhouse gases.

Volcanic eruptions have also altered the location of where tropical cyclones have occurred, according to the research. For example, the major eruptions of El Chichón in Mexico in 1982 and Pinatubo in the Philippines in 1991 caused the atmosphere of the northern hemisphere to cool, which shifted tropical cyclone activity southward for a few years. Ocean warming has resumed since 2000, leading to increased tropical cyclone activity in the northern hemisphere.

Looking ahead: Scientists predict fewer tropical cyclones by 2100 but likely more severe

Climate models project decreases in tropical cyclones toward the end of the 21st century from the annual average of 86 to about 69 worldwide, according to the new study. Declines are projected in most regions except in the Central Pacific Ocean, including Hawaii, where tropical cyclone activity is expected to increase.

Despite a projected decline in tropical cyclones by 2100, many of these cyclones will be significantly more severe. Why? Rising sea surface temperatures fuel the intensity and destructiveness of tropical storms.

"We hope this research provides information to help decision-makers understand the forces driving tropical cyclone patterns and make plans accordingly to protect lives and infrastructure," Murakami said.

Credit: 
NOAA Headquarters

Blood flows could be more turbulent than previously expected

video: Flow behavior during one complete imposed flow cycle, in which the emergence of a helical structure can be discerned during the deceleration phase of the cycle.

Image: 
© Hof group / IST Austria.

Blood flow in the human body is generally assumed to be smooth due to its low speed and high viscosity. Unsteadiness in blood flow is linked to various cardiovascular diseases and has been shown to promote dysfunction and inflammation in the inner layer of blood vessels, the endothelium. In turn, this can lead to the development of arteriosclerosis--a leading cause of death worldwide--where arterial pathways in the body narrow due to plaque buildup. However, the source of this unsteadiness is not well understood. Now, IST Austria professor Björn Hof, together with an international team of researchers, has shown that pulsating blood flows, such as those from our heart, react strongly to geometric irregularities in vessels (such as plaque buildup) and cause much higher levels of velocity fluctuations than previously expected. The research could have implications for how we study blood flow-related diseases in the future.

"In this project, we wanted to explore if insights we recently gained regarding the origin of turbulence in pipe flow can shed light on instabilities in pulsatile flows and to cardiovascular flow in blood vessels," says Hof. "Our results indicate that a previously unknown mechanism may cause turbulence in pulsating flows within the human body at lower flow velocities than previously thought."

Why is turbulent blood flow hazardous to health?

The inner wall of a blood vessel, the endothelium, is very sensitive to a force known as 'shear stress' which, in this case, refers to the friction created by blood flow on the inside of a blood vessel. Normally, the cells within the endothelium are adapted to relatively steady flow rates in one direction. However, if turbulence arises in the vessel (e.g., due to a geometric irregularity), the flow becomes multi-directional and results in changing shear stress forces on the endothelium. Such stress fluctuations can trigger cellular dysfunction, inflammation of the endothelium and, in the long term, arteriosclerosis.

Modeling turbulence in blood flow

The team has shown, both experimentally and theoretically, that blood vessels with geometric irregularities are likely to cause more turbulence than previously thought. In their experiments, which were conducted at IST Austria, team member Dr. Atul Varshney demonstrated that when pulsating blood flow slowed down (e.g., between heartbeats), turbulence arose, especially in areas with geometric irregularities. Once the flow accelerated again, as with the beat of a heart, it became smooth and turbulence-free (otherwise known as laminar flow). This means that if a blood vessel is not ideally shaped or has geometric irregularities, more turbulent flow is likely to occur with each pulse cycle or heartbeat. The research could have important ramifications for how the medical community models blood flow, especially in large blood vessels such as the aorta.

Hof concludes: "It is astonishing that this instability has been overlooked in earlier studies. We suspect, also because of the complex composition of blood, that there may be other mechanisms that can cause turbulence in cardiovascular flow at even lower speeds. Like in the present study, also our future work will aim to identify fundamental mechanisms that are relevant to other areas such as medicine."

Credit: 
Institute of Science and Technology Austria

Research suggests new therapeutic target for kidney diseases

(Boston)--Researchers have published a new study that suggests a signaling pathway called ROBO2 is a therapeutic target for kidney diseases, specifically kidney podocyte injury and glomerular diseases.

Kidney podocytes are special octopus-like cells that are critical in maintaining the kidney glomerular filtering system and normal kidney function. This is the first time the ROBO2 pathway has been linked to glomerular diseases such as membranous nephropathy (affecting the filters) and focal segmental glomerulosclerosis (scarring in the kidney).

Chronic kidney disease affects an estimated 37 million people in the United States and more than 850 million people worldwide, causing substantial morbidity and mortality. A significant proportion of patients with chronic kidney disease will eventually develop kidney failure and need dialysis or a kidney transplant to prolong their life.

Researchers at Boston University School of Medicine (BUSM) analyzed two induced kidney podocyte injury experimental models and found that those models without the ROBO2 gene were protected from kidney injury, while those with the ROBO2 gene developed severe kidney damage after kidney injury. Using cell culture analysis, they also found that higher ROBO2 protein levels resulted in reduced podocyte adhesion.

"As ROBO2 podocyte expression is well conserved among different mammalian species, our research suggests that ROBO2 is a novel drug target for glomerular diseases such as membranous nephropathy and focal segmental glomerulosclerosis, which is one of the most common causes of kidney failure in patients with no cure or treatment currently available," said corresponding author, Weining Lu, MD, associate professor of medicine and pathology & laboratory medicine at BUSM.

In collaboration with Pfizer, Lu's research has led to a compound targeting the ROBO2 pathway, which is currently being tested in phase 2 clinical trials for chronic kidney disease. "The study may ultimately lead to new treatments for patients so they can live to a normal life expectancy on their own kidneys and avoid dialysis or kidney transplantation," Lu said.

Credit: 
Boston University School of Medicine

Study highlights gallium oxide's promise for next generation radiation detectors

New research from North Carolina State University finds that radiation detectors making use of single-crystal gallium oxide allow for monitoring X-ray radiation in near-real time.

"We found that the gallium oxide radiation detector worked very fast, which could offer benefits to many applications such as medical imaging," says Ge Yang, an assistant professor of nuclear engineering at NC State and corresponding author of a paper on the work. "This is particularly exciting because recent research tells us that gallium oxide has excellent radiation hardness - meaning it will keep doing its job even when exposed to high amounts of radiation.

"In short, we think this material is faster than many existing materials used in X-ray detection - and able to withstand higher levels of radiation."

For this study, the researchers made a radiation detector that incorporated a single-crystal sample of gallium oxide with electrodes attached on either side. The researchers applied different bias voltages across the gallium oxide while exposing the material to X-ray radiation.

The researchers found that there was a linear increase in current passing out of the gallium oxide relative to the level of X-ray exposure. In other words, the higher the level of X-ray radiation exposure, the higher the increase in current from the gallium oxide.
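A linearity claim of this kind is typically checked with a straight-line fit of detector current against dose rate. The sketch below shows one way to do that; the arrays are hypothetical placeholders for illustration, not measurements from the NC State study.

```python
# Illustrative linearity check for a detector's dose-response curve.
# The arrays are hypothetical placeholders, not data from the study.
import numpy as np

dose_rate = np.array([0.5, 1.0, 2.0, 4.0, 8.0])      # X-ray dose rate (arbitrary units)
photocurrent = np.array([0.9, 2.1, 4.0, 8.2, 15.9])  # detector current (arbitrary units)

slope, intercept = np.polyfit(dose_rate, photocurrent, 1)
r = np.corrcoef(dose_rate, photocurrent)[0, 1]
print(f"sensitivity (slope) = {slope:.2f}, linearity r^2 = {r ** 2:.4f}")
```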

"This linear relationship, coupled with the fast response time and radiation hardness, make this a very exciting material for use in radiation detector technologies," Yang says. "These could be used in conjunction with medical imaging technologies, or in security applications like those found at airports."

Credit: 
North Carolina State University

Solar and wind energy sites mapped globally for the first time

image: This is a wind farm in Caithness, Scotland.

Image: 
Seb Dunnett

Researchers at the University of Southampton have mapped the global locations of major renewable energy sites, providing a valuable resource to help assess their potential environmental impact.

Their study, published in the Nature journal Scientific Data, shows where solar and wind farms are based around the world - demonstrating both their infrastructure density in different regions and approximate power output. It is the first ever global, open-access dataset of wind and solar power generating sites.

The estimated share of renewable energy in global electricity generation was more than 26 per cent by the end of 2018 and solar panels and wind turbines are by far the biggest drivers of a rapid increase in renewables. Despite this, until now, little has been known about the geographic spread of wind and solar farms and very little accessible data exists.

Lead researcher and Southampton PhD student Sebastian Dunnett explains: "While global land planners are promising more of the planet's limited space to wind and solar energy, governments are struggling to maintain geospatial information on the rapid expansion of renewables. Most existing studies use land suitability and socioeconomic data to estimate the geographical spread of such technologies, but we hope our study will provide more robust publicly available data."

While bringing many environmental benefits, solar and wind energy can also have an adverse effect locally on ecology and wildlife. The researchers hope that by accurately mapping the development of farms they can provide an insight into the footprint of renewable energy on vulnerable ecosystems and help planners assess such effects.

The study authors used data from OpenStreetMap (OSM), an open-access, collaborative global mapping project. They extracted grouped data records tagged 'solar' or 'wind' and then cross-referenced these with select national datasets in order to get a best estimate of power capacity and create their own maps of solar and wind energy sites. The data show Europe, North America and East Asia's dominance of the renewable energy sector, and results correlate extremely well with official independent statistics of the renewable energy capacity of countries.
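The paper's full extraction and cross-referencing pipeline isn't reproduced here, but the first step, pulling OSM elements tagged as solar or wind power plants through the public Overpass API, might look roughly like the following. The tag choices and the bounding box are assumptions for the sketch, not necessarily those used in the study.

```python
# Minimal sketch of harvesting solar/wind power-plant features from
# OpenStreetMap via the Overpass API. Tag choices and the bounding box are
# illustrative; the published dataset used its own extraction and
# cross-referencing against national statistics.
import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"
bbox = "50.0,-6.0,59.0,2.0"  # rough UK bounding box (south,west,north,east)

query = f"""
[out:json][timeout:120];
(
  way["power"="plant"]["plant:source"="solar"]({bbox});
  way["power"="plant"]["plant:source"="wind"]({bbox});
  node["power"="generator"]["generator:source"="wind"]({bbox});
);
out center;
"""

response = requests.post(OVERPASS_URL, data={"data": query}, timeout=180)
elements = response.json()["elements"]
print(f"{len(elements)} solar/wind features returned")
```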

Study supervisor Professor Felix Eigenbrod of Geography and Environmental Science at the University of Southampton comments: "This study represents a real milestone in our understanding of where the global green energy revolution is occurring. It should be an invaluable resource for researchers for years to come, as we have designed it so it can be updated with the latest information at any point to allow for changes in what is a quickly expanding industry."

Credit: 
University of Southampton

Water-splitting module a source of perpetual energy

image: A schematic and electron microscope cross-section show the structure of an integrated, solar-powered catalyst to split water into hydrogen fuel and oxygen. The module developed at Rice University can be immersed into water directly to produce fuel when exposed to sunlight.

Image: 
Illustration by Jia Liang/Rice University

HOUSTON - (May 4, 2020) - Rice University researchers have created an efficient, low-cost device that splits water to produce hydrogen fuel.

The platform developed by the Brown School of Engineering lab of Rice materials scientist Jun Lou integrates catalytic electrodes and perovskite solar cells that, when triggered by sunlight, produce electricity. The current flows to the catalysts that turn water into hydrogen and oxygen, with a sunlight-to-hydrogen efficiency as high as 6.7%.
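For context, solar-to-hydrogen (STH) efficiency is conventionally computed from the operating current density and the 1.23 V thermodynamic potential of water splitting. The quick sketch below shows how a figure like 6.7% arises; the current density used is an illustrative value chosen to land near that number, not a measurement reported in the paper.

```python
# Conventional solar-to-hydrogen (STH) efficiency estimate:
#   STH = J_op [mA/cm^2] * 1.23 V * faradaic_efficiency / P_in [mW/cm^2]
# The operating current density below is illustrative, not a reported value.
def sth_efficiency(j_op_ma_cm2, p_in_mw_cm2=100.0, faradaic=1.0):
    return j_op_ma_cm2 * 1.23 * faradaic / p_in_mw_cm2

print(f"STH = {sth_efficiency(5.45):.1%}")  # ~6.7% under 1-sun illumination (100 mW/cm^2)
```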

This sort of catalysis isn't new, but the lab packaged a perovskite layer and the electrodes into a single module that, when dropped into water and placed in sunlight, produces hydrogen with no further input.

The platform introduced by Lou, lead author and Rice postdoctoral fellow Jia Liang and their colleagues in the American Chemical Society journal ACS Nano is a self-sustaining producer of fuel that, they say, should be simple to produce in bulk.

"The concept is broadly similar to an artificial leaf," Lou said. "What we have is an integrated module that turns sunlight into electricity that drives an electrochemical reaction. It utilizes water and sunlight to get chemical fuels."

Perovskites are crystals with cubelike lattices that are known to harvest light. The most efficient perovskite solar cells produced so far achieve an efficiency above 25%, but the materials are expensive and tend to be stressed by light, humidity and heat.

"Jia has replaced the more expensive components, like platinum, in perovskite solar cells with alternatives like carbon," Lou said. "That lowers the entry barrier for commercial adoption. Integrated devices like this are promising because they create a system that is sustainable. This does not require any external power to keep the module running."

Liang said the key component may not be the perovskite but the polymer that encapsulates it, protecting the module and allowing it to be immersed for long periods. "Others have developed catalytic systems that connect the solar cell outside the water to immersed electrodes with a wire," he said. "We simplify the system by encapsulating the perovskite layer with a Surlyn (polymer) film."

The patterned film allows sunlight to reach the solar cell while protecting it and serves as an insulator between the cells and the electrodes, Liang said.

"With a clever system design, you can potentially make a self-sustaining loop," Lou said. "Even when there's no sunlight, you can use stored energy in the form of chemical fuel. You can put the hydrogen and oxygen products in separate tanks and incorporate another module like a fuel cell to turn those fuels back into electricity."

The researchers said they will continue to improve the encapsulation technique as well as the solar cells themselves to raise the efficiency of the modules.

Credit: 
Rice University

Imaging technology allows visualization of nanoscale structures inside whole cells

image: This image shows a 3D super-resolution reconstruction of dendrites in primary visual cortex. Purdue University innovators created an imaging tool that allows visualization of nanoscale structures inside whole cells and tissues.

Image: 
Fang Huang/Purdue University

WEST LAFAYETTE, Ind. - Since Robert Hooke's first description of a cell in Micrographia 350 years ago, microscopy has played an important role in understanding the rules of life.

However, the smallest resolvable feature, the resolution, is restricted by the wave nature of light. This century-old barrier has restricted understanding of cellular functions, interactions and dynamics, particularly at the sub-micron to nanometer scale.

Super-resolution fluorescence microscopy overcomes this fundamental limit, offering up to tenfold improvement in resolution, and allows scientists to visualize the inner workings of cells and biomolecules at unprecedented spatial resolution.

Such resolving capability is impeded, however, when observing inside whole-cell or tissue specimens, such as those often analyzed in studies of cancer or the brain. Light signals emitted from molecules inside a specimen travel through different parts of the cell or tissue at different speeds, resulting in aberrations that degrade the image.

Now, Purdue University researchers have developed a new technology to overcome this challenge.

"Our technology allows us to measure wavefront distortions induced by the specimen, either a cell or a tissue, directly from the signals generated by single molecules - tiny light sources attached to the cellular structures of interest," said Fang Huang, an assistant professor of biomedical engineering in Purdue's College of Engineering. "By knowing the distortion induced, we can pinpoint the positions of individual molecules at high precision and accuracy. We obtain thousands to millions of coordinates of individual molecules within a cell or tissue volume and use these coordinates to reveal the nanoscale architectures of specimen constituents."

The Purdue team's technology was recently published in Nature Methods. A video showing an animated 3D super-resolution reconstruction is available at https://youtu.be/c9j621vUFBM.

"During three-dimensional super-resolution imaging, we record thousands to millions of emission patterns of single fluorescent molecules," said Fan Xu, a postdoctoral associate in Huang's lab and a co-first author of the publication. "These emission patterns can be regarded as random observations at various axial positions sampled from the underlying 3D point-spread function describing the shapes of these emission patterns at different depths, which we aim to retrieve. Our technology uses two steps: assignment and update, to iteratively retrieve the wavefront distortion and the 3D responses from the recorded single molecule dataset containing emission patterns of molecules at arbitrary locations."

The Purdue technology makes it possible to pinpoint the positions of biomolecules with a precision down to a few nanometers inside whole cells and tissues, and therefore to resolve cellular and tissue architectures with high resolution and fidelity.

"This advancement expands the routine applicability of super-resolution microscopy from selected cellular targets near coverslips to intra- and extra-cellular targets deep inside tissues," said Donghan Ma, a postdoctoral researcher in Huang's lab and a co-first author of the publication. "This newfound capacity of visualization could allow for better understanding for neurodegenerative diseases such as Alzheimer's, and many other diseases affecting the brain and various parts inside the body."

The National Institutes of Health provided major support for the research.

Other members of the research team include Gary Landreth, a professor from Indiana University's School of Medicine; Sarah Calve, an associate professor of biomedical engineering in Purdue's College of Engineering (currently an associate professor of mechanical engineering at the University of Colorado Boulder); Peng Yin, a professor from Harvard Medical School; and Alexander Chubykin, an assistant professor of biological sciences at Purdue. The complete list of authors can be found in Nature Methods.

"This technical advancement is startling and will fundamentally change the precision with which we evaluate the pathological features of Alzheimer's disease," Landreth said. "We are able to see smaller and smaller objects and their interactions with each other, which helps reveal structure complexities we have not appreciated before."

Calve said the technology is a step forward in regenerative therapies to help promote repair within the body.

"This development is critical for understanding tissue biology and being able to visualize structural changes," Calve said.

Chubykin, whose lab focuses on autism and diseases affecting the brain, said the high-resolution imaging technology provides a new method for understanding impairments in the brain.

"This is a tremendous breakthrough in terms of functional and structural analyses," Chubykin said. "We can see a much more detailed view of the brain and even mark specific neurons with genetic tools for further study."

The team worked with the Purdue Research Foundation Office of Technology Commercialization to patent the technology. The office recently moved into the Convergence Center for Innovation and Collaboration in Discovery Park District, adjacent to the Purdue campus.

Credit: 
Purdue University

How many jobs do robots really replace?

In many parts of the U.S., robots have been replacing workers over the last few decades. But to what extent, really? Some technologists have forecast that automation will lead to a future without work, while other observers have been more skeptical about such scenarios.

Now a study co-authored by an MIT professor puts firm numbers on the trend, finding a very real impact -- although one that falls well short of a robot takeover. The study also finds that in the U.S., the impact of robots varies widely by industry and region, and may play a notable role in exacerbating income inequality.

"We find fairly major negative employment effects," MIT economist Daron Acemoglu says, although he notes that the impact of the trend can be overstated.

From 1990 to 2007, the study shows, adding one additional robot per 1,000 workers reduced the national employment-to-population ratio by about 0.2 percentage points, with some areas of the U.S. affected far more than others.

This means each additional robot added in manufacturing replaced about 3.3 workers nationally, on average.
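The step from the 0.2-percentage-point estimate to roughly 3.3 jobs per robot takes one extra ingredient, the employment-to-population ratio; the back-of-envelope below assumes a round 60% ratio, which is not a figure taken from the paper.

```python
# Back-of-envelope link between the two headline numbers above. The 0.2
# percentage-point effect per robot per 1,000 workers is from the study; the
# ~60% employment-to-population ratio is an assumed round figure.
effect_pp = 0.2      # decline in employment-to-population ratio (percentage points)
emp_to_pop = 0.60    # assumed employment-to-population ratio

# One robot per 1,000 workers -> employment falls by 0.2% of the population,
# so jobs lost per robot = (0.002 * population) / (employment / 1,000).
jobs_per_robot = (effect_pp / 100) / (emp_to_pop / 1000)
print(f"~{jobs_per_robot:.1f} jobs displaced per additional robot")  # ~3.3
```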

That increased use of robots in the workplace also lowered wages by roughly 0.4 percent during the same time period.

"We find negative wage effects, that workers are losing in terms of real wages in more affected areas, because robots are pretty good at competing against them," Acemoglu says.

The paper, "Robots and Jobs: Evidence from U.S. Labor Markets," appears in advance online form in the Journal of Political Economy. The authors are Acemoglu and Pascual Restrepo PhD '16, an assistant professor of economics at Boston University.

Displaced in Detroit

To conduct the study, Acemoglu and Restrepo used data on 19 industries, compiled by the International Federation of Robotics (IFR), a Frankfurt-based industry group that keeps detailed statistics on robot deployments worldwide. The scholars combined that with U.S.-based data on population, employment, business, and wages, from the U.S. Census Bureau, the Bureau of Economic Analysis, and the Bureau of Labor Statistics, among other sources.

The researchers also compared robot deployment in the U.S. to that of other countries, finding it lags behind that of Europe. From 1993 to 2007, U.S. firms actually did introduce almost exactly one new robot per 1,000 workers; in Europe, firms introduced 1.6 new robots per 1,000 workers.

"Even though the U.S. is a technologically very advanced economy, in terms of industrial robots' production and usage and innovation, it's behind many other advanced economies," Acemoglu says.

In the U.S., four manufacturing industries account for 70 percent of robots: automakers (38 percent of robots in use), electronics (15 percent), the plastics and chemical industry (10 percent), and metals manufacturers (7 percent).

Across the U.S., the study analyzed the impact of robots in 722 commuting zones in the continental U.S. -- essentially metropolitan areas -- and found considerable geographic variation in how intensively robots are utilized.

Given industry trends in robot deployment, the area of the country most affected is the seat of the automobile industry. Michigan has the highest concentration of robots in the workplace, with employment in Detroit, Lansing, and Saginaw affected more than anywhere else in the country.

"Different industries have different footprints in different places in the U.S.," Acemoglu observes. "The place where the robot issue is most apparent is Detroit. Whatever happens to automobile manufacturing has a much greater impact on the Detroit area [than elsewhere]."

In commuting zones where robots were added to the workforce, each robot replaces about 6.6 jobs locally, the researchers found. However, in a subtle twist, adding robots in manufacturing benefits people in other industries and other areas of the country -- by lowering the cost of goods, among other things. These national economic benefits are the reason the researchers calculated that adding one robot replaces 3.3 jobs for the country as a whole.

The inequality issue

In conducting the study, Acemoglu and Restrepo went to considerable lengths to see if the employment trends in robot-heavy areas might have been caused by other factors, such as trade policy, but they found no complicating empirical effects.

The study does suggest, however, that robots have a direct influence on income inequality. The manufacturing jobs they replace come from parts of the workforce without many other good employment options; as a result, there is a direct connection between automation in robot-using industries and sagging incomes among blue-collar workers.

"There are major distributional implications," Acemoglu says. When robots are added to manufacturing plants, "The burden falls on the low-skill and especially middle-skill workers. That's really an important part of our overall research [on robots], that automation actually is a much bigger part of the technological factors that have contributed to rising inequality over the last 30 years."

So while claims about machines wiping out human work entirely may be overstated, the research by Acemoglu and Restrepo shows that the robot effect is a very real one in manufacturing, with significant social implications.

"It certainly won't give any support to those who think robots are going to take all of our jobs," Acemoglu says. "But it does imply that automation is a real force to be grappled with."

Credit: 
Massachusetts Institute of Technology

Study reveals single-step strategy for recycling used nuclear fuel

image: One-step chemical reaction prescribed in the study leads to the formation of crystals containing uranium (yellow-filled circles) and small quantities of other leftover fuel elements (green-filled circles).

Image: 
Texas A&M University College of Engineering

A typical nuclear reactor uses only a small fraction of its fuel rod to produce power before the energy-generating reaction naturally terminates. What is left behind is an assortment of radioactive elements, including unused fuel, that are disposed of as nuclear waste in the United States. Although certain elements recycled from waste can be used for powering newer generations of nuclear reactors, extracting leftover fuel in a way that prevents possible misuse is an ongoing challenge.

Now, Texas A&M University engineering researchers have devised a simple, proliferation-resistant approach for separating out different components of nuclear waste. The one-step chemical reaction, described in the February issue of the journal Industrial & Engineering Chemistry Research, results in the formation of crystals containing all of the leftover nuclear fuel elements distributed uniformly.

The researchers also noted that the simplicity of their recycling approach makes the translation from lab bench to industry feasible.

"Our recycling strategy can be easily integrated into a chemical flow sheet for industrial-scale implementation," said Jonathan Burns, research scientist in the Texas A&M Engineering Experiment Station's Nuclear Engineering and Science Center. "In other words, the reaction can be repeated multiple times to maximize fuel recovery yield and further reduce radioactive nuclear waste."

The basis of energy production in nuclear reactors is nuclear fission. In this reaction, a heavy nucleus, usually uranium, when hit by subatomic particles called neutrons, becomes unstable and tears apart into smaller, lighter elements. However, uranium can absorb neutrons and get progressively heavier to form elements like neptunium, plutonium and americium, before once again splitting and releasing energy.

Over time, these fission reactions lead to a buildup of lighter elements in the nuclear reactor. But roughly half of these fission products are deemed neutron poisons -- they also absorb neutrons just like used nuclear fuel, leaving fewer for the fission reaction, eventually bringing the energy production to a halt.

Hence, used fuel rods contain fission products, leftover uranium and small quantities of plutonium, neptunium and americium. Currently, these items are collectively considered nuclear waste in the United States and are destined to be stowed away in underground repositories because of their high radioactivity.

"Nuclear waste is a two-pronged problem," Burns said. "First, almost 95% of the starting material of the fuel is left unused, and second, the waste we produce contains long-lived, radioactive elements. Neptunium and americium, for example, can persist and radiate for up to hundreds of thousands of years."

Scientists have had some success with separating uranium, plutonium and neptunium. However, these methods have been very complex and have had limited success at separating americium. Furthermore, Burns said that the United States Department of Energy requires the recycling strategy to be proliferation-resistant, meaning that plutonium, which can be used in weapons, must never be separated from other nuclear fuel elements during the recycling process.

To address the unmet needs of nuclear waste recycling, the researchers investigated if there was a simple chemical reaction that could separate all the desirable used nuclear fuel chemical elements together.

From earlier studies, the researchers knew that at room temperature, uranium forms crystals in strong nitric acid. Within these crystals, uranium atoms are arranged in a unique profile -- a central uranium atom is sandwiched between two oxygen atoms on either side by sharing six electrons with each oxygen atom.

"We immediately realized that this crystal structure could be a way to separate out plutonium, neptunium and americium since all of these heavy elements belong to the same family as uranium," Burns said.

The researchers hypothesized that if plutonium, neptunium and americium assumed a similar bonding structure with oxygen as uranium, then these elements would integrate themselves into the uranium crystal.

For their experiments, they prepared a surrogate solution of uranium, plutonium, neptunium and americium in highly concentrated nitric acid at 60-90 degrees Celsius to mimic the dissolution of a real fuel rod in the strong acid. As predicted, they found that when the solution cooled to room temperature, uranium, neptunium, plutonium and americium separated from the solution together, distributing themselves uniformly within the crystals.

Burns noted that this simplified, single-step process is also proliferation-resistant since plutonium is not isolated but incorporated within the uranium crystals.

"The idea is that the reprocessed fuel generated from our prescribed chemical reaction can be used in future generations of reactors, which would not only burn uranium like most present-day reactors but also other heavy elements such as neptunium, plutonium and americium," Burns said. "In addition to addressing the fuel recycling problem and reducing proliferation risk, our strategy will drastically reduce nuclear waste to just the fission products whose radioactivity is hundreds rather than hundreds of thousands of years."

Credit: 
Texas A&M University

Engineers demonstrate next-generation solar cells can take the heat, maintain efficiency

image: Iowa State engineers fabricated this proof-of-concept perovskite solar cell in their research lab.

Image: 
Photo courtesy of Harshavardhan Gaonkar

AMES, Iowa - Perovskites, with their crystal structures and promising electro-optical properties, could be the active ingredient that makes the next generation of low-cost, efficient, lightweight and flexible solar cells.

A problem with the current generation of silicon solar cells is their relatively low efficiency at converting solar energy into electricity, said Vikram Dalal, an Iowa State University Anson Marston Distinguished Professor in Engineering, the Thomas M. Whitney Professor in Electrical and Computer Engineering and the director of Iowa State's Microelectronics Research Center.

The best silicon solar cells in the laboratory are about 26% efficient while commercial cells are about 15%. That means bigger systems are necessary to produce a given amount of electricity, and bigger systems mean higher costs.
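The cost argument is essentially geometric: for a fixed target output, the module area required scales inversely with efficiency. The quick sketch below makes the point; the 10 kW target is arbitrary, and 1,000 W/m² is the standard test-condition irradiance.

```python
# Why lower efficiency means bigger (and costlier) systems: the area needed to
# reach a target peak power scales as 1/efficiency.
IRRADIANCE_W_M2 = 1000.0  # standard test-condition irradiance
TARGET_KW = 10.0          # arbitrary target peak output

for label, eff in [("lab silicon", 0.26), ("commercial silicon", 0.15)]:
    area_m2 = TARGET_KW * 1000 / (IRRADIANCE_W_M2 * eff)
    print(f"{label} ({eff:.0%}): ~{area_m2:.0f} m^2 for {TARGET_KW:.0f} kW peak")
# commercial cells need roughly 70% more area than the best lab cells
```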

That has researchers looking for new ways to raise efficiency and decrease costs. One idea that could boost efficiency by as much as 50% is a tandem structure that stacks two kinds of cells on top of each other, each using different, complementary parts of the solar spectrum to produce power.

Perovskite promise, problems

Researchers have recently started looking at hybrid organic-inorganic perovskite materials as a good tandem partner for silicon cells. Perovskite cells have efficiency rates nearing 25%, have a complementary bandgap, can be very thin (just a millionth of a meter), and can easily be deposited on silicon.

But Dalal said researchers have learned those hybrid perovskite solar cells break down when exposed to high temperatures.

That's a problem when you try to put solar arrays where the sunshine is - hot, dry deserts in places such as the American Southwest, Australia, the Middle East and India. Ambient temperatures in such places can hit 120 to 130 degrees Fahrenheit, and solar cell temperatures can reach 200 degrees Fahrenheit.

Iowa State University engineers, in a project partially supported by the National Science Foundation, have found a way to take advantage of perovskite's useful properties while stabilizing the cells at high temperatures. They describe their discovery in a paper recently published online by the scientific journal American Chemical Society Applied Energy Materials.

"These are promising results in pursuit of the commercialization of perovskite solar cell materials and a cleaner, greener future," said Harshavardhan Gaonkar, the paper's first author who recently earned his doctorate in electrical and computer engineering from Iowa State and is now working in Boise, Idaho, as an engineer for ON Semiconductor.

Tweaking the material

Dalal, the corresponding author of the paper, said there are two key developments in the new solar cell technology:

First, he said the engineers made some tweaks to the makeup of the perovskite material.

They did away with organic components in the material - particularly the organic cations, positively charged ions - and substituted inorganic materials such as cesium. That made the material stable at higher temperatures.

And second, they developed a fabrication technique that builds the perovskite material one thin layer - just a few billionths of a meter - at a time. This vapor deposition technique is consistent, leaves no contaminants, and is already used in other industries so it can be scaled up for commercial production.

The result of those changes?

"Our perovskite solar cells show no thermal degradation even at 200 degrees Celsius (390 degrees Fahrenheit) for over three days, temperatures far more than what the solar cell would have to endure in real-world environments," Gaonkar said.

And then Dalal did a little comparing and contrasting: "That's far better than the organic-inorganic perovskite cells, which would have decomposed totally at this temperature. So this is a major advance in the field."

Raising performance

The paper reports the new inorganic perovskite solar cells have a photoconversion efficiency of 11.8%. That means there's more work ahead for the engineers.

"We are now trying to optimize this cell - we want to make it more efficient at converting solar energy into electricity," Dalal said. "We still have a lot of research to do, but we think we can get there by using new combinations of materials."

The engineers, for example, replaced the iodine common in perovskite materials with bromine. That made the cells much less sensitive to moisture, solving another problem with standard hybrid perovskites. But, that substitution changed the cells' properties, reducing efficiency and how well they work in tandem with silicon cells.

And so the tweaks and trials will continue.

As they move ahead, the engineers believe they're on a proven path: "This study demonstrates a more robust thermal stability of inorganic perovskite materials and solar cells at higher temperatures and over extended periods of time than reported elsewhere," they wrote in their paper. "(These are) promising results in pursuit of the commercialization of perovskite solar cell materials."

Credit: 
Iowa State University