Tech

Health policy researchers propose filling health care coverage gap to help 'near poor'

image: Eric T. Roberts, Ph.D., assistant professor, Department of Health Policy and Management, University of Pittsburgh Graduate School of Public Health.

Image: 
University of Pittsburgh

PITTSBURGH, April 5, 2021 - "Near-poor" Americans--people just above the federal poverty level but still well below the average U.S. income--who rely on Medicare for health insurance face high medical bills and may forgo essential health care, according to new research led by health policy scientists at the University of Pittsburgh Graduate School of Public Health. This is due to a coverage "cliff" in Medicaid, which supplements Medicare for people with incomes below poverty but excludes individuals above the federal poverty threshold, including the near-poor.

In a report published today in the April issue of the journal Health Affairs, the authors describe the effects of this cliff and propose solutions to fix it, with the aim of lessening barriers to care among near-poor people with Medicare.

"Medicaid provides vital assistance to low-income people with Medicare by covering Medicare's high out-of-pocket costs and filling in gaps in Medicare coverage. However, Medicaid's eligibility rules for low-income, older Americans have changed little in 30 years, exclude people barely above poverty, and make it difficult for those below poverty to enroll. As a result, many older Americans who live on modest incomes have difficulty affording care," said lead author Eric T. Roberts, Ph.D., assistant professor in Pitt Public Health's Department of Health Policy and Management. "We're overdue for modernization of the Medicaid program for older adults. The solutions we propose incorporate consumer responsibility while creating substantial improvements in health insurance coverage and access to care."

The near-poor are those whose incomes are between 100% and 200% of the federal poverty level, or $12,880 to $25,760 for a single person in 2021. About 30% of the Medicare population--generally, people who are age 65 or older, and younger people with disabilities--are near-poor. Medicare provides health insurance, but there are still out-of-pocket costs, such as deductibles, copays and premiums. Medicaid--which provides health coverage for people with low income--can serve as a supplemental insurance for Medicare recipients who qualify, covering these out-of-pocket costs.

But near-poor people receiving Medicare generally don't qualify for Medicaid, meaning that they have to purchase alternative supplemental insurance or pay Medicare's costs out-of-pocket. Recent estimates suggest that 40% of near-poor Medicare beneficiaries spend at least one-fifth of their income on health care costs.

Roberts and his team analyzed a diverse sample of 4,602 Medicare beneficiaries with incomes below 200% of the federal poverty line, drawing on multiple years of data between 2008 and 2016.

They found that 73.3% of Medicare beneficiaries whose incomes were just below Medicaid's eligibility threshold had supplemental health insurance coverage from Medicaid or another source, whereas only 47.5% of the near-poor had such supplemental coverage. The researchers define this 25.8 percentage point difference as "the coverage cliff."

Near-poor beneficiaries affected by this coverage cliff incurred $2,288 in additional out-of-pocket health care spending over two years and were 33.1% more likely to spend more than one month's income on health care costs than their counterparts below the poverty limit.
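To make those two statistics concrete, here is a minimal Python sketch of how each is computed; the beneficiary income figure is hypothetical, chosen only to illustrate the definitions above.

    # The "coverage cliff" is the gap in supplemental coverage rates across
    # Medicaid's eligibility threshold (rates from the study cited above).
    below_threshold_covered = 0.733   # coverage rate just below the threshold
    near_poor_covered = 0.475         # coverage rate among the near-poor
    cliff_pp = (below_threshold_covered - near_poor_covered) * 100
    print(f"Coverage cliff: {cliff_pp:.1f} percentage points")   # 25.8

    # Flagging a beneficiary who spends more than one month's income on care.
    annual_income = 18_000   # hypothetical income within the near-poor range
    oop_spending = 2_288     # the study's extra two-year out-of-pocket figure
    print("More than one month's income:", oop_spending > annual_income / 12)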

The team then looked at measures of health care utilization and found that the near-poor used 55% fewer outpatient and preventive health services and filled fewer prescriptions per year, including fewer chronic disease medications.

"These are the kinds of medications and doctor's appointments that help people manage their health conditions and avoid costly hospital care," Roberts said. "But, even more important, prior research has shown that these medications and doctor's appointments can save lives."

Roberts and his colleagues suggest several solutions to mitigate the Medicaid coverage cliff, namely:

Expand Medicaid supplemental coverage to Medicare beneficiaries with incomes up to 150%, and preferably up to 200%, of the federal poverty limit.

Offer this coverage on a sliding scale so recipients pay no more than a fixed proportion of their income on Medicare costs.

Simplify Medicaid's application process for seniors, which currently is far more complex than the application process for children and nonelderly adults.

Expand opportunities for qualifying Medicare beneficiaries to enroll in the Medicare Part D "Low-Income Subsidy" to reduce prescription drug costs.

"As the U.S. population ages, analysts forecast a 40% growth in the Medicare population, and that over one-third of Medicare beneficiaries will have low to moderate incomes, making it ever more important to modernize Medicaid now," Roberts said.

Credit: 
University of Pittsburgh

CRISPR-SNP-chip enables amplification-free electronic detection of single point mutations

video: Keck Graduate Institute (KGI) Assistant Professor and University of California, Berkeley Visiting Scientist Dr. Kiana Aran demonstrates CRISPR-SNP-Chip, which enables amplification-free detection of single point mutations in Sickle Cell Disease and Amyotrophic lateral sclerosis (ALS).

Image: 
Keck Graduate Institute (KGI) and Cardea Bio

CLAREMONT, CA - Keck Graduate Institute (KGI) Assistant Professor and University of California, Berkeley Visiting Scientist Dr. Kiana Aran first introduced the CRISPR-Chip technology in 2019. Now just two years later, she has expanded on its application to develop CRISPR-SNP-Chip, which enables detection of single point mutations without amplification in Sickle Cell Disease and Amyotrophic lateral sclerosis (ALS).

"The field of CRISPR-based diagnostics is rapidly evolving due to CRISPR programmability and ease of use," Aran says. "However, the majority of CRISPR-based diagnostics platforms are still relying on target amplifications or optical detections. The reprogrammability of CRISPR combined with optics-free highly scalable graphene transistors will allow us to bring the diagnostics power of the CRISPR to its full potential.

"The ability to detect single nucleotide polymorphisms (SNPs) is at the core of human health genetics but detection of SNPs is also very important in pharmacology, and agriculture, and is a driving force in evolutionary change such as mutations conferring resistance to antibiotics. Eliminating the need for amplification and optics will make SNP genotyping readily accessible."

Aran led the research team responsible for the work described in the paper "CRISPR-based Transistors for Amplification-free Electronic Detection of Single Point Mutations," to be published in the journal Nature Biomedical Engineering on April 5, 2021 (https://www.nature.com/articles/s41551-021-00706-z). It was a collaborative effort between Cardea Bio, KGI, UC Berkeley, UC Irvine, Vilnius University, and CasZyme.

The SNP-Chip technology is an extension of the previously reported CRISPR-Chip™, a technology capable of detecting large insertions and deletions. It earned a spot on the cover of Nature Biomedical Engineering in June 2019.

Using graphene transistors, the authors employed several versions of Cas enzymes and gRNA designs and monitored the various electrical signals obtained from the transistors to construct a new version of CRISPR-Chip™, which ultimately enabled SNP detection without amplification. The newly developed variant, called SNP-Chip, is another major milestone in reshaping nucleic-acid-based detection methods.

"Merging a diversity of CRISPR-Cas biology with electronics via Cardean Transistors opens up a whole new range of possibilities for diagnostic applications, "said Dr. Virginjus Siksnys, founder and chairman of the CasZyme management board, professor at Vilnius University, Lithuania, and co-author on the paper. "Using the Cas9 orthologue for SNP detection is just the tip of the iceberg."

In this article, the utility of SNP-Chip was validated by testing for SNP mutations in samples from patients with Sickle Cell Disease and ALS. In both clinical models, the platform discriminated healthy from mutated genes within the whole human genome without amplification, and simply swapping the gRNA retargeted the platform to the desired DNA sequences, demonstrating how easily it can be reconfigured for different DNA targets.
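The press release does not detail the readout logic, but conceptually a SNP call from a transistor-based CRISPR sensor reduces to comparing signal shifts for two gRNAs. The sketch below is a purely hypothetical illustration of that idea; the thresholds and current values are invented, not taken from the paper.

    # Hypothetical SNP call: compare the relative current shift produced by a
    # wild-type-targeting gRNA with that of a mutant-targeting gRNA.
    def call_snp(i_baseline, i_wt_grna, i_mut_grna, threshold=0.05):
        wt_shift = abs(i_wt_grna - i_baseline) / i_baseline
        mut_shift = abs(i_mut_grna - i_baseline) / i_baseline
        if max(wt_shift, mut_shift) < threshold:
            return "no call"   # neither gRNA produced a strong binding signal
        return "wild-type" if wt_shift > mut_shift else "mutant"

    print(call_snp(i_baseline=1.00, i_wt_grna=1.02, i_mut_grna=1.18))  # mutant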

SNP-Chip has the potential to greatly impact medical diagnostics and basic research, as it can dramatically reduce the time and cost of SNP genotyping, monitor the efficiency of gRNA designs, and facilitate the quality control process involved in CRISPR-based gene editing.

"SNP-Chip's digital, direct, rapid, and accurate SNP analysis will revolutionize the screening for genetic mutations," said Irina Conboy, PhD, Professor of Bioengineering at UC Berkeley and co-author on the paper. "This new technology will inform the discovery of processes underlying disease and aging and will enable faster, more effective clinical translation."

Amplification-free detection of a target gene with single nucleotide mismatch specificity has the potential to streamline genetic research and diagnostics. Furthermore, it would provide more flexibility for biosensing applications previously confined to a laboratory setting.

Credit: 
Keck Graduate Institute

A new, positive approach could be the key to next-generation, transparent electronics

image: The optical transparency of the new materials could enable futuristic, flexible, transparent electronics.

Image: 
RMIT University

A new study, out this week, could pave the way to revolutionary, transparent electronics.

Such see-through devices could potentially be integrated in glass, in flexible displays and in smart contact lenses, bringing to life futuristic devices that seem like the product of science fiction.

For several decades, researchers have sought a new class of electronics based on semiconducting oxides, whose optical transparency could enable these fully-transparent electronics.

Oxide-based devices could also find use in power electronics and communication technology, reducing the carbon footprint of our utility networks.

An RMIT-led team has now introduced ultrathin beta-tellurite to the two-dimensional (2D) semiconducting material family, providing an answer to this decades-long search for a high-mobility p-type oxide.

"This new, high-mobility p-type oxide fills a crucial gap in the materials spectrum to enable fast, transparent circuits," says team leader Dr Torben Daeneke, who led the collaboration across three FLEET nodes.

Other key advantages of the long-sought-after oxide-based semiconductors are their stability in air, less-stringent purity requirements, low costs and easy deposition.

"In our advance, the missing link was finding the right, 'positive' approach," says Torben.

Positivity has been lacking

There are two types of semiconducting materials. 'N-type' materials have abundant negatively-charged electrons, while 'p-type' semiconductors possess plenty of positively-charged holes.

It's the stacking together of complementary n-type and p-type materials that allows electronic devices such as diodes, rectifiers and logic circuits.

Modern life is critically reliant on these materials since they are the building blocks of every computer and smartphone.

A barrier to oxide devices has been that while many high-performance n-type oxides are known, there is a significant lack of high-quality p-type oxides.

Theory prompts action

However, in 2018 a computational study revealed that beta-tellurite (β-TeO2) could be an attractive p-type oxide candidate: tellurium's peculiar place in the periodic table means it can behave as both a metal and a non-metal, giving its oxide uniquely useful properties.

"This prediction encouraged our group at RMIT University to explore its properties and applications," says Dr Torben Daeneke, who is a FLEET associate investigator.

Liquid metal - pathway to explore 2D materials

Dr Daeneke's team demonstrated the isolation of beta-tellurite with a specifically developed synthesis technique that relies on liquid metal chemistry.

"A molten mixture of tellurium (Te) and selenium (Se) is prepared and allowed to roll over a surface," explains co-first author Patjaree Aukarasereenont.

"Thanks to the oxygen in ambient air, the molten droplet naturally forms a thin surface oxide layer of beta-tellurite. As the liquid droplet is rolled over the surface, this oxide layer sticks to it, depositing atomically thin oxide sheets in its way."

"The process is similar to drawing: you use a glass rod as a pen and the liquid metal is your ink," explains Ms Aukarasereenont, who is a FLEET PhD student at RMIT.

While the desirable β-phase of tellurite grows below 300 °C, pure tellurium melts at a much higher temperature. Selenium was therefore added to create an alloy with a lower melting point, making the synthesis possible.

"The ultrathin sheets we obtained are just 1.5 nanometres thick - corresponding to only few atoms. The material was highly transparent across the visible spectrum, having a bandgap of 3.7 eV which means that they are essentially invisible to the human eye" explains co-author Dr Ali Zavabeti.

Assessing beta-tellurite: up to 100 times faster

To assess the electronic properties of the developed materials, field-effect transistors (FETs) were fabricated.

"These devices showed characteristic p-type switching as well as a high hole mobility (roughly 140 cm2V-1s-1), showing that beta-tellurite is ten to one hundred times faster than existing p-type oxide semiconductors. The excellent on/off ratio (over 106) also attests the material is suitable for power efficient, fast devices" Ms Patjaree Aukarasereenont said.

"The findings close a crucial gap in the electronic material library," Dr Ali Zavabeti said.

"Having a fast, transparent p-type semiconductor at our disposal has the potential to revolutionise transparent electronics, while also enabling better displays and improved energy-efficient devices."

The team plans to further explore the potential of this novel semiconductor. "Our further investigations of this exciting material will explore integration in existing and next-generation consumer electronics," says Dr Torben Daeneke.

Credit: 
ARC Centre of Excellence in Future Low-Energy Electronics Technologies

Lightning strikes will more than double in Arctic as climate warms

Irvine, Calif. -- In 2019, the National Weather Service in Alaska reported spotting the first-known lightning strikes within 300 miles of the North Pole. Lightning strikes are almost unheard of above the Arctic Circle, but scientists led by researchers at the University of California, Irvine have published new research in the journal Nature Climate Change detailing how Arctic lightning strikes stand to increase by about 100 percent over northern lands by the end of the century as the climate continues warming.

"We projected how lightning in high-latitude boreal forests and Arctic tundra regions will change across North America and Eurasia," said Yang Chen, a research scientist in the UCI Department of Earth System Science who led the new work. "The size of the lightning response surprised us because expected changes at mid-latitudes are much smaller."

The finding offers a glimpse into the changes in store for the Arctic as the planet continues warming; it suggests that summertime Arctic weather reports will grow closer to those seen today far to the south, where lightning storms are more common.

James Randerson, a professor in UCI's Department of Earth System Science who co-authored the study, was part of a NASA-led field campaign that studied wildfire occurrence in Alaska during 2015, which was an extreme year for wildfires in the state. "2015 was an exceptional fire year because of a record number of fire starts," Randerson said. "One thing that got us thinking was that lightning was responsible for the record-breaking number of fires."

This led Chen to examine more than 20 years of NASA satellite data on lightning strikes in northern regions and to construct a relationship between the flash rate and climatic factors. Using future climate projections from multiple models used by the United Nations, the team estimated a significant increase in lightning strikes as a result of increased atmospheric convection and more intense thunderstorms.
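The release does not spell out the model, but a common approach in the lightning literature is to fit observed flash rates against a convective proxy (for example, the product of convective available potential energy and precipitation rate) and then apply the fitted relation to projected climate fields. A minimal Python sketch with synthetic data, purely for illustration:

    import numpy as np

    # Synthetic stand-in for satellite-era data: flash rate vs. a convective
    # proxy (e.g., CAPE x precipitation). All numbers are made up.
    rng = np.random.default_rng(0)
    proxy_hist = rng.uniform(0.1, 1.0, 500)                   # historical proxy
    flash_hist = 2.0 * proxy_hist + rng.normal(0, 0.1, 500)   # observed flashes

    # Fit a linear relationship between proxy and flash rate.
    slope, intercept = np.polyfit(proxy_hist, flash_hist, 1)

    # Apply the fitted relation to a hypothetical end-of-century proxy.
    proxy_future = proxy_hist * 1.9
    flash_future = slope * proxy_future + intercept
    increase = flash_future.mean() / flash_hist.mean() - 1
    print(f"Projected mean flash-rate increase: {increase:.0%}")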

A lightning strike bump could open a Pandora's box of related troubles. Fires, Randerson explained, burn away short grasses, mosses, and shrubs that are important components of Arctic tundra ecosystems. Such plants cover much of the landscape, and one thing they do is keep the seeds of trees from taking root in the soil. After a fire burns away low-lying plants, however, seeds from trees can more easily grow on bare soil, allowing forest stands to expand north. Evergreen forests will replace what's typically a snow-covered landscape; snow's white hue reflects sunlight back out into space, but darker forests absorb solar energy, helping warm the region even further.

And there's more trouble: more fires mean more permafrost -- perennially frozen soil that defines much of the Arctic landscape -- will thaw as fires strip away the protective, insulating layers of moss and dead organic matter that keep soils cool. Permafrost stores a lot of organic carbon that, once thawed, converts to the greenhouse gases carbon dioxide and methane, which, when released, will drive even more warming.

The lightning finding comes on the heels of another study, led by Randerson and published in the Journal of Geophysical Research on Monday, April 5, that describes how amplified Arctic warming and the melting of the Greenland ice sheet will scramble food webs in the surrounding oceans.

Now, Chen and Randerson say, scientists need to start paying more attention to the frequency of Arctic lightning strikes so they can gauge how the story unfolds in the coming decades.

"This phenomenon is very sporadic, and it's very difficult to measure accurately over long time periods," said Randerson. "It's so rare to have lightning above the Arctic Circle." Their results, he hopes, will galvanize calls for new satellite missions that can monitor Arctic and boreal latitudes for lightning strikes and the fires they might ignite.

Back in 2019, the National Weather Service in Alaska released a special announcement about the North Pole lightning strikes. Such announcements, however, may struggle to make headlines by the end of the century.

Credit: 
University of California - Irvine

USC study projects significant savings from potential Alzheimer's disease treatment

Alzheimer's disease treatments that slow progression of the disease could significantly reduce the financial burden to U.S. state budgets, according to a new USC study.

The study outlines how states -- which have been hit particularly hard by the COVID-19 pandemic -- would see relief: Medicare would cover the costs of treating the disease, while Medicaid expenditures would be reduced due to fewer patients entering nursing homes.

Assuming a 40% relative reduction of Alzheimer's disease progression rates with treatment, researchers projected two decades of savings beginning in 2021, using a simulation model of state Medicaid programs. They forecast annual savings for Medicaid programs of $7.4 billion in 2030; by 2040, the annual savings would be more than $22 billion.
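The published model is far more detailed, but its core logic resembles a Markov cohort simulation in which treatment scales down stage-to-stage progression rates by 40%. The Python sketch below illustrates that logic with entirely hypothetical transition rates and the simplifying assumption that nursing home use occurs only in the severe stage.

    import numpy as np

    stages = ["MCI", "mild", "moderate", "severe"]
    annual_progression = np.array([0.20, 0.25, 0.30])       # hypothetical rates
    treated_progression = annual_progression * (1 - 0.40)   # 40% relative slowing

    def nursing_home_years(progression, years=20):
        """Track a cohort of 1,000 patients; assume nursing home use occurs
        only in the severe stage (a simplifying, hypothetical assumption)."""
        cohort = np.array([1000.0, 0.0, 0.0, 0.0])
        nh_years = 0.0
        for _ in range(years):
            moved = cohort[:3] * progression    # patients advancing one stage
            cohort[:3] -= moved
            cohort[1:] += moved
            nh_years += cohort[3]               # patient-years in severe stage
        return nh_years

    avoided = (nursing_home_years(annual_progression)
               - nursing_home_years(treated_progression))
    print(f"Avoided nursing-home patient-years per 1,000 patients: {avoided:,.0f}")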

All told, the researchers calculated that disease-modifying treatments would help Medicaid avoid paying $186 billion from 2021-2040 by preventing over one million patient-years of nursing home use. The study was published in Alzheimer's & Dementia: Diagnosis, Assessment and Disease Monitoring.

"With the introduction of an Alzheimer's disease-modifying drug, Medicaid programs would be in a position to reap significant savings from avoided or delayed nursing home admissions because of reduced dementia progression and dependency on care," said lead author Soeren Mattke, a research professor of economics at the Center for Economic and Social Research (CESR) in the USC Dornsife College of Letters, Arts and Sciences.

The model assumes treatment would begin in the mild cognitive impairment or mild dementia disease stages and delay progression to more advanced stages. Delayed progression will not only improve quality of life for patients and caregivers, but will also allow patients to live independently in their homes and communities longer, reducing the number of years they would otherwise spend in nursing homes.

The study projected higher per capita savings for states with an older population, those with higher Medicaid payment rates, those with more nursing home residents covered by Medicaid and those with a lower federal contribution to their Medicaid programs.

States' COVID-19 costs could be counterbalanced by savings from an Alzheimer's drug

The researchers say an important consideration is that states will only realize the projected savings if patients are diagnosed and treated in a timely manner, which is challenging because of the large number of patients and the subtle nature of early-stage cognitive decline.

"How prepared each state's health system will be to handle the large number of patients seeking treatment will influence the magnitude of the savings -- and how fast they will accrue", Mattke said. "The COVID-19 pandemic has taught us the need to plan ahead, and that is exactly what states need to do now."

The study authors documented the devastating effect of the COVID-19 pandemic on U.S. state coffers. Sweeping budget cuts in many states are inevitable; at the same time, Medicaid enrollment is expected to increase.

However, the researchers said, an unexpected source of savings to states may be on the horizon with disease-modifying treatments for Alzheimer's disease. For example, the FDA is reviewing an application for the first disease-modifying treatment, with a decision expected in early June. Applications for similar drug candidates may be forthcoming.

Credit: 
University of Southern California

New paper explores possible effects of bridge construction on manatees

image: A manatee swims between the spans of the Mobile Bay Bridge in lower Alabama.

Image: 
DISL's Manatee Sighting Network Contributor R. Symes

A new publication from the Dauphin Island Sea Lab's Marine Mammal Research Program (DISL) examines how bridge-building and in-water construction activities may affect manatees and other large aquatic species. The article, which was recently published in The Journal of Wildlife Management, addresses the direct causes of injury and death and the longer-term, cumulative impacts on manatees and their habitats.

Some issues associated with construction activity include possible entanglement in barriers such as booms and siltation screens, loss of important habitats such as seagrass beds, and increased vessel activity near construction sites.

"Boat strikes are a major cause of manatee deaths, and increased presence of boats and barges in construction zones puts manatees at greater risk in these areas," stated lead author and manager of DISL's Manatee Sighting Network, Elizabeth Hieb. "Increased noise in construction areas can also mask the sound of approaching vessels, making it more difficult for manatees to avoid collisions," added Hieb.

DISL's new publication also reviews best practices to reduce the negative effects of construction on aquatic species. DISL researchers hope their work can be used to better understand and reduce the scope of risks associated with the construction of bridges, marinas, boat launches, and other infrastructure.

Manatees may be particularly vulnerable in areas along the northern Gulf of Mexico coast where less is known about their abundance and distribution. Data collected by DISL's Manatee Sighting Network since 2007 suggest that more manatees are seasonally migrating from Florida to Alabama and nearby waters in recent years. Construction projects planned in Mobile Bay, such as the expansion of the I-10 Bayway and the deepening and widening of the Mobile Bay ship channel, will benefit from the data and other information compiled in this timely review.

"This is not just an issue in Alabama or the U.S., but also globally," said Hieb. "More and more people are living in coastal areas where large species like manatees, dolphins, turtles, and fish also live, so manatees are a great model species for understanding how construction may affect many different species."

Credit: 
Dauphin Island Sea Lab

Ozone pollution harms maize crops, study finds

image: Maize experimental field plot fumigated with elevated ozone (green pipes). Maize leaf samples were collected from similar rings throughout the growing season, to understand the response in diverse maize lines.

Image: 
Ainsworth lab

Although stratospheric ozone protects us by filtering out the sun's ultraviolet radiation, tropospheric ozone is a harmful pollutant. A new study has shown that ozone in the lower layers of the atmosphere decreases crop yields in maize and changes the types of chemicals that are found inside the leaves.

Tropospheric ozone forms when nitrogen oxides, released by industry and vehicle tailpipes, are broken down by sunlight, triggering chemical reactions that produce ozone. Researchers at the University of Illinois Urbana-Champaign have been studying the effects of ozone pollution on crops for over 20 years at a unique facility where crops can be grown under real-world farm field conditions but with increased concentrations of ozone pollution.

"Ozone pollution is higher in the northern hemisphere, and peaks in the warmer, summer months. High concentrations of ozone pollution overlap temporally and spatially with crop growth, so it is important to study how the high ozone concentrations affect crop yields," said Jessica Wedow, a former PhD student in the Ainsworth lab.

The researchers looked at three types of maize: two inbred lines B73 and Mo17, and the hybrid cross B73 × Mo17. Surprisingly, they found that chronic ozone stress caused a 25% decrease in yield in the hybrid crops while the inbred plants remained unaffected. The hybrid plants also aged faster than the inbred crops.

To understand why B73 × Mo17 was affected, the researchers measured the chemical composition of the leaves. "The inbred plants did not respond to ozone. On the other hand, the hybrid plants produced more α-tocopherol and phytosterols, which help quench reactive oxygen molecules and stabilize the chloroplast membranes," Wedow said. These results suggest that because the hybrid maize is more sensitive to ozone exposure, it may produce more chemicals to cope with the consequences of chronic ozone stress.

"This study provides clues to improve maize tolerance to ozone pollution," said Lisa Ainsworth (GEGC), the Research Leader of the USDA ARS Global Change and Photosynthesis Research Unit. The group is currently studying whether these responses are consistent across other important grasses, including those used for bioenergy.

Credit: 
Carl R. Woese Institute for Genomic Biology, University of Illinois at Urbana-Champaign

Approaches for disinfecting occupied rooms efficiently and safely with UV light

A new study published in Indoor Air provides design-based solutions on how to best use ultraviolet germicidal irradiation (UVGI) to disinfect occupied rooms without harming individuals. This research was conducted by Dorit Aviv, assistant professor of architecture and director of the Thermal Architecture Lab at the University of Pennsylvania Stuart Weitzman School of Design, Penn visiting scholar Miaomiao Hou, and Jovan Pantelic, an air quality expert at Katholieke Universiteit Leuven.

Ultraviolet germicidal irradiation (UVGI) devices use short-wavelength ultraviolet light to inactivate viruses, bacteria, and other pathogens by destroying their DNA or RNA. UV light is highly effective and has long been used to clean air and surfaces, with increased uptake in settings such as subway cars during the pandemic. However, UV light can also damage skin and eyes and must be used cautiously in occupied spaces.

Striking a balance between efficient disinfection and personal safety is fundamentally a spatial problem, says Aviv, and in this study the researchers used their architectural expertise to determine optimal placement of a UVGI device to sterilize a space safely. "You're trying to disinfect the air but also make sure people are safe, so it means you need to understand how the device is working throughout the space," she says.

The researchers used simulations of an industry-standard UVGI device and looked at how different design variables impacted the distribution of UV light between a room's upper zone, where disinfection of aerosols should take place, and a lower "occupied" zone which people inhabit and where UV light leakage should be avoided as much as possible. "The idea is to create a high intensity irradiation zone in the upper part of the room and make sure no high-intensity radiation reaches the person in the lower zone," says Aviv about the setup.

The researchers found that ceiling and mounting height had a major impact on the efficiency of disinfection in the upper zone. Based on their simulations, the researchers recommend that UVGI device height be increased wherever possible. This not only increases the disinfection rate but also reduces the possibility of UV exposure in the occupied zone.

The variable that was found to have the biggest impact on reducing leakage was material reflectance. Because UV light can be absorbed or reflected by a material, much like visible light is, using paints or wall coverings with lower reflectance coefficients reduced the likelihood of UV forming dangerous "hot spots" in the occupied zone. Importantly, changes in reflectance didn't impact disinfection efficiency in the upper zone.
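A toy calculation shows why reflectance matters: even if a fixture leaks only a small direct fraction into the occupied zone, wall reflections add a component proportional to the reflectance coefficient. All numbers below are hypothetical; real UVGI planning relies on validated radiometric simulations like those in the study.

    # Toy estimate of UV irradiance reaching the occupied zone via one wall
    # bounce. Fixture output, fractions, and reflectances are hypothetical.
    fixture_output = 100.0   # relative UV intensity emitted into the upper zone
    direct_leakage = 0.02    # fraction escaping directly below the fixture cutoff

    for reflectance in (0.05, 0.20, 0.50):   # low- to high-reflectance finishes
        # One-bounce approximation: leakage = direct + reflected component.
        occupied_zone = fixture_output * (direct_leakage + 0.10 * reflectance)
        print(f"reflectance={reflectance:.2f} -> "
              f"occupied-zone irradiance ~ {occupied_zone:.1f}")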

Along with guidance on how people can install these devices in a way that is both effective and safe, Aviv's group is now studying the role of air flows in UVGI disinfection, key insights which could be used to direct how contaminated and clean air moves within a room.

Credit: 
University of Pennsylvania

Unique mini-microscope provides insight into complex brain functions

video: Researchers at the University of Minnesota College of Science and Engineering used a unique mini-microscope device to image complex brain activity of mice that show multiple areas of the brain interacting. This colorized imaging shows the mouse's brain activity during stimulation of the left eye.

Image: 
Rynes and Surinach, et al., Kodandaramaiah Lab, University of Minnesota

Researchers from the University of Minnesota Twin Cities College of Science and Engineering and Medical School have developed a unique head-mounted mini-microscope device that allows them to image complex brain functions of freely moving mice in real time over a period of more than 300 days.

The device, known as the mini-MScope, offers an important new tool for studying how neural activity from multiple regions of the outer part of the brain, called the cortex, contributes to behavior, cognition and perception. The groundbreaking study provides a new tool for fundamental research that could improve understanding of human brain conditions such as concussions, autism, Alzheimer's, and Parkinson's disease, as well as of the brain's role in addiction.

The research was published today in the peer-reviewed journal Nature Methods. The study authors will also present their research at the virtual 2021 OSA Biophotonics Congress: Optics in the Life Sciences on Thursday, April 15.

In the past, scientists have studied how neural activity in specific regions of the brain's cortex contributes to behavior, but it has been difficult to study activity from multiple cortical regions at once. For mice, even the simple task of moving a single whisker in response to a stimulus involves processing information in several cortical areas. Mice are often used to study the brain because they have many of the same brain structures and connectivity as humans.

"This device enables us to image most of the mouse's brain during free and unrestrained behaviors, whereas previous mesoscale imaging was usually done in immobile mice using devices like the MRI or two photon microscopes" said Suhasa Kodandaramaiah, the senior author of the study and University of Minnesota Benjamin Mayhugh Assistant Professor of Mechanical Engineering in the College of Science and Engineering. "This new device allows us to understand how different areas of the brain interact during complex behaviors where multiple areas of the brain are working together simultaneously. This opens up research into understanding how connectivity changes in diseased states, traumatic brain injury or addiction."

The new mini-MScope is a fluorescence microscope that weighs about 3 grams and can image an area of about 10 millimeters by 12 millimeters, allowing holistic imaging of much of the mouse brain surface. The device is used for calcium imaging, a technique commonly used to monitor the electrical activity of the brain. Mounted on the mouse's head, it captures images at near-cellular resolution, making it possible to study connections between regions across the cortex.

The researchers created the miniaturized microscope using LEDs for illumination, miniature lenses for focusing and a complementary metal-oxide-semiconductor (CMOS) image sensor for capturing images. It includes interlocking magnets that let it be easily affixed to structurally realistic 3D-printed transparent polymer skulls, known as See-Shells, that the researchers developed in previous studies. When implanted into mice, the See-Shells create a window through which long-term microscopy can be performed. The new microscope can capture the brain activity of mice for almost a year.

The researchers demonstrated the mini-MScope by using it to image mouse brain activity in response to a visual stimulus to the eye, a vibrational stimulus to the hindlimb and a somatosensory stimulus presented to the whisker. They also created functional connectivity maps of the brain as a mouse wearing the head-mounted microscope interacted with another mouse. They saw that intracortical connectivity increased when the mouse engaged in social behaviors with the other mouse.
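Functional connectivity maps of this kind are typically built by correlating each pixel's activity time course with a seed region. A self-contained numpy sketch on a synthetic calcium-imaging movie (frame size, seed location, and data are arbitrary stand-ins):

    import numpy as np

    # Synthetic widefield calcium-imaging movie: frames x height x width.
    rng = np.random.default_rng(1)
    movie = rng.normal(size=(1000, 64, 64))
    seed_trace = movie[:, 32, 32]        # activity time course at a seed pixel

    # Correlate every pixel's time course with the seed to map connectivity.
    flat = movie.reshape(movie.shape[0], -1)
    flat_c = flat - flat.mean(axis=0)
    seed_c = seed_trace - seed_trace.mean()
    corr = (flat_c * seed_c[:, None]).sum(axis=0) / (
        np.linalg.norm(flat_c, axis=0) * np.linalg.norm(seed_c))
    connectivity_map = corr.reshape(64, 64)   # values near 1 = strongly coupled

    print(connectivity_map[32, 32])   # the seed correlates with itself (1.0)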

"Our team is creating a suite of tools that will enable us to access and interface with large parts of the cortex at high spatial and temporal resolution," said Mathew Rynes, a University of Minnesota biomedical engineering Ph.D. candidate who co-led the study. "This study shows that the mini-MScope can be used to study functional connectivity in freely behaving mice, making it an important contribution to this toolkit," Rynes added.

The team had to overcome several engineering challenges to create the device.

"To image the brain in freely behaving mice, the device had to be light-weight enough to be supported and carried by the mice," said Daniel Surinach, a recent University of Minnesota mechanical engineering master's degree graduate who also co-led the study. "Within this small range, we also needed to optimize optics, electrical and imaging hardware resolution, focusing, illumination designs to provide light to the brain for imaging, and other elements to get clear images of the mouse brain during natural and vigorous behaviors. We ended up designing and testing more than 175 unique prototypes to get the finalized device working!"

The researchers are now using the mini-MScope to investigate how cortical connectivity changes in a variety of behavioral paradigms, such as exploring a new space. They are also working with collaborators who are using the mini-MScope to study how cortical activity is altered when mice learn difficult motor tasks.

"This device allows us to study the brain in ways we could have never done before," said Kodandaramaiah, who also holds appointments in the University of Minnesota's Department of Biomedical Engineering and the Medical School. "For example, we can image the mouse's brain activity as it ebbs and flows during natural movement within its space, as it goes to sleep, and when it wakes up. This provides a lot of valuable information that will help us better understand the brain to help people with disease or injury to improve their lives."

The researchers said the next steps are to improve the resolution of the imaging and study the brain at even finer detail down to examining individual neurons.

Credit: 
University of Minnesota

NIST demo adds key capability to atom-based radio communications

image: NIST researchers and collaborators determined the direction of an incoming radio signal based on laser measurements at two locations in this sensor filled with a gas of cesium atoms.

Image: 
NIST

Researchers at the National Institute of Standards and Technology (NIST) and collaborators have demonstrated an atom-based sensor that can determine the direction of an incoming radio signal, another key part for a potential atomic communications system that could be smaller and work better in noisy environments than conventional technology.

NIST researchers previously demonstrated that the same atom-based sensors can receive commonly used communications signals. The capability to measure a signal's "angle of arrival" helps ensure the accuracy of radar and wireless communications, which need to sort out real messages and images from random or deliberate interference.

"This new work, in conjunction with our previous work on atom-based sensors and receivers, gets us one step closer to a true atom-based communication system to benefit 5G and beyond," project leader Chris Holloway said.

In NIST's experimental setup, two different-colored lasers prepare gaseous cesium atoms in a tiny glass flask, or cell, in high-energy ("Rydberg") states, which have novel properties such as extreme sensitivity to electromagnetic fields. The frequency of an electric field signal affects the colors of light absorbed by the atoms.

An atom-based "mixer" takes input signals and converts them into different frequencies. One signal acts as a reference while a second signal is converted or "detuned" to a lower frequency. Lasers probe the atoms to detect and measure differences in frequency and phase between the two signals. Phase refers to the position of electromagnetic waves relative to one another in time.

The mixer measures the phase of the detuned signal at two different locations inside the atomic vapor cell. Based on the phase differences at these two locations, researchers can calculate the signal's direction of arrival.

To demonstrate this approach, NIST measured phase differences of a 19.18 gigahertz experimental signal at two locations inside the vapor cell for various angles of arrival. Researchers compared these measurements to both a simulation and a theoretical model to validate the new method. The selected transmission frequency could be used in future wireless communications systems, Holloway said.
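The geometry behind the measurement is standard two-point interferometry: a plane wave arriving at angle theta produces a phase difference of 2*pi*d*sin(theta)/lambda between two points separated by distance d. Inverting that relation gives the angle of arrival. In the sketch below, the frequency is the experiment's 19.18 GHz, while the spacing and the measured phase difference are hypothetical:

    import numpy as np

    # Idealized two-point interferometry: recover the angle of arrival from
    # the phase difference measured at two sensing locations.
    c = 2.998e8                      # speed of light, m/s
    f = 19.18e9                      # transmission frequency from the experiment, Hz
    wavelength = c / f               # ~15.6 mm
    d = 0.008                        # assumed 8 mm spacing between measurement points

    measured_phase_diff = np.pi / 4  # example measured phase difference, radians
    # For a plane wave: delta_phi = 2*pi*d*sin(theta)/lambda.
    theta = np.arcsin(measured_phase_diff * wavelength / (2 * np.pi * d))
    print(f"Angle of arrival: {np.degrees(theta):.1f} degrees")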

The work is part of NIST's research on advanced communications, including 5G, the fifth-generation standard for broadband cellular networks, many of which will be much faster and carry far more data than today's technologies. The sensor research is also part of the NIST on a Chip program, which aims to bring world-class measurement-science technology from the lab to users anywhere and anytime. Co-authors are from the University of Colorado Boulder and ANSYS Inc. in Boulder.

Atom-based sensors in general have many possible advantages, notably measurements that are both highly accurate and universal, that is, the same everywhere because the atoms are identical. Measurement standards based on atoms include those for length and time.

With further development, atom-based radio receivers may offer many benefits over conventional technologies. For example, there is no need for traditional electronics that convert signals to different frequencies for delivery because the atoms do the job automatically. The antennas and receivers can be physically smaller, with micrometer-scale dimensions. In addition, atom-based systems may be less susceptible to some types of interference and noise.

Credit: 
National Institute of Standards and Technology (NIST)

Amazing integration of technology and art: a 3D LotusMenu in your palm

image: Interactive gestures of LotusMenu: (a) rolling gesture; (b) pitching gesture; (c) yawing gesture.

Image: 
Science China Press

A recent study proposed a three-dimensional LotusMenu that can "bloom in the palm". With this menu, even if you are not Nezha, the lotus-reborn hero of Chinese mythology, you can still control your own lotus.

The research paper, titled "LotusMenu: A 3D Menu using Wrist and Elbow Rotation Inspired by Chinese Traditional Symbol", was recently published in SCIENCE CHINA Information Sciences by Associate Professor Lu Fei's human-computer interaction research team at Beijing University of Posts and Telecommunications. Based on the metaphor of the traditional lotus pattern, the researchers proposed a 3D LotusMenu that uses the rotational motion of the wrist and elbow to control menu selection.

In the design of this interaction technique, the researchers mapped the shape of the lotus onto 3D rotation gestures: the selection of circular petal groups to the rolling gesture of the elbow (Figure 1(a)), the selection of layered petals to the pitching gesture of the wrist (Figure 1(b)), and the selection of circular lotus seeds to the yawing gesture of the wrist (Figure 1(c)). During the interaction there is almost no shoulder movement, so the gestures can be performed within a small range of motion. In addition, the rotation of the wrist and arm is merged as much as possible, further reducing the fatigue caused by the gestures.
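In code, that mapping amounts to binning three rotation angles into three menu indices. The Python sketch below is a hypothetical rendering of the idea; the bin counts and angle ranges are illustrative, not the paper's parameters.

    # Illustrative mapping from forearm rotation angles to LotusMenu selections
    # (bin counts and angle ranges are hypothetical).
    def select(roll_deg, pitch_deg, yaw_deg,
               n_groups=8, n_layers=3, n_seeds=6):
        """Map elbow roll to a petal group, wrist pitch to a petal layer,
        and wrist yaw to a lotus seed."""
        group = int((roll_deg % 360) / 360 * n_groups)   # circular petal groups
        layer = min(int(max(pitch_deg, 0) / 90 * n_layers),
                    n_layers - 1)                        # layered petals
        seed = int((yaw_deg % 360) / 360 * n_seeds)      # circular lotus seeds
        return group, layer, seed

    print(select(roll_deg=100, pitch_deg=30, yaw_deg=200))  # -> (2, 1, 3)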

In the experiment, the researchers compared user performance with LotusMenu and with traditional linear menus. The results show that LotusMenu significantly reduces the completion time of selection tasks while also producing less perceived fatigue. These results can expand the application of 3D rotational gestures in human-computer interaction and offer a valuable reference for applying 3D rotational interactive components in future large-screen interactive systems and virtual reality systems.

Credit: 
Science China Press

Reclamation releases technical reports supporting the 2021 SECURE Water Act Report

image: A pond near the Rocky Mountains of Montana.

Image: 
Bureau of Reclamation

The Bureau of Reclamation today released final technical reports supporting the Water Reliability in the West - 2021 SECURE Water Act Report. Reclamation's 2021 West-Wide Climate and Hydrology Assessment and seven individual basin reports provide detailed information on climate change impacts and adaptation strategies to increase water supply reliability in the West. A new 2021 SECURE Report Web Portal is also available to provide a user-friendly, web-based format for delivery of information in the reports.

"Western water supply and delivery systems are affected by changing hydrologic conditions and competing demands," Deputy Commissioner Camille Calimlim Touton said. "These reports highlight Reclamation's effort to use the best-available science to meet its mission while also collaborating with its water and power customers, states and local agencies, and tribes to address critical western water management issues."

The 2021 West-Wide Assessment provides estimates of changes in temperature, precipitation, snowpack, and streamflow across the West using consistent methodology, similar to previous SECURE Water Act Reports. For this report, additional drought analyses based on paleohydrology (using tree rings) were performed. These results will enable water managers to compare the frequency and severity of droughts that occurred several hundred years ago with projections of future droughts, and to develop water management strategies in time to take action.

The West-Wide Assessment finds that temperatures are expected to increase across the West while precipitation changes are variable. With warmer temperatures, more precipitation will fall as rain and snow will melt sooner, reducing future snowpack and potentially shifting the timing of streamflow. These key findings on future climate and hydrology are consistent with the conclusions of the 2016 SECURE Water Act Report.

The seven basin reports analyze climate change impacts to water resources within each basin. Each report also identifies Reclamation's collaborative actions to increase water and power delivery reliability since the last SECURE Water Act Report in 2016, including science and research, planning, infrastructure sustainability, efficient hydropower production and on-the-ground activities to meet irrigation needs and water needed for municipalities, power, Tribes and the environment. These basin reports also describe some of the innovative approaches underway locally to address vulnerabilities.

Reclamation is collaborating with its customers, stakeholders, and other partners to develop appropriate strategies to mitigate the increased risks of drought, changes to precipitation and runoff, and increased temperatures. These strategies include:

Supporting reliable water deliveries through construction activities and water management improvements, as well as diversifying supplies through water reuse and ground and surface water conjunctive use.

Improving hydropower generation capability, flexibility, and reliability through new advanced decision support tools to maximize the amount of power produced with available water supplies and new technologies to keep hydropower plants operating.

Maintaining healthy ecosystems and protecting federally listed fish, wildlife, plants, and designated critical habitat affected by Reclamation facilities through a range of programs and activities.

Addressing drought risks by proactively building resilience as the severity, duration, and frequency of drought increases.

Credit: 
Bureau of Reclamation

The Deep-time Digital Earth program: data-driven discovery in geosciences

image: DDE aims to harmonize deep-time Earth data based on a knowledge system to investigate the evolution of Earth, including life, Earth materials, geography, and climate. Integrated methods include artificial intelligence (AI), high performance computing (HPC), cloud computing, semantic web, natural language processing, and other methods.

Image: 
Science China Press

Humans have long explored three big scientific questions: evolution of the universe, evolution of Earth, and evolution of life. Geoscientists have embraced the mission of elucidating the evolution of Earth and life, which are preserved in the information-rich but incomplete geological record that spans more than 4.5 billion years of Earth history. Delving into Earth's deep-time history helps geoscientists decipher mechanisms and rates of Earth's evolution, unravel the rates and mechanisms of climate change, locate natural resources, and envision the future of Earth.

Two common approaches, deductive reasoning and inductive reasoning, have been widely employed for studying Earth's history. In contrast to deduction and induction, abduction is derived from accumulation and analysis of large amounts of reliable data, independently of a premise or generalization. Abduction thus has the potential to generate transformative discoveries in science. With the accumulation of enormous volumes of deep-time Earth data, we are poised to transform research in deep-time Earth Science through data-driven abductive discovery.

However, three issues must be resolved to facilitate abductive discovery utilizing deep-time databases. First, many relevant geodata resources are not in compliance with FAIR (findable, accessible, interoperable, and reusable) principles for scientific data management and stewardship. Second, concepts and terminologies used in databases are not well defined, thus the same terms may have different meanings across databases. Without standardized terminology and definitions of concepts, it is difficult to achieve data interoperability and reusability. Third, databases are highly heterogeneous in terms of geographic regions, spatial and temporal resolution, coverages of geological themes, limitations of data availability, formats, languages, and metadata. Due to the complex evolution of Earth and interactions among multiple spheres (e.g., lithosphere, hydrosphere, biosphere, and atmosphere) in Earth systems, it is difficult to see the whole picture of Earth's evolution from separated thematic views, each with limited scope.

Big data and artificial intelligence are creating opportunities for resolving these issues. To explore Earth's evolution efficiently and effectively through deep-time big data, we need FAIR, synthetic, and comprehensive databases across all fields of deep-time Earth science, coupled with tailored computational methods. This goal motivates the Deep-time Digital Earth program (DDE), which is the first "big science program" initiated by the International Union of Geological Sciences (IUGS) and developed in cooperation with national geological surveys, professional associations, academic institutions, and scientists around the world. The main objective of DDE is to facilitate deep-time, data-driven discoveries through international and interdisciplinary collaborations. DDE aims to provide an open platform for linking existing deep-time Earth data and integrating geological data that users can interrogate by specifying time, space, and subject (i.e., a "Geological Google"), and for processing data for knowledge discovery using a knowledge engine (Deep-time Earth Engine) that provides computing power, models, methods, and algorithms (Figure 1).
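The article does not define the platform's API, but a "Geological Google" query of the kind described - filtering harmonized records by time, space, and subject - could look something like the following hypothetical Python sketch:

    from dataclasses import dataclass

    # Hypothetical shape of a deep-time data query; DDE's real interface is
    # not described in this article. This only illustrates filtering records
    # by time, space, and subject.
    @dataclass
    class DeepTimeQuery:
        age_ma: tuple    # geologic time window (youngest, oldest), in Ma
        bbox: tuple      # (min_lon, min_lat, max_lon, max_lat)
        subject: str     # e.g., "paleobiology", "lithostratigraphy"

    def run(query, records):
        return [r for r in records
                if query.age_ma[0] <= r["age_ma"] <= query.age_ma[1]
                and query.bbox[0] <= r["lon"] <= query.bbox[2]
                and query.bbox[1] <= r["lat"] <= query.bbox[3]
                and r["subject"] == query.subject]

    records = [{"age_ma": 150, "lon": 110.5, "lat": 30.2,
                "subject": "paleobiology"}]
    q = DeepTimeQuery(age_ma=(100, 200), bbox=(100, 20, 120, 40),
                      subject="paleobiology")
    print(run(q, records))   # -> the matching Jurassic record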

To achieve its mission and vision, the DDE program has three main components: program management committees, centers of excellence, and working, platform, and task groups. DDE will build on existing deep-time Earth knowledge systems and develop an open platform (Figure 2). A deep-time Earth knowledge system consists of the basic definitions of, and relationships among, concepts in deep-time Earth science, which are necessary for harmonizing deep-time Earth data and for developing a knowledge engine to support abductive exploration of Earth's evolution. DDE's research plan accordingly proceeds in three steps: first, build on existing deep-time Earth knowledge systems; second, build an interoperable deep-time Earth data infrastructure; and third, develop a deep-time Earth open platform.

The execution of the DDE program consists of four phases. In Phase 1, DDE establishes an organizational structure with international standards of policy and management. In Phase 2, DDE forms the initial teams and builds on existing deep-time Earth knowledge systems and data standards by collaborating with existing ontology researchers in the geosciences, while working to link and harmonize deep-time Earth databases. In Phase 3, DDE develops tailored algorithms and techniques for environments of cloud computing and supercomputing. In Phase 4, Earth scientists and data scientists collaborate seamlessly on compelling and integrative scientific problems.

Given the integrative and international ambitions of the DDE program, several challenges are anticipated. However, by creating an open-access data resource that for the first time integrates all aspects of Earth's recorded past, DDE holds the promise of understanding our planet's past, present, and future in new and vivid detail.

Credit: 
Science China Press

New batteries give jolt to renewables, energy storage

ITHACA, N.Y. - The cost of harvesting solar energy has dropped so much in recent years that it's giving traditional energy sources a run for their money. However, the challenge of energy storage - banking an intermittent and seasonally variable supply of solar energy - has kept the technology from being economically competitive.

Cornell University researchers led by Lynden Archer, Dean and Professor of Engineering, have been exploring the use of low-cost materials to create rechargeable batteries that will make energy storage more affordable. Now, they have shown that a new technique incorporating aluminum results in rechargeable batteries that offer up to 10,000 error-free cycles.

This new kind of battery could provide a safer and more environmentally friendly alternative to lithium-ion batteries, which currently dominate the market but are slow to charge and have a knack for catching fire.

The team's paper, "Regulating Electrodeposition Morphology in High-Capacity Aluminium and Zinc Battery Anodes Using Interfacial Metal-Substrate Bonding," published in Nature Energy.

Among the advantages of aluminum are that it is abundant in the Earth's crust and that it is trivalent and light, giving it a higher energy-storage capacity than many other metals. However, aluminum can be tricky to integrate into a battery's electrodes. It reacts chemically with the glass fiber separator, which physically divides the anode and the cathode, causing the battery to short circuit and fail.

The researchers' solution was to design a substrate of interwoven carbon fibers that forms an even stronger chemical bond with aluminum. When the battery is charged, the aluminum is deposited into the carbon structure via covalent bonding, i.e., the sharing of electron pairs between aluminum and carbon atoms.

While electrodes in conventional rechargeable batteries are only two dimensional, this technique uses a three-dimensional - or nonplanar - architecture and creates a deeper, more consistent layering of aluminum that can be finely controlled.

The aluminum-anode batteries can be reversibly charged and discharged one or more orders of magnitude more times than other aluminum rechargeable batteries under practical conditions.

Credit: 
Cornell University

UMass Amherst team discovers use of elasticity to position microplates on curved 2D fluids

image: Arrangements of black plate-like domains that depend on membrane curvature.

Image: 
Weiyue Xin of Santore lab.

AMHERST, Mass. - A team of polymer science and engineering researchers at the University of Massachusetts Amherst has demonstrated for the first time that the positions of tiny, flat, solid objects integrated in nanometrically thin membranes - resembling those of biological cells - can be controlled by mechanically varying the elastic forces in the membrane itself. This research milestone is a significant step toward the goal of creating ultrathin flexible materials that self-organize and respond immediately to mechanical force.

The team has discovered that rigid solid plates in biomimetic fluid membranes experience interactions that are qualitatively different from those of biological components in cell membranes. In cell membranes, fluid domains or adherent viruses experience either attractions or repulsions, but not both, says Weiyue Xin, lead author of the paper detailing the research, which recently appeared in Science Advances. But in order to precisely position solid objects in a membrane, both attractive and repulsive forces must be available, adds Maria Santore, a professor of polymer science and engineering at UMass.

In the Santore Lab at UMass, Xin used giant unilamellar vesicles, or GUVs, which are cell-like membrane sacks, to probe the interactions between solid objects in a thin, sheet-like material. Like biological cells, GUVs have fluid membranes and form a nearly spherical shape. Xin modified the GUVs so that the membranes included tiny, solid, stiff plate-like masses.

The team, a collaboration between the Santore lab and the Grason theory group in UMass's polymer science and engineering department, is the first to show that by modifying the curvature and tension of the membrane, the plate-like masses could be made to attract and repel each other. This allowed the researchers to control the plates' positions within the membrane.

The membrane tension can be adjusted mechanically, using a micropipette to inflate or deflate the GUV, or physically, by osmosis. In either case, when the membrane is tensed, the flat plates attract each other progressively, forming predictable, repeatable arrangements. By contrast, decreasing the tension causes the plates to migrate apart. In both cases the movement and positioning of the plates are predictable and controllable.
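For reference, micropipette experiments of this kind conventionally infer membrane tension from the aspiration pressure via the Laplace relation for a vesicle partially aspirated into a pipette (a standard result, not a formula specific to this paper):

    \tau = \frac{\Delta P \, R_p}{2\,(1 - R_p / R_v)}

where Delta P is the aspiration pressure, R_p the pipette radius, and R_v the radius of the vesicle.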

This ability to direct the positioning of the plates in a membrane is a giant step toward engineering a material that is responsive to stimuli and can self-organize in controllable and reconfigurable ways. "Our research has applications in nanotechnology and other spheres where it's desirable to have sophisticated, flexible devices that can respond to their environment," says Xin. One real-world application of the team's research is flexible, ultrathin, reconfigurable wearable electronics.

Credit: 
University of Massachusetts Amherst