Tech

Self-powered alarm fights forest fires, monitors environment

image: The new device generates electrical power by harvesting energy from the sporadic movement of the tree branches from which it hangs.

Image: 
Michigan State University

Smokey the Bear says that only you can prevent wildfires, but what if Smokey had a high-tech backup? In a new study, a team of Michigan State University scientists designed and fabricated a remote forest fire detection and alarm system powered by nothing but the movement of the trees in the wind.

As detailed in the journal Advanced Functional Materials, the device, known as an MC-TENG -- short for multilayered cylindrical triboelectric nanogenerator -- generates electrical power by harvesting energy from the sporadic movement of the tree branches from which it hangs.

"As far as we know, this is the first demonstration of such a novel MC-TENG as a forest fire detection system," said lead author Changyong Cao, who directs the Laboratory of Soft Machines and Electronics in MSU's School of Packaging and is an assistant professor in the Packaging School and Departments of Mechanical Engineering, and Electrical and Computer Engineering.

"The self-powered sensing system could continuously monitor the fire and environmental conditions without requiring maintenance after deployment," he said.

For Cao and his team, the tragic forest fires in recent years across the American West, Brazil and Australia were driving forces behind this new technology. Cao believes that early and quick response to forest fires will make the task of extinguishing them easier, significantly reducing the damage and loss of property and life.

Traditional forest fire detection methods, such as satellite monitoring, ground patrols and watch towers, carry high labor and financial costs in return for low efficiency.

Current remote sensor technologies are becoming more common, but primarily rely on battery technology for power.

"Although solar cells have been widely used for portable electronics or self-powered systems, it is challenging to install these in a forest because of the shading or covering of lush foliage," said Yaokun Pang, co-author and postdoc associate at Cao's lab.

TENG technology converts external mechanical energy -- such as the movement of a tree branch -- into electricity by way of the triboelectric effect, a phenomenon where certain materials become electrically charged after they separate from a second material with which they were previously in contact.

The simplest version of the TENG device consists of two cylindrical sleeves of different materials that fit within one another. The core sleeve is anchored from above while the bottom sleeve is free to slide up and down and move side to side, constrained only by an elastic connective band or spring. As the two sleeves move out of sync, the intermittent loss of contact generates electricity. The MC-TENG is equipped with several hierarchical triboelectric layers, increasing the electrical output.

The MC-TENG stores its sporadically generated electrical current in a carbon-nanotube-based micro supercapacitor. The researchers selected this technology for its rapid charge and discharge times, which allow the device to charge adequately from even short-lived gusts of wind.

"At a very low vibration frequency, the MC-TENG can efficiently generate electricity to charge the attached supercapacitor in less than three minutes," Cao said.

The researchers outfitted the initial prototype with both carbon monoxide (CO) and temperature sensors. The addition of a temperature sensor was intended to reduce the likelihood of a false-positive carbon monoxide reading.

Credit: 
Michigan State University

A shorter IQ test for children with special needs

image: Researchers at the University of Missouri's Thompson Center for Autism and Neurodevelopmental Disorders identified measures in an intelligence quotient (IQ) test for kids with special needs that appeared to be repetitive and successfully shortened the test while still maintaining its accuracy in determining a child's IQ.

Image: 
MU Thompson Center for Autism and Neurodevelopmental Disorders

COLUMBIA, Mo. - For decades, neuropsychologists have used the Wechsler Intelligence Scale for Children test as the gold-standard intelligence quotient (IQ) test to determine the intellectual abilities of children with special needs. However, this comprehensive test can take up to 2 hours to complete, and many children with special needs have a difficult time participating in such long tests.

To solve this problem, researchers at the University of Missouri's Thompson Center for Autism and Neurodevelopmental Disorders identified measures in the test that appeared to be repetitive and succeeded in shortening the test by up to 20 minutes while still maintaining its accuracy in determining a child's IQ.
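The release does not spell out the team's psychometric method, but the underlying idea -- flagging subtests that are statistically redundant with one another -- can be sketched on synthetic data. Everything below (the score matrix, the shared-ability factor, the 0.8 correlation threshold) is an assumption for illustration, not part of the Wechsler analysis itself.

```python
# Sketch: flag pairs of subtests whose scores are nearly interchangeable.
import numpy as np

rng = np.random.default_rng(0)
n_children, n_subtests = 200, 10
ability = rng.normal(size=(n_children, 1))             # shared ability factor (assumed)
scores = ability + 0.4 * rng.normal(size=(n_children, n_subtests))

corr = np.corrcoef(scores, rowvar=False)               # subtest-by-subtest correlations
redundant = [(i, j) for i in range(n_subtests)
             for j in range(i + 1, n_subtests)
             if corr[i, j] > 0.8]                      # candidates for trimming
print(f"{len(redundant)} highly correlated subtest pairs")
```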

"As neuropsychologists, we spend a considerable amount of time - usually a full day or a full afternoon at least - with patients to really get to know them, and that can be a lot for a child with a neurological disorder like autism or attention deficit hyperactivity disorder (ADHD)," said John Lace, a doctoral student who is completing an internship in clinical neuropsychology in the MU School of Health Professions. "If we can efficiently maximize the information we get from our patients during this test without overburdening them, we can save time and money for both clinicians and patients, which reduces the overall health care burden on families with neurodevelopmental disabilities."

Neuropsychologists use the Wechsler Intelligence Scale for Children test to not only assist in diagnosing individuals with neurodevelopmental disorders, but also to help inform decisions about treatment and educational plans.

"Our overall goal is to help people understand any cognitive or learning differences they may have, which can lead to treatment options such as behavioral therapy or interventions at school," said Lace. "As neuropsychologists, our profession is at the crux of addressing these challenges both academically and practically to help clinicians streamline what they do and positively impact patient care."

Credit: 
University of Missouri-Columbia

Undergrad-led study suggests light environment modifications could maximize productivity

image: Maize (left) and Miscanthus (right)

Image: 
WEST project/University of Illinois

CHAMPAIGN, Ill. -- The crops we grow in the field often form dense canopies with many overlapping leaves, such that young "sun leaves" at the top of the canopy are exposed to full sunlight with older "shade leaves" at the bottom. In order to maximize photosynthesis, resource-use efficiency, and yield, sun leaves typically maximize photosynthetic efficiency at high light, while shade leaves maximize efficiency at low light.

"However, in some of our most important crops, a maladaptation causes a loss of photosynthetic efficiency in leaves at the bottom of the canopy, which limits the plants' ability to photosynthesize and produce yields," said Charles Pignon, a former postdoctoral researcher at the University of Illinois. "In order to address this problem, it's important to know whether this is caused by leaves being older or exposed to a different light environment at the bottom of the canopy."

This question was answered in a recent study published in Frontiers in Plant Science, where researchers from the University of Illinois and the University of Oxford worked with maize and the bioenergy crop Miscanthus to find that the decline in the efficiency of leaves at the bottom of the canopy was not due to their age but to their altered light environment.

This work was conducted through the Illinois Summer Fellows (ISF) program. Launched in 2018, ISF allows undergraduate students to conduct plant science research alongside highly skilled scientists at Illinois. 2018 Fellows Robert Collison and Emma Raven worked with Pignon and Stephen Long, the Stanley O. Ikenberry Chair Professor of Plant Biology and Crop Sciences at Illinois, to confirm and better understand results from previous studies for Water Efficient Sorghum Technologies (WEST), a research project that aimed to develop bioenergy crops that produce more biomass with less water.

Photosynthesis is the natural process that plants use to convert sunlight into energy. Plants generally use one of two main types of photosynthesis -- C3 and C4. The difference between these types is that C4 plants have a mechanism that concentrates carbon dioxide inside their leaves, allowing them to photosynthesize more efficiently. Most plants, trees, and crops, however, operate using the less efficient C3 photosynthesis.

Both sun and shade leaves contribute to photosynthetic carbon assimilation, producing the sugars that feed the plant and fuel yield. Therefore, lower canopy photosynthesis is an important process that affects the yield of the whole plant, with an estimated 50 percent of total canopy carbon gain contributed by shade leaves.

Previous studies of C3 plants have shown that shaded leaves are typically more efficient than sun leaves at low light intensities, meaning shaded leaves adapt to their low light environment. However, a previous study by Pignon and Long showed that this is not the case for all plants. The canopies of maize and Miscanthus, C4 crops that usually photosynthesize more efficiently than C3 crops, had shade leaves that were less photosynthetically efficient, suggesting a maladaptation in these important crops.

"Shade leaves receive very little light, so they usually become very efficient with low light use," said Pignon, now a plant physiologist at Benson Hill in St. Louis. "Essentially, they make the most of what little light they do receive. However, in the C4 crops we studied, shade leaves in these crops not only receive very little light, but they also use it less efficiently. It's a very costly maladaptation in crops that are otherwise highly productive -- hence our calling it an Achilles' heel."

With six to eight layers of leaves in our modern maize crop stands, most leaves are shaded and can account for half of the plant's growth during the critical phase of grain filling.

"In the previous study, researchers estimated that this maladaptation was causing a loss of 10 percent in potential canopy photosynthesis gain," said Raven, who recently graduated from Oxford with plans to pursue her doctorate. "There are essentially two potential reasons: the age of the leaves or the light conditions, so we investigated which factor was causing this inefficiency."

Collison and Raven, co-first authors of this newly published paper, collected data and analyzed the maximum quantum yield of photosynthesis -- the maximum efficiency with which light is used to assimilate carbon -- in leaves of the same chronological age but different light environments to discover the crops' Achilles' heel. This was achieved by comparing leaves of the same age in the center of plots of these species versus those on the sunlit southern edge of these plots. From this, they showed that the poor photosynthetic efficiency of these crops' lower leaves is caused by altered light conditions and not age.
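For readers unfamiliar with the metric, maximum quantum yield is conventionally estimated as the initial slope of a light-response curve, over the low-light range where carbon assimilation still rises linearly with absorbed light. The data points in this sketch are invented to show the calculation, not measurements from the study.

```python
# Estimate maximum quantum yield from the light-limited part of an
# A-Q curve. Values are illustrative, not from the Frontiers paper.
import numpy as np

Q = np.array([0, 25, 50, 75, 100])        # absorbed light, umol photons m^-2 s^-1
A = np.array([-1.0, 0.5, 2.0, 3.4, 4.9])  # net CO2 assimilation, umol m^-2 s^-1

slope, intercept = np.polyfit(Q, A, 1)    # linear fit over low-light points only
print(f"maximum quantum yield ~ {slope:.3f} mol CO2 per mol photons")
```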

"Maize and Miscanthus are both closely related to sugarcane and sorghum, so other C4 crops could potentially have this loss in photosynthetic efficiency caused by the light environment," explained Collison, who has also graduated from Oxford and may pursue graduate studies. "By finding the cause of this loss in efficiency, we can begin to look at potential solutions to this problem, modifying plants to improve their productivity."

Illinois Summer Fellows Program

The ISF program has cultivated an environment where the Fellows have the independence needed to develop as scientists while knowing that they have the support and encouragement of their supervisors. Fellows are paired with a scientist supervisor to assist them with a specific element of a project aimed at increasing crops' photosynthetic and/or water-use efficiency. The program aims to provide a rewarding experience that helps students develop as scientists and, ultimately, encourages them to consider pursuing careers in plant biology.

"The opportunity to travel to another country and conduct meaningful research in a real-world field environment alongside mentors in their field is invaluable," said Long, who launched and directs the ISF program at the Carl R. Woese Institute for Genomic Biology. "At the end of their time at Illinois, our Fellows have expressed that this experience allowed them to contribute to the world and take back valuable skills they can apply in their future endeavors as innovators in the field of agriculture and beyond."

Collison reflects on his time at Illinois as an experience that not many students, especially so early in their career, get to take part in. "The chance to do any research so early in your career as a scientist is really exciting," he said. "Everyone we met -- including our supervisors and other scientists -- was always willing to help us."

Raven also shared her insights on the value of doing research at Illinois and what differences there may be in other academic or work settings. "When you are attending lectures or practical classes, you never quite get that feeling of true ownership of your own projects because you just follow whatever your professor tells you to do," Raven said. "But having ownership of this paper at Illinois is gratifying. It is also exciting to be a part of something that is bigger than us and will ultimately help farmers in other countries to grow food more sustainably."

Credit: 
Carl R. Woese Institute for Genomic Biology, University of Illinois at Urbana-Champaign

Study quantifies socioeconomic benefits of satellites for harmful algal bloom detection

WASHINGTON, DC--Heading to the lake this summer? While harmful algal blooms can cause health problems for lake visitors, satellite data can provide early detection of harmful algae, resulting in socioeconomic benefits worth hundreds of thousands of dollars from one harmful algal bloom event, a new study finds. A Resources for the Future (RFF) and NASA VALUABLES Consortium study published in GeoHealth examines the benefits of using satellite data to detect harmful algal blooms and manage recreational advisories in Utah Lake.

In 2017, toxic cyanobacteria permeated portions of Utah Lake, threatening the health of swimmers and other visitors. It wasn't the first time; harmful algal blooms (HABs) like this have occurred intermittently at the lake.

Local officials, informed by satellite detection, were able to act quickly to respond to the toxic blooms, resulting in socioeconomic benefits estimated at $370,000 in the form of reduced healthcare costs, the new VALUABLES study reports.

"Using satellite data to detect this harmful algal bloom potentially saved hundreds of thousands of dollars in social costs by preventing hundreds of cases of cyanobacteria-induced illness," says study coauthor Molly Robertson, a research assistant at RFF. "Incorporating satellite data into the HAB detection strategy for other large US lakes could yield similar benefits."

The study uses the VALUABLES Consortium's Impact Assessment Framework, a method that allows researchers to compare the outcomes with and without satellite data to see how new information can improve outcomes for people and the environment. An RFF explainer video lays out the framework and explains how it pertains to HAB detection.
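At its core, the framework is a counterfactual comparison: outcomes with the satellite information minus outcomes without it. The Python sketch below is a minimal illustration; the case counts and per-case cost are hypothetical placeholders chosen only so the arithmetic reproduces the study's roughly $370,000 headline figure, not numbers reported in GeoHealth.

```python
# With-vs-without-information comparison behind an impact assessment.
def socioeconomic_benefit(cases_without, cases_with, cost_per_case):
    """Benefit of earlier detection = avoided illness costs."""
    return (cases_without - cases_with) * cost_per_case

# Hypothetical: satellite-informed advisories avoid 500 illnesses
# at an assumed $740 in social costs per case.
print(f"${socioeconomic_benefit(600, 100, 740):,.0f}")  # -> $370,000
```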

"The impact assessment framework laid out in this study can measure the value of any information that can be used to make decisions," says study coauthor and VALUABLES Consortium Director Yusuke Kuwayama. "Scientists can use the framework to demonstrate the value of their research, and public officials can use it to make budgetary decisions, such as how much to invest in different information-gathering activities."

Credit: 
Resources for the Future (RFF)

Adirondack boreal peatlands near southern range limit likely threatened by warmer climate

A study published in the journal Wetlands documents an invasion happening in the Adirondacks: black spruce, tamarack, and other boreal species are being overcome by trees normally found in warmer, more temperate forests. Ultimately, researchers from the SUNY College of Environmental Science and Forestry (ESF) predict that these invaders could overtake a variety of northern species, eliminating trees that have long been characteristic of wetlands like Shingle Shanty Preserve in the Adirondacks.

"Shingle Shanty Preserve was an ideal location to conduct this research," said Stephen Langdon, Shingle Shanty Director and principal investigator. "Peatlands, like those at Shingle Shanty Preserve, are some of the best protected, most-intact examples of the ecosystem around the world at this latitude: a fact that makes them a critical resource for understanding the biological response to climate change and nitrogen deposition." Langdon has 25 years of experience in the Adirondacks working in conservation from shovel-in-hand trail maintenance to biodiversity research with government and private organizations.

"These are hard-earned data," Langdon said. "The result of weeks of bushwhacking through buggy bogs and thick black spruce forests."

Researchers collected data on vascular plant species composition, its environmental drivers, and tree ages in 50 plots spread throughout a nearly 1,000-acre portion of the Preserve.

"A peatland complex of this size at its southern geographical limit in the eastern U.S. is highly significant ecologically and for its conservation values," said Don Leopold, ESF Distinguished Teaching Professor and research collaborator. Leopold has studied peatlands throughout the U.S. for the past 35 years.

Large peatlands near their southern range limits in eastern North America, like Shingle Shanty Preserve, are particularly important for biodiversity conservation because they are nested in a relatively intact biome and serve as a refuge for many disjunct boreal species at their southern range limits, like the threatened spruce grouse. The biodiversity in these peatlands is threatened by direct modification (e.g., drainage for agriculture) and by invasions of woody species linked to human-caused environmental changes such as climate warming and atmospheric nitrogen deposition.

"This research should serve as a wake-up call, as it provides an early warning that even the most remote and protected boreal peatlands may be lost at their southern range limits, in potentially just over a few decades, due to this ongoing and abundant colonization by temperate tree species -a process likely to be dramatically accelerated by continuously warming climate and fertilizing effects of nitrogen emissions," said Martin Dovciak, ESF Associate Professor and research collaborator. Dovciak has studied forest dynamics in a variety of forested ecosystems in North America and Europe over the last 25 years.

Shingle Shanty Preserve is a 23-square-mile remote tract of land located in the middle of the 6-million-acre Adirondack Park, about nine miles west of Long Lake, NY. Although all of the Preserve is protected by a Forever Wild conservation easement, it is still quite susceptible to large-scale changes due to climate. The Preserve and Research Station was formally established as a 501(c)(3) non-profit biological field research station in 2008 with a mission of supporting scientific research to improve the understanding and management of Adirondack ecosystems. This remote, private Preserve is positioned at the top of the Beaver, Raquette and Moose River watersheds and has 2,000 acres of pristine boreal wetlands, 9 lakes and ponds, 6 miles of headwater streams and over 12,500 acres of northern hardwood and successional northern hardwood forests. Shingle Shanty is home to numerous species that benefit from large-scale protection, including a suite of boreal bird species such as spruce grouse, rusty blackbirds and olive-sided flycatchers, as well as moose, American marten and river otters. The Preserve is located in an unusually cold area for its latitude, in the southernmost area of USDA hardiness Zone 3 in the northeastern United States. Temperature data loggers at the site recorded a growing season of only 19 days as recently as 2014 in the low-lying peatland complex.

Credit: 
SUNY College of Environmental Science and Forestry

Simulations reveal how saltwater behaves in Earth's mantle

image: An artist's depiction of highly compressed saltwater at high temperature.

Image: 
Zhang et al

Scientists estimate that the Earth's mantle holds as much water as all the oceans on the planet, but understanding how this water behaves is difficult. Water in the mantle exists under high pressure and at elevated temperatures, extreme conditions that are challenging to recreate in the laboratory.

That means many of its physical and chemical properties -- relevant to understanding magma production and the Earth's carbon cycle -- aren't fully understood. A better grasp of these conditions would, in turn, help scientists understand the carbon cycle's consequences for climate change.

A team led by Prof. Giulia Galli and Prof. Juan de Pablo from the Pritzker School of Molecular Engineering (PME) at the University of Chicago and Prof. Francois Gygi from the University of California, Davis has created complex computer simulations to better understand the properties of salt in water under mantle conditions.

By coupling simulation techniques developed by the three research groups and using sophisticated codes, the team has created a model of saltwater based on quantum mechanical calculations. Using the model, the researchers discovered key molecular changes relative to ambient conditions that could have implications in understanding the interesting chemistry that lies deep beneath the Earth's surface.

"Our simulations represent the first study of the free energy of salts in water under pressure," Galli said. "That lays the foundation to understand the influence of salt present in water at high pressure and temperature, such as the conditions of the Earth's mantle." The results were published June 16 in the journal Nature Communications.

Important in fluid-rock interactions

Understanding the behavior of water in the mantle is challenging -- not only because it is difficult to measure its properties experimentally, but because the chemistry of water and saltwater differs at such extreme temperatures and pressures (which include temperatures of up to 1,000 K and pressures of up to 11 GPa, 100,000 times greater than at the Earth's surface).

While Galli previously published research on the behavior of water in such conditions, she and her collaborators at the Midwest Integrated Center for Computational Materials (MICCoM) have now extended their simulations to salt in water, managing to predict much more complex properties than previously studied.

The simulations, performed at UChicago's Research Computing Center using optimized codes supported by MICCoM, showed key changes of ion-water and ion-ion interactions at extreme conditions. These ion interactions affect the free energy surface of salt in water.

Specifically, the researchers found that the dissociation of water that occurs at high pressure and temperature influences how the salt interacts with water, and in turn how it is expected to interact with the surfaces of rocks in the Earth's interior.

"This is foundational to understanding chemical reactions at the conditions of the Earth's mantle," de Pablo said.

"Next we hope to use the same simulation techniques for a variety of solutions, conditions, and other salts," Gygi said.

Credit: 
University of Chicago

Subtypes in Alzheimer's disease may be linked to tau protein modifications

A new study reveals a possible biological reason that Alzheimer's Disease (AD) progresses at different rates in different patients.

The study, which was led by Massachusetts General Hospital researchers, focused on tau, a protein found in the neurofibrillary tangles in the brain that are a well-known sign of AD.

Tau can undergo a variety of modifications over the course of the disease, including phosphorylation. The researchers found that the presence of different forms of phosphorylated tau could explain why the disease has such variable effects.

The study's lead author is Simon Dujardin, PhD, post-doctoral research fellow at Mass General.

Physicians have long known that, from patient to patient, there can be substantial variation in the clinical presentation of Alzheimer's Disease, including age of onset, rate of memory decline and other clinical measures.

Also, higher levels of pathological tau in the brain are associated with more severe disease. However, there are few clues as to what causes this variation between patients.

This team studied samples from 32 patients who were diagnosed with what is considered "typical AD" while living, and that diagnosis was confirmed after death.

The age at diagnosis and the rate of disease progression varied markedly among these patients.

More Details About the Study

The researchers also conducted an in-depth characterization of the molecular features of tau proteins within the brains of these patients.

This included measuring the levels of different tau species, tau's capacity to induce aggregation (also called seeding), and the presence of specific post-translational modifications, using biochemical, biophysical and bioactivity assays along with advanced mass spectrometry techniques, in collaboration with teams at Boston Children's Hospital and Merck.

The researchers found "striking" variation in the presence of phosphorylated tau oligomers that associates with greater tau spread, and, importantly, worse disease.

Different specific modifications were associated with different degrees of severity and progression rate.

Notably, these specific molecular characteristics led to variable recognition by antibodies that are currently being considered for the therapeutic targeting of tau proteins in AD and associated diseases.

"We speculate that there are different molecular 'drivers' of Alzheimer's progression, with each patient having their own set of these," says Bradley Hyman, MD, PhD, senior author of the report and director of the Alzheimer's Disease Research Center at the Massachusetts General Institute for Neurodegenerative Disease (MIND). Hyman adds that,

"This is similar to what we see in cancer, where there are several types of lung or breast cancer, for example, and the treatment depends on the particular molecular drivers in the patient's tumor."

Credit: 
Massachusetts General Hospital

Steep NYC traffic toll would reduce gridlock, pollution

image: New research by Cornell University and the City College of New York (CCNY) shows that by enforcing a $20 toll for cars and taxis to enter the central business district of Manhattan, traffic congestion could be reduced by up to 40%, public transit ridership could grow by 6% and greenhouse gas emissions could be reduced by 15%.

Image: 
Cornell University

ITHACA, N.Y. - New research by Cornell University and the City College of New York (CCNY) shows that by enforcing a $20 toll for cars and taxis to enter the central business district of Manhattan, traffic congestion could be reduced by up to 40%, public transit ridership could grow by 6% and greenhouse gas emissions could be reduced by 15%.

"If we charge a high dollar amount of tolls, we can decrease the number of cars and taxis, shrink gridlock, bring down carbon dioxide emissions and reduce particulate matter," said Oliver Gao, professor of engineering and director of Cornell's Center for Transportation, Environment and Community Health. "This is good news for the environment and from a public health perspective."

About 1 million tons of greenhouse gas emissions - mostly carbon dioxide - come from automobile and truck traffic in lower Manhattan annually. In modeling different scenarios using air quality processing software, the researchers determined exhaust emission reductions based on the tolls charged to enter the central business district of Manhattan.

A toll of $5, they found, would result in a reduction of 72,648 tons of greenhouse gas emissions annually. For a $10 toll, the reduction would be 119,097 tons, and a $15 toll would yield a 157,747-ton drop.

A $20 toll would eliminate 40% of midtown traffic and reduce greenhouse gas emissions by 182,065 tons per year.
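Tabulating the four reported scenarios makes the diminishing returns easy to see; the tons-per-toll-dollar column in this short sketch is derived here for illustration and is not a figure from the paper.

```python
# Reported toll levels and annual emissions reductions (from the release).
tolls = [5, 10, 15, 20]                        # dollars per entry
reductions = [72648, 119097, 157747, 182065]   # tons of greenhouse gas avoided per year

for toll, cut in zip(tolls, reductions):
    print(f"${toll:>2} toll: {cut:>7,} tons/yr ({cut / toll:,.0f} tons per toll dollar)")
```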

Entrance tolls would also cut the volume of particulate matter -- soot and other tiny particles measuring less than 2.5 micrometers -- that is linked to poor health and gives Manhattan a hanging haze, according to the paper.

Credit: 
Cornell University

UM researcher helps reveal changes in water of Canadian arctic

image: Crew members deploy equipment onto the ice from a Canadian icebreaker, CCGS Louis S. St. Laurent, in the Arctic Ocean.

Image: 
Photo by Gary Morgan, Canadian Coast Guard

MISSOULA - Melting of Arctic ice due to climate change has exposed more sea surface to an atmosphere with higher concentrations of carbon dioxide. Scientists have long suspected this trend would raise CO2 in Arctic Ocean water.

Now University of Montana researcher Michael DeGrandpre and his patented sensors have helped an international team determine that, indeed, CO2 levels are rising in water across wide swaths of the Arctic Ocean's Canada Basin. However, some areas have exhibited slower increases, suggesting other processes - such as biological uptake of CO2 - have counteracted expected increases.

The work was published this month in the journal Nature Climate Change. The study is online at https://www.nature.com/articles/s41558-020-0784-2.

DeGrandpre is a UM chemistry professor, and in 2015 he and the company he founded, Sunburst Sensors, won two coveted XPRIZE awards for developing inexpensive, durable sensors to better understand ocean acidification. Sunburst Sensor technology also was used in this recent study for a CO2 measurement system placed on board a Canadian icebreaker, the CCGS Louis S. St. Laurent.

DeGrandpre said ocean measurements are taken while the icebreaker is underway, sometimes crashing through ice one to two meters thick. DeGrandpre and UM research associate Cory Beatty have participated in these research cruises since 2012 with support from the National Science Foundation Office of Polar Programs.

"Because of the inaccessibility of the Arctic and the typically harsh work conditions, we really need a world-class icebreaker to access these areas," DeGrandpre said. "It also has given us a high-quality, consistent dataset, which really helped with this latest study. Most Arctic CO2 datasets are from infrequent cruises that do not visit the same locations year to year."

He said the new study combines sporadic data dating back to 1994 with the more-frequent data they have collected since 2012. DeGrandpre said their consistent dataset will only improve, as NSF recently awarded them an $890,000 grant to continue the icebreaker project through 2023.

Credit: 
The University of Montana

New drug pathway linked with tuberous sclerosis

Tuberous sclerosis complex (TSC) is a neurological disorder causing non-cancerous tumors, called cortical tubers, to grow throughout the brain and body, as well as other conditions like epilepsy and autism. While medications are used to treat some of the manifestations of the disease, safe and more effective treatments targeting disease at a fundamental level are lacking.

New research from the laboratory of Mustafa Sahin, MD, PhD, hopes to change that. In a new paper published today in Cell Reports, his research team discovered that a cell signaling pathway called the heat shock protein cascade may offer new drug targets for TSC.

TSC is caused by mutations in either the TSC1 or TSC2 genes, which together make proteins known as the TSC1/2 protein complex. This protein complex acts on an important complex called the mechanistic target of rapamycin complex 1 (mTORC1). When the TSC1/2 protein complex fails to inhibit mTORC1, the overall mTOR pathway goes into hyperdrive, causing abnormal cell growth and other neurological manifestations of the disease.

In this paper, Sahin's team showed that the heat shock protein signaling machinery restored normal mTOR activity.

"Finding an alternative pathway, like the heat shock protein pathway, that corrects faulty mTORC1 signaling, may provide new drug targets and expand therapeutic landscape for TSC," says Sahin, director of the Translational Neuroscience Center and the Translational Research Program at Boston Children's Hospital.

Cilia are membrane extensions of a cell's surface. Some CNS disorders, like brain malformation, autism, and intellectual disability, are associated with mutations in cilia genes and reduced cilia. TSC cells also have fewer cilia.

Sahin's team wanted to know more about the potential relationship between cilia and disrupted mTOR activity in neurons. "We wanted to see the crosstalk between these two and see how it was regulated," says first author Alessia Di Nardo, PhD, research fellow in the Sahin laboratory.

In a mouse model of TSC, they found that the loss of TSC1/2 protein activity in neurons leads to a reduction of cilia. They found the same result studying the giant cells present in cortical tuber brain specimens from TSC patients with epilepsy.

"It is becoming more and more clear that there are a number of neuropsychiatric disorders that have altered cilia, and TSC is among those," says Sahin. "This puts the cilia as a potentially novel and possibly druggable signaling pathway that can be used to target some of the brain manifestation of TSC."

To identify some of those potential targets, the team set up a drug screening assay. Using mutated rat neuronal cells that lacked TSC1/2, they looked for compounds that interfered with the mTORC1 hyperactivity that causes loss of cilia.

"Our top hit was rapamycin, which confirmed that the screen was robust," says Di Nardo. The next hits included two inhibitors of Hsp90: geldanamycin (GA) and 17-allylamino-geldanamycin (17-AGG). 17-AGG restored cilia in the rat neuronal cells.

image: TSC mutant neurons, untreated and treated with 17-AAG, an Hsp90 inhibitor.

"This points to the heat shock response as a regulator at different nodes within the mTORC1 signaling cascade," she adds.

Since 2010, several compounds called rapalogs have been approved by the FDA for TSC. Rapalogs are compounds that act like the drug rapamycin and inhibit mTOR. While rapalogs have some benefit for treating TSC-associated tumors and suppressing seizures in some patients, they are ineffective for neuropsychiatric symptoms. And they can have unwanted side effects.

Sahin's lab has been trying to identify alternative treatments for TSC that might be more effective, and perhaps safer, than the rapalogs.

Hsp90 has been pursued as a drug target in cancer but had not previously been shown to be a target in TSC neurons. The team will now test drugs that inhibit Hsp90 in neuronal mouse models of TSC. Looking forward, they envision using this screening platform to identify other potential drugs for TSC-related neuronal cell dysfunction.

Credit: 
Boston Children's Hospital

Sunnier but riskier

image: New results suggest that conservation efforts to open up overgrown snake habitat do provide more opportunities for pregnant timber rattlesnakes to reach temperatures necessary for embryos to develop, as intended. Here, a mother snake watches over her young.

Image: 
Christopher Camacho

UNIVERSITY PARK, Pa. -- Conservation efforts that open up the canopy of overgrown habitat for threatened timber rattlesnakes--whose venom is used in anticoagulants and other medical treatments--are beneficial to snakes but could come at a cost, according to a new study by researchers at Penn State and the University of Scranton. The researchers confirmed that breeding areas with more open canopies do provide more opportunities for these snakes to reach required body temperatures, but also have riskier predators like hawks and bobcats. The study, which appears in the June issue of the Journal of Herpetology, has important implications for how forest managers might open up snake habitat in the future.

Timber rattlesnakes are a species of conservation concern in Pennsylvania and are considered threatened or endangered in many of the northern states within their range. Like other ectothermic animals, snakes do not produce their own body heat and must move to warmer or cooler areas to regulate their temperature. Timber rattlesnakes typically use sunny, rocky forest clearings to breed, however many of these "gestation sites" are becoming overgrown with vegetation, blocking much-needed sunlight.

"Pregnant timber rattlesnakes typically maintain a temperature 6 to 8 degrees Celsius higher than normal so that their embryos can develop," said Christopher Howey, assistant professor of biology at the University of Scranton and former postdoctoral researcher at Penn State. "If a gestation site doesn't provide enough opportunities for snakes to reach that temperature, a snake might abort its litter, or babies might be born too small or later in the season, which reduces their chances of obtaining an essential first meal before hibernation. We wanted to understand if existing conservation efforts to open up the canopy in gestation sites actually do provide more thermal opportunities for snakes, as intended, and if these efforts impact predation risk."

The research team first quantified thermal opportunities for rattlesnakes in known gestation sites that had open or closed canopies. They logged temperatures within thermal models--essentially a copper tube painted to have similar reflectivity and heat absorbance to a snake--placed in areas where the researchers had seen snakes basking.

"As expected, we found that gestation sites with more open canopies did indeed provide more opportunities for snakes to reach optimal temperatures," said Tracy Langkilde, professor and head of biology at Penn State. "This confirms that conservation efforts to open up the canopy do what they are intended to do. But we also found that this might come at a cost, in the form of more threatening predators."

The research team also placed foam models painted like rattlesnakes at gestation sites and monitored for predators using trail game cameras--remote cameras that are triggered by movement. While there was a similar overall number of predators at sites with open canopies and closed canopies, the more threatening species--red-tailed hawks, fishers, and bobcats--only appeared at open sites.

"Our results suggest that there are tradeoffs to any management strategy and that by opening up a gestation site, we may inadvertently put more predation risk on a species," said Julian Avery, assistant research professor of wildlife ecology and conservation at Penn State. "Our models were slightly less visible to potential predators than actual snakes, so our estimates of predation risk are probably conservative, and the tradeoff may be more pronounced than what we observed."

Less threatening predators--raccoons and black bears--appeared at sites with both open and closed canopies.

"As far as we know, this is the first time that a black bear has been observed preying on a rattlesnake, or at least a model," said Howey. "Until now, we always thought that black bears avoided rattlesnakes, but we observed one bear attack two models and bite into a third."

The team suggests that forest managers should balance canopy cover and predation risk during future conservation efforts, for example by selectively removing trees that block direct sunlight but that do not considerably open up the canopy.

Improving conservation efforts at rattlesnake gestation sites is particularly important because, as far as the researchers know, snakes return to the same sites year after year to breed. If a gestation site decreases in quality, snakes might leave it to find a new area, but it is unclear how often such searches succeed, and moving to new sites could increase contact with humans.

The researchers are currently radio-tracking actual snakes and directly manipulating the canopy cover to better understand how snakes behave in response to predators at sites with open vs. closed canopies.

"Timber rattlesnakes are an important part of the ecosystem, and where you have more rattlesnakes, you tend to have lower occurrences of Lyme disease because the snakes are eating things like chipmunks and mice which are the main vectors for the disease," said Howey. "Rattlesnake venom is also used in anticoagulants, in blood pressure medicine, and to treat breast cancer. Our research will help us refine how we conserve these important animals."

Credit: 
Penn State

Scientists develop new tool to design better fusion devices

image: PPPL physicist Michael Cole

Image: 
Elle Starkman / PPPL Office of Communications

One way that scientists seek to bring to Earth the fusion process that powers the sun and stars is trapping hot, charged plasma gas within a twisting magnetic coil device shaped like a breakfast cruller. But the device, called a stellarator, must be precisely engineered to prevent heat from escaping the plasma core where it stokes the fusion reactions. Now, researchers at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL) have demonstrated that an advanced computer code could help design stellarators that confine the essential heat more effectively.

The code, called XGC-S, opens new doors in stellarator research. "The main result of our research is that we can use the code to simulate both the early, or linear, and turbulent plasma behavior in stellarators," said PPPL physicist Michael Cole, lead author of the paper reporting the results in Physics of Plasmas. "This means that we can start to determine which stellarator shape contains heat best and most efficiently maintains conditions for fusion."

Fusion combines light elements in the form of plasma -- the hot, charged state of matter composed of free electrons and atomic nuclei -- and generates massive amounts of energy in the sun and stars. Scientists aim to replicate fusion in devices on Earth for a virtually inexhaustible supply of safe and clean power to generate electricity.

The PPPL scientists simulated the behavior of plasma inside fusion machines that look like a doughnut but with pinches and deformations that make the device more efficient, a kind of shape known as quasi-axisymmetric. The researchers used an updated version of XGC, a state-of-the-art code developed at PPPL for modeling turbulence in doughnut-shaped fusion facilities called tokamaks, which have a simpler geometry. The modifications by Cole and his colleagues allowed the new XGC-S code to also model plasmas in the geometrically more complicated stellarators.

The simulations showed that a type of disturbance limited to a small area can become complex and expand to fill a larger space within the plasma. The results showed that XGC-S could simulate this type of stellarator plasma more accurately than was previously possible.

"I think this is the beginning of a really important development in the study of turbulence in stellarators," said David Gates, head of the Department of Advanced Projects at PPPL. "It opens up a big window for getting new results."

The findings demonstrate the successful modification of the XGC code to simulate turbulence in stellarators. The code can calculate the turbulence in stellarators all the way from the plasma core to the edge, providing a more complete picture of the plasma's behavior.

"Turbulence is one of the primary mechanisms causing heat to leak out of fusion plasmas," Cole said. "Because stellarators can be built in a greater variety of shapes than tokamaks, we might be able to find shapes that control turbulence better than tokamaks do. Searching for them by building lots of big experiments is too expensive, so we need big simulations to search for them virtually."

The researchers plan to modify XGC-S further to produce an even clearer view of how turbulence causes heat leakage. The more complete a picture, the closer scientists will be to simulating stellarator experiments in the virtual realm. "Once you have an accurate code and a powerful computer, changing the stellarator design you are simulating is easy," Cole said.

Credit: 
DOE/Princeton Plasma Physics Laboratory

Ideologically extreme Facebook users spread the most fake news

Facebook is a more fertile breeding ground for fake news than Twitter, and those on the far ends of the liberal-conservative spectrum are most likely to share it, according to new CU Boulder research.

The paper, in the journal Human Communication Research, also found that people who lack trust in conventional media, and in one another, post misinformation more often.

"We found that certain types of people are disproportionally responsible for sharing the false, misleading, and hyper-partisan information on social media," said lead author Toby Hopp, an assistant professor in the Department of Advertising, Public Relations and Media Design. "If we can identify those types of users, maybe we can get a better grasp of why people do this and design interventions to stem the transfer of this harmful information."

The paper comes at a time when, amid a global pandemic and contentious run-up to a presidential election, social media companies are grappling with how to curb so-called fake news.

In the past month, Twitter, Facebook and Google began labeling misleading, disputed or unverified posts about coronavirus, vowing to delete those that threaten public health.

Twitter has also slapped labels on President Donald Trump's tweets, flagging them as inaccurate or as glorifying violence. Trump responded by accusing Twitter of silencing conservative speech. Meanwhile, Facebook employees staged a virtual walk-out, saying their company wasn't doing enough to address suspect posts.

"A decade or two ago, traditional news organizations played a key gatekeeping role in determining what was true or not true," said Hopp. "Now, with the proliferation of social media and with traditional news organizations under financial distress, there is a sea change occurring in the way that information flows through society."

Previous research has shown that older adults and those who identify as Republican are more likely to share fake news. But Hopp wanted to go beyond demographic or political labels.

"We wanted to look at more nuanced factors indicating how these people see the world around them," Hopp said.

To do so, his team recruited 783 regular Facebook and Twitter users over the age of 18 and, with their permission, collected and analyzed all of their posts for the period between August 1, 2015, and June 6, 2017 (before, during, and after the 2016 election). Participants also took a lengthy survey to assess their ideological conservatism vs. liberalism and identify how much they trusted friends, family and community members, and mainstream media.

The researchers then looked at who shared content from 106 websites identified as fake news or "countermedia" sites by watchdog groups or legacy news organizations like NPR or U.S. News & World Report.

"Despite the fact that we tend to call it 'fake' news, a lot of this stuff is not completely false," said Hopp, who prefers the term "countermedia." "Rather, it is grossly biased, misleading and hyper-partisan, omitting important information."

The good news: 71% of Facebook users and 95% of Twitter users shared no countermedia posts. The bad news: 1,152 pieces of fake news were shared via Facebook, with a single user responsible for 171. On Twitter, 128 pieces of countermedia were shared.

"We found that Facebook is the central conduit for the transfer of fake news," said Hopp.

In the Facebook sample, those self-identified as extremely conservative--7 on a scale of 1 to 7--accounted for the most fake news shared, at 26%. In the Twitter sample, 32% of fake news shares came from those who scored a 7.

But those who scored a 1, identifying as extremely liberal, also shared fake news frequently, accounting for 17.5% of shares on Facebook and 16.4% on Twitter.

In all, about one-fifth of users at the far ideological extremes were responsible for sharing nearly half of the fake news on the two platforms.
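Figures like these come from aggregating shared countermedia posts by each user's self-reported 1-to-7 ideology score. A minimal sketch of that bookkeeping, using fabricated records rather than the study's data, might look like this:

```python
# Aggregate countermedia shares by ideology score (fabricated records).
from collections import defaultdict

users = [  # (ideology score 1-7, countermedia posts shared)
    (7, 12), (7, 5), (4, 0), (1, 6), (2, 1), (6, 3), (4, 1),
]

shares_by_score = defaultdict(int)
for score, shared in users:
    shares_by_score[score] += shared

total = sum(shares_by_score.values())
for score in sorted(shares_by_score):
    print(f"score {score}: {100 * shares_by_score[score] / total:.1f}% of shares")
```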

"It is not just Republicans or just Democrats, but rather, people who are--left or right--more ideologically extreme," said Hopp.

Those in the ideological middle and those with high levels of media and social trust were--generally speaking--the least likely to share fake news.

"People with high levels of social trust are more likely to compile online social networks comprised of diverse individuals, and this can hamper the spread of fake news," said Hopp, noting that when a fellow user calls out a post as inaccurate, others may be less likely to share it. "If someone posts something that is incorrect, false or misleading, I don't think it hurts for individual users to provide a factual rebuttal."

The authors note that the sample is not necessarily representative of the general population of all social media users nationwide, and more research is necessary.

With several other papers in the works, the authors, including Assistant Professor of Journalism Pat Ferucci and Assistant Professor of Advertising Chris Vargo, hope to provide insight to lawmakers, companies and individual users hoping to stem the fake news tide.

"We can disagree, but when we have fundamentally different views about what information is true and what is not, democracy becomes very difficult to maintain," said Hopp.

Credit: 
University of Colorado at Boulder

Could drones deliver packages more efficiently by hopping on the bus?

video: Aerial package delivery could help reduce traffic congestion associated with e-commerce truck delivery, but the limited range and payload capacity of drones impose practical limits on that option. Stanford researchers have created an algorithm that can direct drones to hitch rides on buses to extend their range, and send them aloft for the last hop to their destination. Simulations on bus systems in San Francisco and Washington, D.C. show promise.

Image: 
Video by Shushman Choudhury

One-click purchases and instant delivery have helped fuel the growth of e-commerce, but this convenience has come at the cost of increased traffic congestion, longer commute times, and strained urban communities. A 2018 report from Texas A&M University found that delivery trucks represent just 7% of U.S. traffic but account for 28% of the nation's congestion. Delivery drones could help take some of the load off the pavement, and aerial delivery systems already operate in some countries. But even the best drones have limited payload capacity and flight range. What if we could combine the last-mile flexibility of drones with the long-haul capacity of ground-based vehicles to make e-commerce more traffic-friendly?

In a recent presentation at the IEEE International Conference on Robotics and Automation (ICRA), our Stanford research team unveiled a framework for routing a large fleet of delivery drones over ground transit networks. In our setup, the drones were able to hitch rides on public transit vehicles to save energy and increase flight range. Our algorithm decided which drones should make which deliveries, one package at a time, in what order - and when to fly versus hitching a ride.

In our experiments, we ran simulations over two real-world public bus networks and corresponding delivery areas in San Francisco (150 sq. km) and the Washington, D.C., Metropolitan Area (400 sq. km). We found that the drones could quadruple their effective flight range by strategically hitching rides on transit vehicles. We also found that the "makespan" of any batch of deliveries - the longest it took for any drone in the team to deliver one of the packages in the batch - was under an hour for San Francisco and under two hours for the Washington, D.C., area.
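The published framework allocates an entire fleet, but its core routing step can be caricatured as a shortest-path search over a graph with two edge types: flight legs that drain the battery and transit legs that cost time but no charge. The network, travel times, and battery budget below are invented for illustration; this is a sketch of the idea, not the authors' algorithm.

```python
# Fastest delivery over mixed fly/ride edges, with battery as state.
import heapq

# node -> list of (neighbor, minutes, battery_cost); cost 0 = bus leg
EDGES = {
    "depot":    [("stop_a", 4, 2)],
    "stop_a":   [("stop_b", 10, 0), ("customer", 9, 6)],  # ride vs. direct flight
    "stop_b":   [("customer", 3, 2)],
    "customer": [],
}

def fastest(start, goal, battery):
    best = {}
    queue = [(0, start, battery)]          # (elapsed minutes, node, charge left)
    while queue:
        t, node, charge = heapq.heappop(queue)
        if node == goal:
            return t
        if best.get((node, charge), float("inf")) <= t:
            continue                       # already reached this state faster
        best[(node, charge)] = t
        for nxt, minutes, drain in EDGES[node]:
            if drain <= charge:
                heapq.heappush(queue, (t + minutes, nxt, charge - drain))
    return None

print(fastest("depot", "customer", battery=5))  # too little charge to fly direct: 17 min via bus
print(fastest("depot", "customer", battery=8))  # enough charge to fly direct: 13 min
```

With a small battery budget the search routes the drone onto the bus; with a larger one it flies direct, which is the energy-versus-time tradeoff the fleet-level algorithm manages at scale.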

The framework was created by the Stanford Intelligent Systems Laboratory, led by Mykel Kochenderfer, and the Autonomous Systems Laboratory, led by Marco Pavone. Kochenderfer and Pavone are associate professors in the Department of Aeronautics and Astronautics.

"Delivery drones are the future," Kochenderfer said. "By using ground transit judiciously, drones have the potential to provide safe, clean and cost-effective transport."

Credit: 
Stanford University School of Engineering

Faulty brain processing of new information underlies psychotic delusions, finds new research

Problems in how the brain recognizes and processes novel information lie at the root of psychosis, researchers from the University of Cambridge and King's College London have found. Their discovery that defective brain signals in patients with psychosis could be altered with medication paves the way for new treatments for the disease.

The results, published today in the journal Molecular Psychiatry, describe how a chemical messenger in the brain called dopamine 'tunes' the brain to the level of novelty in a situation, and helps us to respond appropriately - by either updating our model of reality or discarding the information as unimportant.

The researchers found that a brain region called the superior frontal cortex is important for signaling the correct degree of learning required, depending on the novelty of a situation. Patients with psychosis have faulty brain activation in this region during learning, which could lead them to believe things that are not real.

"Novelty and uncertainty signals in the brain are very important for learning and forming beliefs. When these signals are faulty, they can lead people to form mistaken beliefs, which in time can become delusions," said Dr Graham Murray from the University of Cambridge's Department of Psychiatry, who jointly led the research.

In novel situations, our brain compares what we know with the new information it receives, and the difference between these is called the 'prediction error'. The brain updates beliefs according to the size of this prediction error: large errors signal that the brain's model of the world is inaccurate, thereby increasing the amount that is learned from new information.
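In computational terms this is a delta-rule update whose learning rate grows with uncertainty or novelty. The sketch below shows one conventional way to write it; the scaling function and numbers are illustrative, not the model fitted in the paper.

```python
# Uncertainty-weighted belief updating (illustrative form).
def update_belief(belief, observation, uncertainty):
    prediction_error = observation - belief            # mismatch with expectation
    learning_rate = uncertainty / (uncertainty + 1.0)  # learn more when uncertain
    return belief + learning_rate * prediction_error

belief = 0.0
for observation, uncertainty in [(1.0, 2.0), (1.2, 0.5), (0.9, 0.1)]:
    belief = update_belief(belief, observation, uncertainty)
    print(f"belief -> {belief:.2f}")
```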

Psychosis is a condition where people have difficulty distinguishing between what is real and what is not. It involves abnormalities in a brain chemical messenger called dopamine, but how this relates to patient experiences of delusions and hallucinations has until now remained a mystery.

The new study involved 20 patients who were already unwell with psychosis, 24 patients with milder symptoms that put them at risk of the condition, and 89 healthy volunteers.

Participants were placed in a functional MRI scanner and asked to play a computer game. This allowed the researchers to record activity in the participants' brains as they engaged in situations with a variety of potential outcomes.

In a second part of the study, 59 of the healthy volunteers had their brains scanned after taking medications that act on the signaling of dopamine in the brain. These medications changed the way that the superior frontal cortex prediction error responses were tuned to the degree of uncertainty.

"Normally, the activity of the superior frontal cortex is finely tuned to signal the level of uncertainty during learning. But by altering dopamine signaling with medication, we can change the reactivity of this region. When we integrate this finding with the results from patients with psychosis, it points to new treatment development pathways," said Dr Kelly Diederen from the Institute of Psychiatry, Psychology & Neuroscience at King's College London, who jointly led the study with Dr Murray.

In addition to studying brain activation, the researchers developed mathematical models of the choices made by participants in the computer game, to better understand the strategies of how people learn. They found that patients with psychosis did not take into account the level of uncertainty during learning, which may be a good strategy in some circumstances but could lead to problems in others. Learning problems were related to alterations in brain activation in the superior frontal cortex, with patients with severe symptoms of psychosis showing more significant alterations.

"While these kind of abnormal brain responses were predicted several years ago, this is the first time the changes have actually been shown to be present. The results give us confidence that our theoretical models of psychosis are correct," said Dr Joost Haarsma from University College London, first author of the study.

Credit: 
University of Cambridge