Tech

California's cap-and-trade air quality benefits mostly go out of state

image: Smokestacks release emissions, including greenhouse gases, into the atmosphere.

Image: 
Tony Webster, Wikimedia Commons (CC BY-SA 2.0)

During the first three years of California's five-year-old cap-and-trade program, the bulk of greenhouse gas (GHG) reductions occurred out of state, thus forgoing in-state reductions in harmful co-pollutants, such as particulate matter, that could improve air quality for state residents, according to a new study led by San Francisco State University and University of California, Berkeley researchers.

The study assessed how patterns of greenhouse gases and associated air pollutants changed over time and with respect to environmental equity, comparing 2011 and 2012, before the start of California's cap-and-trade program, with 2013 through 2015, after carbon trading began.

California is a world leader in adopting ambitious greenhouse gas reduction targets and boasts the world's fourth-largest carbon-trading program.

Under cap-and-trade, regulated industries must hold tradable emissions permits or "allowances" equal to the amount of GHGs they emit. The total number of allowances in circulation among regulated industries is based on a cap that is lowered slightly each year. Companies can also offset their GHG emissions by purchasing credits through forestry or agriculture projects, which can be located in other states. Between 2013 and 2015, 75 percent of the offset credits purchased by regulated companies came from projects located outside of California.
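
As a rough illustration of the allowance arithmetic described above, the sketch below walks through a declining cap and an offset limit. Every figure in it (the 3 percent annual decline, the 8 percent offset limit, the facility's emissions) is a hypothetical stand-in, not a number from the program or the study.

```python
# Minimal sketch of cap-and-trade compliance accounting; the cap, decline rate,
# offset limit and facility figures are hypothetical illustrations, not numbers
# from California's program or from the study.
ANNUAL_CAP_DECLINE = 0.03     # assumed: cap tightened ~3% per year
OFFSET_USE_LIMIT = 0.08       # assumed: at most 8% of compliance met with offsets

def allowances_to_surrender(emissions, offset_credits):
    """Allowances a facility must hold after applying its usable offset credits."""
    usable_offsets = min(offset_credits, OFFSET_USE_LIMIT * emissions)
    return emissions - usable_offsets

cap = 100.0  # million allowances in circulation in year 1 (hypothetical)
for year in range(1, 4):
    print(f"Year {year}: cap = {cap:.1f} million allowances")
    cap *= 1 - ANNUAL_CAP_DECLINE

# A facility emitting 1.2 million tonnes with 0.2 million tonnes of offset credits
# can only apply 8% of its emissions as offsets under the assumed limit.
print(allowances_to_surrender(emissions=1.2, offset_credits=0.2))  # -> 1.104
```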

In addition, slightly more than half of the regulated facilities (52 percent) reported increases in annual average in-state GHG emissions after the initiation of the cap-and-trade program. The cement, electricity generation and oil and gas production industries saw particularly large increases in their in-state GHG emissions.

The study also found that the neighborhoods that experienced increased emissions from regulated facilities nearby had higher proportions of people of color and low-income, less educated and non-English speaking residents. This is because those communities are more likely to have several regulated facilities located nearby. However, California law requires 25 percent of the revenue from the state's cap-and-trade program to be invested in greenhouse gas reduction measures that benefit disadvantaged communities.

"Good climate policy is good for environmental justice," said Lara Cushing, the study's lead author and an assistant professor of health education at San Francisco State. "What we've seen from our study is that so far, California's cap-and-trade program hasn't really delivered on that potential."

The impact on these communities could be severe and long-lasting, the authors said. Many other air pollutants -- including particulate matter, nitrogen oxides, sulfur oxides and volatile organic compounds -- are associated with carbon dioxide emissions, and these "co-pollutants" are linked to respiratory and cardiovascular disease.

"California's climate change law requires consideration of environmental equity in its implementation, and this is the first study to look at temporal and equity trends in greenhouse gas and co-pollutant emissions since the implementation of the state's cap-and-trade program," said senior author Rachel Morello-Frosch, a UC Berkeley professor of public health and of environmental science, policy and management. "The state could do more to ensure that residents receive the short-term health benefits from improved air quality by incentivizing deeper greenhouse gas reductions in CA among regulated facilities."

Despite the study's findings, Cushing said that California is to be commended for committing to ambitious climate goals and that things could improve moving forward. For example, the state's urban greening program funds urban forests and greenways in many disadvantaged communities, and because many of those projects have not yet matured, their air quality benefits may not yet be obvious. Additional measures may be needed to ensure that California's cap-and-trade program truly benefits the state's disadvantaged communities.

"Placing geographic restrictions on trading and limiting the amount of pollution 'offset' credits that companies can use to comply with the program could help incentivize local emissions reductions," said Cushing. "The communities that live on the fence line near these industries saw hope in the [cap-and-trade program] that emissions might be reduced. But so far, we haven't seen the kind of environmental equity benefits people were hoping for."

Credit: 
San Francisco State University

LED lights reduce seabird death toll from fishing by 85 percent, research shows

image: Guanay cormorant stuck in a net.

Image: 
Andrew F Johnson

Illuminating fishing nets with low-cost lights could reduce the toll they take on seabirds and other marine wildlife by more than 85 per cent, new research has shown.

A team of international researchers, led by Dr Jeffrey Mangel from the University of Exeter, has shown the number of birds caught in gillnets can be drastically reduced by attaching green battery-powered light-emitting diodes (LEDs).

For the study, the researchers compared 114 pairs of gillnets - which are anchored in fixed positions at sea and designed to snare fish by the gills - in fishing waters off the coast of Peru.

They discovered that the nets fitted with the LEDs caught 85 per cent fewer guanay cormorants - a native diving bird that commonly becomes entangled in nets - compared with those without lights.

Coupled with the team's previous research, which showed that LED lighting also reduced the number of sea turtles caught in fishing nets by 64 per cent, the findings suggest the lights offer a cheap, reliable and durable way to dramatically reduce the capture and death of birds and turtles without reducing the intended catch of fish.

The research is published in the Royal Society journal Open Science on Wednesday, July 11 2018.

Lead author Dr Mangel, from the Centre for Ecology and Conservation at the University's Penryn Campus, said: "We are very encouraged by the results from this study.

"It shows us that we may be able to find cost-effective ways to reduce bycatch of multiple taxa of protected species, and do so while still making it possible for fishers to earn a livelihood."

Peru's gillnet fleet comprises the largest component of the nation's small-scale fleet and is conservatively estimated to set 100,000 km of net per year, in which thousands of turtles and seabirds die unintentionally as "bycatch".

The innovative study, carried out in Sechura Bay in northern Peru, saw the LED lights attached at regular intervals to commercial fishing gillnets, which are anchored to the bottom of the water. The nets are left in situ from late afternoon until sunrise, when the fishermen collect their haul.

The researchers used 114 pairs of nets, each typically around 500 metres in length. In each pair, one net was illuminated with light-emitting diodes (LEDs) placed every ten metres along the gillnet floatline. The other net in the pair was the control and not illuminated.

The control nets caught 39 cormorants, while the illuminated nets caught just six.
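
Those counts are consistent with the headline figure; here is a quick check of the percentage reduction, using only the numbers reported above:

```python
# Quick arithmetic check of the reported bycatch reduction, using the counts above.
control_catch = 39      # cormorants caught in the unlit control nets
illuminated_catch = 6   # cormorants caught in the LED-lit nets

reduction = (control_catch - illuminated_catch) / control_catch
print(f"Reduction in cormorant bycatch: {reduction:.0%}")  # prints ~85%
```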

A previous study, using the same LED technology, showed that the lights also reduced the number of sea turtles caught in gillnets. Multiple populations of sea turtle species use Peruvian coastal waters as foraging grounds, including green, olive ridley, hawksbill, loggerhead and leatherback turtles.

For that study, the researchers found that the control nets caught 125 green turtles while illuminated nets caught 62. The target catch of guitarfish was unaffected by the net illumination. The team is now working with larger fisheries in Peru and with different coloured lights to see whether the results can be repeated and applied to other, more critically endangered species.

Professor Brendan Godley, who is an author of the study and Marine Strategy Lead for the University of Exeter, said: "It is satisfying to see the work coming from our Exeter Marine PhDs leading to such positive impact in the world. We need to find ways for coastal peoples to fish with the least impact on the rest of the biodiversity in their seas."

Credit: 
University of Exeter

Ecology and AI

It's poised to transform fields from earthquake prediction to cancer detection to self-driving cars, and now scientists are unleashing the power of deep learning on a new field - ecology.

A team of researchers from Harvard, Auburn University, the University of Wyoming, the University of Oxford and the University of Minnesota demonstrated that the artificial intelligence technique can be used to identify animal images captured by motion-sensing cameras.

Using more than three million photographs from the citizen science project Snapshot Serengeti, researchers trained the system to automatically identify, count and describe animals in their natural habitats. Results showed the system was able to automate the process for up to 99.3 percent of images as accurately as human volunteers. The study is described in a June 5 paper published in the Proceedings of the National Academy of Sciences.

Snapshot Serengeti has deployed a large number of "camera traps," or motion-sensitive cameras, in Tanzania that collect millions of images of animals in their natural habitat, such as lions, leopards, cheetahs and elephants.

While the images can offer insight into a host of questions, from how carnivore species co-exist to predator-prey relationships, they are only useful once they have been converted into data that can be processed.

For years, the best method for extracting such information was to ask crowdsourced teams of human volunteers to label each image manually - a laborious and time-consuming process.

"Not only does the artificial intelligence system tell you which of 48 different species of animal is present, it also tells you how many there are and what they are doing. It will tell you if they are eating, sleeping, if babies are present, etc," said Margaret Kosmala, one of the leaders of Snapshot Serengeti and a co-author of the study. "We estimate that the deep learning technology pipeline we describe would save more than 8 years of human labeling effort for each additional 3 million images. That is a lot of valuable volunteer time that can be redeployed to help other projects."

"While there are a number of projects that rely on images captured by camera traps to understand the natural world, few are able to recruit the large numbers of volunteers needed to extract useful data," said Snapshot Serengeti founder Ali Swanson. "The result is that potentially important knowledge remains locked away, out of the reach of scientists.

"Although projects are increasingly turning to citizen science for image classification, we're starting to see it take longer and longer to label each batch of images as the demand for volunteers grows," Swanson added. "We believe deep learning will be key in alleviating the bottleneck for camera trap projects: the effort of converting images into usable data."

A form of computational intelligence loosely inspired by how animal brains see and understand the world, deep learning relies on training neural networks using vast amounts of data. For that process to work, though, the training data must be properly labeled.
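
To make "training on labeled data" concrete, here is a toy supervised-learning sketch in PyTorch. The tiny network and the synthetic images stand in for the deep architectures and the 3.2 million labeled camera-trap photographs actually used; nothing here reflects the study's real pipeline.

```python
# Toy sketch of supervised image classification; synthetic data and a tiny network
# stand in for the camera-trap photographs and the deep models used in the study.
import torch
import torch.nn as nn

NUM_SPECIES = 48  # the study's system distinguishes 48 species

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, NUM_SPECIES),   # assumes 64x64 input images
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic stand-ins: a batch of 64x64 RGB "images" with integer species labels.
images = torch.randn(32, 3, 64, 64)
labels = torch.randint(0, NUM_SPECIES, (32,))

for step in range(5):                 # a real run iterates over millions of images
    logits = model(images)
    loss = loss_fn(logits, labels)    # penalty for predicting the wrong species
    optimizer.zero_grad()
    loss.backward()                   # nudge the weights toward the correct labels
    optimizer.step()
    print(step, round(loss.item(), 4))
```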

"When I told (senior author) Jeff Clune we had 3.2 million labeled images, he stopped in his tracks," said Craig Packer, who heads the Snapshot Serengeti project. "Our citizen scientists have done phenomenal work, but we needed to speed up the process to handle ever greater amounts of data. The deep learning algorithm is amazing and far surpassed my expectations. This is a game changer for wildlife ecology."

Going forward, first author Mohammad Sadegh Norouzzadeh believes deep learning algorithms will continue to improve and hopes to see similar systems applied to other ecological data sets.

"Here, we wanted to demonstrate the value of the technology to the wildlife ecology community, but we expect that as more people research how to improve deep learning for this application and publish their datasets, the sky's the limit," he said. "It is exciting to think of all the different ways this technology can help with our important scientific and conservation missions."

"This technology lets us accurately, unobtrusively, and inexpensively collect wildlife data, which could help catalyze the transformation of many fields of ecology, wildlife biology, zoology, conservation biology, and animal behavior into 'big data' sciences," said Jeff Clune, the Harris Associate Professor at the University of Wyoming and a Senior Research Manager at Uber's Artificial Intelligence Labs, and the senior author on the paper. "This will dramatically improve our ability to both study and conserve wildlife and precious ecosystems."

Credit: 
Harvard University

IBM-EPFL-NJIT team demonstrates novel synaptic architecture for brain-inspired computing

Two New Jersey Institute of Technology (NJIT) researchers, working with collaborators from the IBM Research Zurich Laboratory and the École Polytechnique Fédérale de Lausanne, have demonstrated a novel synaptic architecture that could lead to a new class of information processing systems inspired by the brain.

The findings are an important step toward building more energy-efficient computing systems that also are capable of learning and adaptation in the real world. They were published last week in a paper in the journal Nature Communications.

The researchers, Bipin Rajendran, an associate professor of electrical and computer engineering, and S. R. Nandakumar, a graduate student in electrical engineering, have been developing brain-inspired computing systems that could be used for a wide range of big data applications.

Over the past few years, deep learning algorithms have proven to be highly successful in solving complex cognitive tasks such as controlling self-driving cars and language understanding. At the heart of these algorithms are artificial neural networks - mathematical models of the neurons and synapses of the brain - that are fed huge amounts of data so that the synaptic strengths are autonomously adjusted to learn the intrinsic features and hidden correlations in these data streams.

However, the implementation of these brain-inspired algorithms on conventional computers is highly inefficient, consuming huge amounts of power and time. This has prompted engineers to search for new materials and devices to build special-purpose computers that can incorporate the algorithms. Nanoscale memristive devices, electrical components whose conductivity depends approximately on prior signaling activity, can be used to represent the synaptic strength between the neurons in artificial neural networks.

While memristive devices could potentially lead to faster and more power-efficient computing systems, they are also plagued by several reliability issues that are common to nanoscale devices. Their efficiency stems from their ability to be programmed in an analog manner to store multiple bits of information; however, their electrical conductivities vary in a non-deterministic and non-linear fashion.

In the experiment, the team showed how multiple nanoscale memristive devices exhibiting these characteristics could nonetheless be configured to efficiently implement artificial intelligence algorithms such as deep learning. Prototype chips from IBM containing more than one million nanoscale phase-change memristive devices were used to implement a neural network for the detection of hidden patterns and correlations in time-varying signals.

"In this work, we proposed and experimentally demonstrated a scheme to obtain high learning efficiencies with nanoscale memristive devices for implementing learning algorithms," Nandakumar says. "The central idea in our demonstration was to use several memristive devices in parallel to represent the strength of a synapse of a neural network, but only chose one of them to be updated at each step based on the neuronal activity."

Nandakumar, who joined NJIT in 2016, is a recipient of the prestigious IBM Ph.D. fellowship and has been working with the Memory & Cognitive Technologies group at IBM Research - Zurich on this problem for the past year.

"The goal of our research is to build novel computing systems that are inspired by the architecture of the brain," notes Rajendran, his research advisor at NJIT. "While there have been significant successes in the past decade in using machine learning algorithms for a wide variety of complex cognitive tasks, their use in mobile devices and sensors embedded in the real world requires new technological solutions with substantially lower energy and higher efficiency. While significant challenges remain, our team has now shown that nanoscale memristive devices, albeit being noisy and non-ideal, can be used for such applications in a straightforward manner."

Credit: 
New Jersey Institute of Technology

Teen crash risk highest during first three months after getting driver's license

Teenage drivers are eight times more likely to be involved in a collision or near miss during the first three months after getting a driver's license, compared to the previous three months on a learner's permit, suggests a study led by the National Institutes of Health. Teens are also four times more likely to engage in risky behaviors, such as rapid acceleration, sudden braking and hard turns, during this period. In contrast, teens on a learner's permit drove more safely, with their crash/near crash and risky driving rates similar to those of adults. The study appears in the Journal of Adolescent Health.

"Given the abrupt increase in driving risks when teenagers start to drive independently, our findings suggest that they may benefit from a more gradual decrease in adult supervision during the first few months of driving alone," said Bruce Simons-Morton, Ed.D., M.P.H., senior investigator at NIH's Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD) and one of the authors of the study.

The study is one of the first to follow the same individuals over time, from the beginning of the learner period through the first year of independent driving, while continuously collecting information using software and cameras installed in the participants' vehicles.

The study also evaluated parents' driving in the same vehicles, over the same time, on similar roads and under similar driving conditions as their children. Near-crash events were those requiring a last-moment maneuver to avoid a crash, while crashes were physical contact between the driver's vehicle and another object.

The study enrolled 90 teenagers and 131 parents in Virginia, and the data collection system was developed by the Virginia Tech Transportation Institute, Blacksburg.

Overall, the study found that the crash/near crash rate for teenagers did not decline over the first year of independent driving, while the rate of risky driving modestly declined. According to the researchers, if teenagers were learning from their experiences, one would expect that the driving risks would consistently decline over time.

Teenagers also had a higher risky driving rate under favorable conditions--daytime or dry roads--compared to less favorable conditions--nighttime or wet roads. This finding implies that teenagers may be more careful and less inclined to take risks during unfavorable driving conditions.

When comparing male and female teens, the study team found that the risky driving rate did not differ by gender during the learning period. However, when teenagers entered independent driving stages, males had a higher risky driving rate. This rate did not consistently decrease over time for males but did decrease for females. The crash/near crash rate was similar across genders and driving periods.

"During the learner's permit period, parents are present, so there are some skills that teenagers cannot learn until they are on their own," said Pnina Gershon, Ph.D., the study's lead author. "We need a better understanding of how to help teenagers learn safe driving skills when parents or other adults are not present."

The researchers aim to identify factors that may improve safety and reduce specific driving risks. They plan to address whether the duration and quality of practice driving can predict future outcomes in the independent driving period. They also will explore how passengers influence driving risk during learner and independent driving periods.

Credit: 
NIH/Eunice Kennedy Shriver National Institute of Child Health and Human Development

Asian residents are exposed to 9 times more air pollution than Americans or Europeans

According to the World Health Organisation, about 88 percent of premature deaths in low- and middle-income countries in Asia can be attributed to air pollution. The number of road vehicles in Beijing increased from 1.5 million in 2000 to more than 5 million in 2014 and the number in Delhi, India, is expected to increase from 4.7 million in 2010 to 25.6 million by 2030.

In a review published by the journal Atmospheric Environment, Surrey's Global Centre for Clean Air Research (GCARE) looked at studies of pollution exposure and concentration levels in Asian transport microenvironments (walking, driving, cycling, motorbike riding and bus riding). Researchers focused on the levels of fine particles, black carbon produced by carbon-rich fuels such as gasoline and diesel fuel, and ultrafine particles (UFP) small enough to travel deep into a citizen's lungs.

The review found evidence that pedestrians walking along busy roadsides in Asian cities are exposed to up to 1.6 times higher fine particle levels than people in European and American cities. Car drivers in Asia are exposed to up to nine times more pollution than Europeans and Americans, while black carbon levels were seven times higher for Asian pedestrians than Americans. The study reported that in Hong Kong, UFP levels were up to four times higher than in cities in Europe. In New Delhi, average black carbon concentration in cars was up to five times higher compared to Europe or North America.

Professor Prashant Kumar, lead author of the study and the Director of the Global Centre for Clean Air Research at the University of Surrey, said: "Care should be taken in directly comparing and contrasting the results of different studies due to varied amounts of information available on personal exposure in studied regions. However, there is compelling evidence that people travelling in urban areas in Asian cities are being exposed to a significantly higher level of air pollution.

"A noticeable gap still exists in studies that focus on the Asian population living in rural, semi-rural or smaller cities, where pollution exposure could be as harmful as in urban areas owing to several unattended sources. There were rare data on cyclist and motorcyclist exposure despite substantial use in Asian cities; studies were limited for other transport modes too. It is important that this knowledge gap is filled if we are to get a complete picture of the pollution exposure challenge that the Asian population faces."

Professor Chris Frey of North Carolina State University, co-author of the study, said: "There are increasing efforts in Asia to install properly designed and calibrated portable monitoring systems to measure actual exposures, using the data to better understand why high exposures occur and how to prevent them. These measurements of personal exposures will help individuals, businesses, and governments to develop and implement strategies to reduce such exposures."

Credit: 
University of Surrey

Fishy chemicals in farmed salmon

image: Organic pollutants hazardous to human health may contaminate farm-raised Atlantic salmon if their feed is sourced from regions with little or no environmental regulations.

Image: 
Swanson School of Engineering/Carla Ng

PITTSBURGH (July 10, 2018) ... Persistent organic pollutants--or POPs--skulk around the environment threatening human health through direct contact, inhalation, and most commonly, eating contaminated food. As people are becoming more aware of their food's origin, new research at the University of Pittsburgh suggests it might be just as important to pay attention to the origin of your food's food.

The American Chemical Society journal Environmental Science & Technology featured research by Carla Ng, assistant professor of civil and environmental engineering at Pitt's Swanson School of Engineering, on the cover of its June 19 issue. Dr. Ng tracked the presence of a class of synthetic flame retardants called polybrominated diphenyl ethers (PBDEs), which were once a popular additive to increase fire resistance in consumer products such as electronics, textiles, and plastics (DOI: 10.1021/acs.est.8b00146).

"The United States and much of Europe banned several PBDEs in 2004 because of environmental and public health concerns," says Dr. Ng. "PBDEs can act as endocrine disruptors and cause developmental effects. Children are particularly vulnerable."

The Stockholm Convention, an international environmental treaty established to identify and eliminate organic pollutants, listed PBDEs as persistent organic pollutants in 2009. Despite restrictions on their use, PBDEs continue to be released into the environment because of their long lifetime and abundance in consumer goods. Concentrations are particularly high in areas such as China, Thailand and Vietnam, which process a lot of electronic waste and leave much of the recycling unregulated.

"The international food trade system is becoming increasingly global in nature and this applies to animal feed as well. Fish farming operations may import their feed or feed ingredients from a number of countries, including those without advanced food safety regulations," explains Dr. Ng.

Models that predict human exposure to pollutants typically focus on people in relation to their local environment. Dr. Ng's model compared a variety of factors to find the best predictor of PBDEs in farmed salmon, including pollutants taken up through the gills, how the fish metabolized and eliminated pollutants, and, of course, the concentration of pollutants in the feed.

She says, "We found that feed is relatively less important in areas that already have high concentrations of pollutants in the environment. However, in otherwise clean and well-regulated environments, contaminated feed can be thousands of times more significant than the location of the farm for determining the PBDE content of salmon fillets."

Dr. Ng says the model could be modified and applied to other fish with high global trading volumes such as tilapia or red snapper. It could also be used to predict pollutant content in livestock or feeds produced in contamination "hot spots."

"Hot spots are places identified as having high levels of pollutants," says Dr. Ng. "As these chemicals circulate through the environment, much ends up in the ocean. It's extremely important to pay attention to the sourcing of ocean commodities and areas where pollutant concentrations are particularly high."

Dr. Ng's model also helps inform contamination control strategies, such as substituting plant-based ingredients for fish oils or taking measures to decontaminate fish oil before human consumption.

Credit: 
University of Pittsburgh

UTEP, UNT study sheds light on composition of dust carried by rainwater across Texas

image: A cloud of dust rolls into El Paso, Texas on March 3rd, 2012.

Image: 
Photo by Joel Gilbert.

A collaboration between professors from The University of Texas at El Paso and the University of North Texas is leading to a better understanding of the composition of dust carried by rain across the state, and how that dust can affect the places where it ends up.

An article published July 6, 2018, in the Journal of Geophysical Research: Atmospheres, a publication of the American Geophysical Union, details for the first time how dust and the compounds it contains are dispersed throughout the state in rainwater.

The article, titled "Wet Dust Deposition across Texas during the 2012 Drought: An Overlooked Pathway for Elemental Flux to Ecosystems," was supported by grants from the National Science Foundation to Thomas E. Gill, Ph.D., UTEP professor in the Department of Geological Sciences and Environmental Science and Engineering Program, and Alexandra Ponette-González, Ph.D., associate professor in UNT's Department of Geography and the Environment.

"The main motivation of the study was to find out how much material, and the composition of that material, is falling out with the rain," Gill said. "Dust can contain chemical elements that are actually nutrients such as iron and phosphorus. But we also know that dust can carry along with it other elements that are potentially harmful pollutants, that are very detrimental to ecosystems. We wanted to take a look at it."

Gill and Ponette-González worked with the National Atmospheric Deposition Program to review rainwater samples collected throughout 2012. The year marked the tail end of a severe drought experienced throughout the state, and the dry conditions yielded a large amount of dust in the samples analyzed.

Gill and Ponette-González studied dusty rainwater samples from the arid West Texas desert and the Gulf Coast near Houston. The stark contrast in geographic locations offered a clearer picture of not only the compounds contained within dust carried by rain, but the great potential for that dust to travel. Gill pointed out that some samples collected along the Gulf Coast contained dust that originated in the Chihuahuan Desert.

Furthermore, Gill said, samples from the Guadalupe Mountains east of El Paso yielded disproportionately high amounts of calcium, phosphate and potassium that were deposited over the course of a year by only a handful of rain events. On the Gulf Coast, meanwhile, dusty rain events delivered 20 percent of the area's phosphate input, Gill said.

He added that the study marks the first time that the composition, frequency and amount of dust in rain have been quantified for Texas.

"Dusty rain events are not just something that make you need a car wash," Gill said. "It's also moving stuff around in the ecosystem that could be important. We think this study points out that we have to be more aware of the potential for pollutants, but also nutrients being delivered with dust mixed with rain."

Credit: 
University of Texas at El Paso

Seeing yourself as Einstein may change the way you think

The perception of having Albert Einstein's body may help unlock previously inaccessible mental resources, finds a new study. Following a virtual reality "Einstein" experience, participants were less likely to unconsciously stereotype older people while those with low self-esteem scored better on cognitive tests. Published in Frontiers in Psychology, the study suggests the way our brain perceives our body is surprisingly flexible. The researchers hope the technique will be useful for education.

"Virtual reality can create the illusion of a virtual body to substitute your own, which is called virtual embodiment," says Professor Mel Slater of the University of Barcelona. "In an immersive virtual environment, participants can see this new body reflected in a mirror and it exactly matches their movements, helping to create a powerful illusion that the virtual body is their own."

Previous research found that virtual embodiment can have striking effects on attitudes and behavior. For example, white people who experienced a virtual black body showed less unconscious stereotyping (called implicit bias) of black people.

"We wondered whether virtual embodiment could affect cognition," says Slater. "If we gave someone a recognizable body that represents supreme intelligence, such as that of Albert Einstein, would they perform better on a cognitive task than people given a normal body?"

To find out, the researchers recruited 30 young men to participate in a virtual embodiment experiment. Prior to the embodiment, the participants completed three tests: a cognitive task to reveal their planning and problem-solving skills; a task to quantify their self-esteem; and one to identify any implicit bias towards older people. This final task was included to investigate whether experiencing an older-looking virtual body could change attitudes towards older people.

The study participants then donned a body-tracking suit and a virtual reality headset. Half experienced a virtual Einstein body and the other half a normal adult body. After completing some exercises in the virtual environment with their new body, they repeated the implicit bias and cognitive tests.

The researchers found that people with low self-esteem performed the cognitive task better following the virtual Einstein experience, compared with those who experienced a normal body of someone their own age. Those exposed to the Einstein body also had a reduced implicit bias against older people.

Bias is based on considering someone to be different from yourself. Being in an older body may have subtly changed the participants' attitudes by blurring the distinction between elderly people and themselves.

Similarly, being in the body of someone extremely intelligent may have caused the participants to think about themselves differently, allowing them to unlock mental resources that they don't normally access.

Crucially, these cognitive enhancements only occurred in people with low self-esteem. The researchers hypothesize that those with low self-esteem had the most to gain by changing how they thought about themselves. Seeing themselves in the body of a respected and intelligent scientist may have enhanced their confidence during the cognitive test.

To further investigate the phenomenon, a larger study with more participants -- and including men and women -- is needed. However, the results so far suggest that the technique could be useful in education.

"It is possible that this technique might help people with low self-esteem to perform better in cognitive tasks and it could be useful in education," says Slater.

Credit: 
Frontiers

New targets found to reduce blood vessel damage in diabetes

image: This is Dr. Masuko Ushio-Fukai.

Image: 
Phil Jones, Senior Photographer, Augusta University

AUGUSTA, Ga. (July 9, 2018) - In diabetes, both the tightly woven endothelial cells that line our blood vessels and the powerhouses that drive those cells start to come apart as early steps in the destruction of our vasculature.

Now scientists have evidence that these breakups occur as another relationship falls apart.

Levels of the enzyme PDIA1, which enables a healthy homeostasis of endothelial cells as well as production of new blood vessels, decrease in diabetes, while activity of Drp1, a key driver of mitochondrial fission that is regulated by PDIA1, goes way up, Medical College of Georgia scientists report in the journal Cell Reports.

The imbalance drives endothelial cells and their powerhouses apart, setting up a vicious cycle where too much reactive oxygen species, or ROS, gets made by the mitochondria, says Dr. Masuko Ushio-Fukai, vascular biologist in the Vascular Biology Center and Department of Medicine at MCG at Augusta University.

Powerhouses further fragment, more Drp1 gets oxidized and activated and even more ROS gets produced, says the study's corresponding author.

"Fission induces fragmentation which induces more ROS which contributes to Drp1 oxidation," says Ushio-Fukai of the mounting feedback loop.

The biological glue that helps hold endothelial cells together begins to come apart and so do the previously tightly connected cells.

"It's very leaky and promotes inflammatory cells, like macrophages, to the endothelial cells which causes even more disruption," Ushio-Fukai says.

The discoveries provide new treatment targets for diseases associated with endothelial cell senescence, or aging, such as diabetes, cardiovascular disease and age-related disorders, the scientists report.

Potential points of intervention include restoring a healthy balance of PDIA1 and Drp1 and/or reducing the high oxidative stress that throws off the balance in diabetes and other disease.

"It's clear that endothelial function is impaired in conditions like diabetes as well as aging," Ushio-Fukai says. "If we can help restore the function of endothelial cells, we can help keep blood vessels more normal."

We know that some ROS is needed for a variety of body functions, but that high levels are associated with aging throughout the body. Inside our endothelial cells, the mitochondria, known for producing the cell fuel ATP, actually primarily produce ROS - mainly superoxide and hydrogen peroxide - as fuel, and ROS in turn helps fuel the mitochondria.

Much like a lower-performing car that can run on cheaper fuel than a high-performance model, ROS is sufficient to keep the normally quiescent cells that line our blood vessels functioning, whereas our heart muscle cells, for example, need a lot of the high-test ATP, Ushio-Fukai says.

In fact, normal levels of ROS actually activate PDIA1 and are a signaling molecule for angiogenesis, the formation of new blood vessels.

But the MCG scientists have shown that the high ROS levels in diabetes instead decrease activity of PDIA1, which impairs angiogenesis. In this high oxidative-stress environment, with its regulator turned down, oxidation and activity of Drp1 go up, Ushio-Fukai says.

The imbalance sets in motion other unhealthy events that include the mitochondria literally coming apart - rather than undergoing the normal fission and fusion - which results in even more ROS production and that vicious cycle.

When the scientists knocked out PDIA1 in endothelial cells isolated from human blood vessels, they found more evidence that the protein is required to maintain endothelial cell function. The endothelial cells started looking and acting older. There was less cell growth and proliferation as well as impaired angiogenesis and ability to dilate.

When they looked at whether PDIA1 regulates ROS levels in endothelial cells, they found that the loss of PDIA1 induces both a slight increase in ROS inside the endothelial cells and mitochondrial dysfunction, including a significant increase in the amount of ROS produced by mitochondria. Mitochondria, which are typically in a constant state of fission and fusion, only fragment in endothelial cells lacking PDIA1.

They showed that PDIA1 appears to have a direct role in regulating the fission action of Drp1 and were able to rescue the cells from excessive mitochondrial fragmentation by delivering more PDIA1 directly to the cells and to their mitochondria. Looking again at the relationship between PDIA1 and Drp1, they saw a significant increase in Drp1 when they silenced PDIA1 in endothelial cells. A Drp1 inhibitor, in turn, prevented the expected mitochondrial fragmentation, the related endothelial cell senescence and the impaired ability to form capillaries.

Wound healing is a big problem in diabetes, to some extent at least because of impaired angiogenesis, so they also looked in a mouse model of wound healing with type 2 diabetes. PDIA1 expression was markedly downregulated in the skin compared with healthy mice. When they transferred normal PDIA1 to the vascular endothelial cells in the diabetic mice, it rescued normal protein levels and wound healing. Wound healing was also impaired in mice missing PDIA1 and, once again, restoring the normal protein normalized wound healing.

"We showed that impaired wound healing in diabetic mice can be restored by treatment of endothelial cell senescence," Ushio-Fukai says.

Next steps include developing a clinical grade Drp1 inhibitor. The MCG scientists also are looking at delivery systems for PDIA1, including use of biological packages called exosomes, which cells use to communicate and swap contents.

Healthy endothelial cells also produce nitric oxide, a key vasodilator of blood vessels.

Credit: 
Medical College of Georgia at Augusta University

New insight into Huntington's disease may open door to drug development

image: Professor Ray Truant (left) and Ph.D. student Laura Bowie of McMaster University have developed a new hypothesis on Huntington's disease, published in PNAS, which shows promise to open new avenues for drug development for the condition.

Image: 
Photo courtesy of McMaster University

Hamilton, ON (July 9, 2018) - McMaster University researchers have developed a new theory on Huntington's disease which is being welcomed for showing promise to open new avenues of drug development for the condition.

Huntington's disease is caused by a mutation in the gene that makes the protein called huntingtin. A team of researchers led by McMaster has found that a unique type of signalling coming from damaged DNA triggers huntingtin activity in DNA repair, and that this signalling is defective in Huntington's disease.

A study developing the new hypothesis was published today in the Proceedings of the National Academy of Sciences (PNAS).

"The concept was that if we applied the signalling molecule back in excess, even orally, this signalling can be restored in the Huntington's disease mouse brain," said Laura Bowie, a PhD student in the Department of Biochemistry and Biomedical Sciences at McMaster. "The net result was that we fixed the modification of huntingtin not seen in mutant huntingtin in Huntington's disease."

Using this hypothesis, the study team discovered a molecule called N6-furfuryladenine, derived from the repair of DNA damage, which corrected the defect seen in mutant huntingtin.

"Based on dosing by different ways of this molecule in mouse Huntington's disease models, Huntington's disease symptoms were reversed," said Bowie. "The mutant huntingtin protein levels were also restored to normal, which was a surprise to us."

Ray Truant, senior author on the study, has dedicated his career to Huntington's disease research and to understanding how the mutation leads to the disease. His lab was the first to show that normal huntingtin was involved in DNA repair.

Truant argues that the traditional and controversial amyloid/protein misfolding hypothesis, where a group of proteins stick together forming brain deposits, is likely the result of the disease, rather than its cause.

He said he considers this paper the most significant of his career.

"This is an important new lead and a new hypothesis, but it is important for people to know this is not a drug or cure," said Truant, professor in the Department of Biochemistry and Biomedical Sciences at McMaster.

"This is the first new hypothesis for Huntington's disease in 25 years that does not rely on the version of the amyloid hypothesis which has consistently failed in drug development for other diseases."

Huntington's disease is a hereditary, neurodegenerative illness with devastating physical, cognitive and emotional symptoms. Worldwide, approximately one in every 7,000 people develops Huntington's disease. Currently there is no treatment available to alter the course of the disease.

The study is an original and important contribution to the field of neurodegeneration, says Yves Joanette, scientific director of the Canadian Institutes of Health Research Institute of Aging.

"This research shows how complex and diverse the routes to neurodegenerative processes in the brain can be," said Joanette. "This study will inspire not only research on Huntington's disease, but also in some of the contributing processes to the development of many other neurodegenerative diseases."

Bev Heim-Myers, CEO of the Huntington Society of Canada, said: "The Huntington Society of Canada is proud to support such leading edge research."

"Innovative research initiatives, such as the work led by the team in Dr. Truant's lab, including PhD student Laurie Bowie, has the potential to transform HD research. The answers we find for Huntington's disease will likely lead to better understanding of treatments for other neurological diseases and it is important that we continue this cross-talk amongst neurodegenerative diseases."

Credit: 
McMaster University

Oxygen levels on early Earth rose, fell several times before the Great Oxidation Event

image: The Jeerinah Formation in Western Australia, where a UW-led team found a sudden shift in nitrogen isotopes. "Nitrogen isotopes tell a story about oxygenation of the surface ocean, and this oxygenation spans hundreds of kilometers across a marine basin and lasts for somewhere less than 50 million years," said lead author Matt Koehler.

Image: 
Roger Buick / University of Washington

Earth's oxygen levels rose and fell more than once hundreds of millions of years before the planetwide success of the Great Oxidation Event about 2.4 billion years ago, new research from the University of Washington shows.

The evidence comes from a new study that indicates a second and much earlier "whiff" of oxygen in Earth's distant past -- in the atmosphere and on the surface of a large stretch of ocean -- showing that the oxygenation of the Earth was a complex process of repeated trying and failing over a vast stretch of time.

The finding also may have implications in the search for life beyond Earth. Coming years will bring powerful new ground- and space-based telescopes able to analyze the atmospheres of distant planets. This work could help keep astronomers from unduly ruling out "false negatives," or inhabited planets that may not at first appear to be so due to undetectable oxygen levels.

"The production and destruction of oxygen in the ocean and atmosphere over time was a war with no evidence of a clear winner, until the Great Oxidation Event," said Matt Koehler, a UW doctoral student in Earth and space sciences and lead author of a new paper published the week of July 9 in the Proceedings of the National Academy of Sciences.

"These transient oxygenation events were battles in the war, when the balance tipped more in favor of oxygenation."

In 2007, co-author Roger Buick, UW professor of Earth and space sciences, was part of an international team of scientists that found evidence of an episode -- a "whiff" -- of oxygen some 50 million to 100 million years before the Great Oxidation Event. This they learned by drilling deep into sedimentary rock of the Mount McRae Shale in Western Australia and analyzing the samples for the trace metals molybdenum and rhenium, accumulation of which is dependent on oxygen in the environment.

Now, a team led by Koehler has confirmed a second such appearance of oxygen in Earth's past, this time roughly 150 million years earlier -- or about 2.66 billion years ago -- and lasting for less than 50 million years. For this work they used two different proxies for oxygen -- nitrogen isotopes and the element selenium -- substances that, each in its way, also tell of the presence of oxygen.

"What we have in this paper is another detection, at high resolution, of a transient whiff of oxygen," said Koehler. "Nitrogen isotopes tell a story about oxygenation of the surface ocean, and this oxygenation spans hundreds of kilometers across a marine basin and lasts for somewhere less than 50 million years."

The team analyzed drill samples taken by Buick in 2012 at another site in the northwestern part of Western Australia called the Jeerinah Formation.

The researchers drilled two cores about 300 kilometers apart but through the same sedimentary rocks -- one core samples sediments deposited in shallower waters, and the other samples sediments from deeper waters. Analyzing successive layers in the rocks shows, Buick said, a "stepwise" change in nitrogen isotopes "and then back again to zero. This can only be interpreted as meaning that there is oxygen in the environment. It's really cool -- and it's sudden."

The nitrogen isotopes reveal the activity of certain marine microorganisms that use oxygen to form nitrate, and other microorganisms that use this nitrate for energy. The data collected from nitrogen isotopes sample the surface of the ocean, while selenium suggests oxygen in the air of ancient Earth. Koehler said the deep ocean was likely anoxic, or without oxygen, at the time.

The team found plentiful selenium in the shallow hole only, meaning that it came from the nearby land, not making it to deeper water. Selenium is held in sulfur minerals on land; higher atmospheric oxygen would cause more selenium to be leached from the land through oxidative weathering -- "the rusting of rocks," Buick said -- and transported to sea.

"That selenium then accumulates in ocean sediments," Koehler said. "So when we measure a spike in selenium abundances in ocean sediments, it could mean there was a temporary increase in atmospheric oxygen."

The finding, Buick and Koehler said, also has relevance for detecting life on exoplanets, or those beyond the solar system.

"One of the strongest atmospheric biosignatures is thought to be oxygen, but this study confirms that during a planet's transition to becoming permanently oxygenated, its surface environments may be oxic for intervals of only a few million years and then slip back into anoxia," Buick said.

"So, if you fail to detect oxygen in a planet's atmosphere, that doesn't mean that the planet is uninhabited or even that it lacks photosynthetic life. Merely that it hasn't built up enough sources of oxygen to overwhelm the 'sinks' for any longer than a short interval.

"In other words, lack of oxygen can easily be a 'false negative' for life."

Koehler added: "You could be looking at a planet and not see any oxygen -- but it could be teeming with microbial life."

Credit: 
University of Washington

Why gold-palladium alloys are better than palladium for hydrogen storage

image: The Au atoms destabilize chemisorbed hydrogen, thus increasing their energy and reducing the barrier.

Image: 
2018 Shohei Ogura, Institute of Industrial Science, The University of Tokyo

Tokyo - Materials that absorb hydrogen are used for hydrogen storage and purification, thus serving as clean energy carriers. The best-known hydrogen absorber, palladium (Pd), can be improved by alloying with gold (Au).

New research led by The University of Tokyo Institute of Industrial Science explains for the first time how Au makes such a difference, which will be valuable for fine-tuning further improvements.

The first step in hydrogen storage is chemisorption, wherein gaseous H2 collides with Pd and adsorbs (sticks) to the surface. Secondly, the chemisorbed H atoms diffuse into the sub-surface, several nanometers deep. A recent article published in Proceedings of the National Academy of Sciences (PNAS) reports that the group focused on this slow second step, which is the bottleneck to the overall process.

In pure Pd, only around 1 in 1,000 of the H2 molecules that collide with the metal actually absorb into the interior. Hence, only these can be stored as energy carriers. However, when the Pd surface is alloyed with Au, absorption is over 40 times faster.

It is vital to get the amount of Au just right - hydrogen absorption is maximized when the number of Au atoms is slightly less than half (0.4) of a single monolayer of Pd, according to the study. This was discovered by thermal desorption spectroscopy, and by depth-measurement of the H atoms using gamma-ray emissions.

"We wanted to know what role Au plays," study first author Kazuhiro Namba says. "The Au atoms are mostly at the alloy surface. However, our results showed that hydrogen storage is improved even below this depth, in pure Pd. Therefore, Au must be accelerating the diffusion of hydrogen into the sub-surface, rather than improving its solubility."

This diffusion acts like a typical chemical reaction - its rate is determined by the energy barrier, i.e. the hurdle that the H atoms must overcome to penetrate Pd. The barrier height is the gap between the energies of the chemisorbed H atoms and the transition state they must pass through to reach the first sub-surface site.

According to density functional theory (DFT) calculations, the Au atoms destabilize chemisorbed hydrogen, thus increasing their energy and reducing the barrier. By making the surface a less stable environment for H atoms, this encourages them to penetrate more quickly into deeper sites, instead of lingering at the surface. Photoemission spectroscopy suggests that Au atoms push the energy of the Pd electrons downward, weakening their ability to chemisorb hydrogen.
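
If the sub-surface diffusion is treated as following a simple Arrhenius-type rate law near room temperature (an illustrative assumption, not an analysis from the paper), the roughly 40-fold speed-up corresponds to only a modest barrier reduction:

```python
# Rough Arrhenius-style estimate, assuming rate ~ exp(-E/kT) at an assumed 300 K.
# This illustrates the barrier-vs-rate relationship; it is not a result reported
# in the paper.
import math

K_B = 8.617e-5   # Boltzmann constant, eV/K
T = 300.0        # assumed temperature, K
speedup = 40.0   # reported absorption speed-up for the Au-alloyed surface

barrier_reduction = K_B * T * math.log(speedup)
print(f"Equivalent barrier lowering: ~{barrier_reduction:.2f} eV")  # ~0.1 eV
```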

However, the weakly chemisorbed H atoms are also more likely to simply desorb from the surface; i.e., return to the gas phase. This unwanted process explains why hydrogen storage is maximized with just 0.4 monolayers of Au - if any more Au is added, the desorption of hydrogen outpaces its diffusion into Pd.

"Our study reveals, at the electronic level, how Au alloying controls hydrogen absorption," co-author Shohei Ogura says. "This will help us to design better hydrogen storage materials, which will play a role in carbon-neutral energy transport, as well as solid catalysts for chemical reactions, which often depend on surface-bound hydrogen."

Credit: 
Institute of Industrial Science, The University of Tokyo

Following pitch count guidelines may help young baseball players prevent injuries

SAN DIEGO, CA - Young pitchers who exceed pitch count limits are more prone to elbow injuries, according to research presented today at the American Orthopaedic Society for Sports Medicine's Annual Meeting in San Diego. Season statistics of players were compared relative to pitch count limits established by the Japanese Society of Clinical Sports Medicine (JSCSM).

"Our research focused on 149 young pitchers ranging in age from 7 to 11 who had no prior elbow pain," commented lead author Toshiyuki Iwame, MD, from Tokushima University in Tokushima, Japan. "We found those who reported elbow pain after the season were associated with pitching numbers beyond current throwing guidelines."

Researchers asked the players to complete a questionnaire after the season, which showed 66 (44.3%) experienced pain. Multivariate analysis showed that throwing more than 50 pitches per day (OR, 2.44; 95% CI, 1.22-4.94) or 200 pitches per week (OR, 2.04; 95% CI, 1.03-4.10), and playing more than 70 games per year (OR, 2.47; 95% CI, 1.24-5.02), all baselines established by the JSCSM, were risk factors for pain.
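
For readers unfamiliar with the statistic, an odds ratio of this kind comes from comparing the odds of pain between two exposure groups; the counts below are hypothetical and are not the study's data:

```python
# How an odds ratio is computed from a 2x2 breakdown. These counts are invented
# for illustration and are not the data behind the study's reported ORs.
over_limit_pain, over_limit_no_pain = 40, 30     # pitchers exceeding a limit
under_limit_pain, under_limit_no_pain = 26, 53   # pitchers within the limit

odds_over = over_limit_pain / over_limit_no_pain
odds_under = under_limit_pain / under_limit_no_pain
print(f"Odds ratio: {odds_over / odds_under:.2f}")  # >1 means greater odds of pain
```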

"As the demand on young pitchers to play more increases, there is less time for repair of bony and soft tissues in the elbow," commented Iwame. "We hope research like this continues to direct young athletes, parents and coaches to follow pitch limits to prevent injuries."

The study authors noted that player recall bias, the level of pain detail reported on the questionnaire, and limited geographic representation were limitations of the research.

Credit: 
American Orthopaedic Society for Sports Medicine

Mystery of sub-nanosecond phase change: the octahedral structure motif

Phase change random access memory (PCRAM) has been successfully applied in computer storage architectures as storage class memory, bridging the performance gap between DRAM and Flash-based solid-state drives, thanks to its good scalability, 3D-integration ability, fast operation speed and compatibility with CMOS technology. Having focused on phase change materials and PCRAM for decades, the researchers behind the new work have successfully developed 128 Mb embedded PCRAM chips, which can meet the requirements of most embedded systems.

3D Xpoint (3D PCRAM), invented by Intel and Micron, has been regarded as the first major breakthrough in memory technology in the 25 years since NAND flash entered use in 1989, and it represents the state of the art. The technology has some remarkable features, such as a confined device structure at the 20 nm scale, metal crossbar electrodes that reduce resistance variations in PCRAM arrays, and an ovonic threshold switching selector that provides a high drive current and a low leakage current. A good understanding of the phase change mechanism is of great help in designing new phase change materials with fast operation speed, low power consumption and long lifetime.

In a recent paper published in SCIENCE CHINA Information Sciences, the researchers first review the development of PCRAM and the different understandings of phase change mechanisms proposed in recent years, and then put forward a new view of the mechanism based on octahedral structure motifs and vacancies.

Octahedral structure motifs are generally found in both amorphous and crystalline phase change materials. They are considered to be the basic units during the phase transition, and they are severely defective in the amorphous phase. These configurations turn into more ordered ones after minor local rearrangements, and their growth results in the crystallization of the rocksalt (RS) phase with a large number of vacancies on the cation sites. Driven further by the thermodynamic driving force, these vacancies migrate and order into layers along certain directions; consequently, the metastable RS structure transforms into the stable hexagonal (HEX) structure. Based on these results, the researchers find that a reversible phase transition between the amorphous phase and the RS phase, without further transformation into the HEX phase, would greatly decrease the required power consumption. Robust octahedra and plentiful vacancies in both the amorphous and RS phases, which respectively avoid large atomic rearrangements and provide the necessary space, are crucial to achieving nanosecond or even sub-nanosecond operation of PCRAM.

Credit: 
Science China Press