
Two NASA satellites confirm Tropical Cyclone Ampil's heaviest rainfall shift

video: This 3-D animation showing cloud top heights within Tropical Storm Ampil on July 20 was constructed with GPM's radar data (DPR Ku Band). DPR's Ku Band instrument provided three-dimensional measurements of precipitation within a 152-mile (245 km) wide swath east of Ampil's center. Cloud top heights over a larger area were made possible by blending measurements from GPM's radar (DPR Ku Band) with cloud top heights based on infrared temperatures from Japan's Himawari-8 satellite.

Image: 
Credits: NASA/JAXA, Hal Pierce

Two NASA satellites observed Tropical Storm Ampil six and a half hours apart and found that the band of thunderstorms containing the storm's heaviest rainfall had shifted from north of the center to south of it. NASA's GPM satellite passed over the storm first, and NASA's Aqua satellite made the second pass.

Tropical Storm Ampil was moving toward the northwest with winds of about 50 knots (57.5 mph) when the Global Precipitation Measurement mission or GPM core observatory satellite flew above on July 20, 2018 at 2:56 a.m. EDT (0656 UTC).

Data received by the GPM core satellite's Microwave Imager (GMI) and Dual-Frequency Precipitation Radar (DPR) instruments were used in an analysis of Ampil's precipitation. GMI and DPR showed that the northern side of the tropical storm was nearly dry and that rain bands in that area were producing only light to moderate rainfall. The most intense downpour was occurring in a band of thunderstorms well to the northeast of Ampil's center, where GPM's radar (DPR Ku Band) measured rain falling at a rate of more than 139 mm (5.5 inches) per hour.

GPM found moderate to heavy precipitation in a rain band wrapping around the southern side of the tropical cyclone's center of circulation. GPM is a joint mission between NASA and the Japan Aerospace Exploration Agency, JAXA.

At NASA's Goddard Space Flight Center in Greenbelt, Maryland, a 3-D image of Ampil's precipitation was made possible by using data collected by GPM's radar (DPR Ku Band). A few of the most intense storms north of Ampil's center of circulation were found by DPR to reach heights above 14 km (8.7 miles). A 3-D animation showing cloud top heights within Tropical Storm Ampil was constructed with GPM's radar data (DPR Ku Band). DPR's Ku Band instrument provided three-dimensional measurements of precipitation within a 152-mile (245 km) wide swath east of Ampil's center. Cloud top heights over a larger area were made possible by blending measurements from GPM's radar (DPR Ku Band) with cloud top heights based on infrared temperatures from Japan's Himawari-8 satellite.

Six and a half hours later, NASA's Aqua satellite passed over Ampil on July 20 at 9:30 a.m. EDT (1330 UTC). The MODIS or Moderate Resolution Imaging Spectroradiometer instrument aboard NASA's Aqua satellite looked at the storm in infrared light. In one small area southeast of the center, Aqua found cloud top temperatures as cold or colder than minus 70 degrees Fahrenheit/minus 56.6 degrees Celsius. Cloud tops with temperatures that cold have the potential to generate very heavy rainfall. The band of thunderstorms containing the heaviest rainfall had shifted from the northern quadrant to the southern quadrant.

At 11 a.m. EDT (1500 UTC), Tropical Storm Ampil was located near 24.1 degrees north latitude and 130.1 degrees east longitude, just 191 nautical miles southeast of Kadena Air Base on Okinawa Island, Japan. Ampil's maximum sustained winds were near 45 knots (52 mph/83 kph). It was moving to the north-northwest at 10 knots (11.5 mph/18.5 kph).

The Joint Typhoon Warning Center (JTWC) predicts that the tropical storm will intensify over the next few days as Ampil moves over the East China Sea toward China. Peak winds are predicted to reach 55 knots (62 mph/102 kph) before making final landfall in eastern China after a day or two.

Credit: 
NASA/Goddard Space Flight Center

Scientists reverse aging-associated skin wrinkles and hair loss in a mouse model

image: The mouse in the center photo shows aging-associated skin wrinkles and hair loss after two months of mitochondrial DNA depletion. That same mouse, right, shows reversal of wrinkles and hair loss one month later, after mitochondrial DNA replication was resumed. The mouse on the left is a normal control, for comparison.

Image: 
UAB

BIRMINGHAM, Ala. - Wrinkled skin and hair loss are hallmarks of aging. What if they could be reversed?

Keshav Singh, Ph.D., and colleagues have done just that, in a mouse model developed at the University of Alabama at Birmingham. When a mutation leading to mitochondrial dysfunction is induced, the mouse develops wrinkled skin and extensive, visible hair loss in a matter of weeks. When the mitochondrial function is restored by turning off the gene responsible for mitochondrial dysfunction, the mouse returns to smooth skin and thick fur, indistinguishable from a healthy mouse of the same age.

"To our knowledge, this observation is unprecedented," said Singh, a professor of genetics in the UAB School of Medicine.

Importantly, the mutation that does this is in a nuclear gene affecting the function of mitochondria, the tiny organelles known as the powerhouses of the cell. The numerous mitochondria in cells produce 90 percent of the chemical energy cells need to survive.

In humans, a decline in mitochondrial function is seen during aging, and mitochondrial dysfunction can drive age-related diseases. A depletion of the DNA in mitochondria is also implicated in human mitochondrial diseases, cardiovascular disease, diabetes, age-associated neurological disorders and cancer.

"This mouse model," Singh said, "should provide an unprecedented opportunity for the development of preventive and therapeutic drug development strategies to augment the mitochondrial functions for the treatment of aging-associated skin and hair pathology and other human diseases in which mitochondrial dysfunction plays a significant role."

The mutation in the mouse model is induced when the antibiotic doxycycline is added to the food or drinking water. This causes depletion of mitochondrial DNA because the enzyme to replicate the DNA becomes inactive.

In four weeks, the mice showed gray hair, reduced hair density, hair loss, slowed movements and lethargy, changes that are reminiscent of natural aging. Wrinkled skin was seen four to eight weeks after induction of the mutation, and females had more severe skin wrinkles than males.

Dramatically, this hair loss and wrinkled skin could be reversed by turning off the mutation. The photos below show the hair loss and wrinkled skin after two months of doxycycline induction, and the same mouse a month later after doxycycline was stopped, allowing restoration of the depleted mitochondrial DNA.

Little change was seen in other organs when the mutation was induced, suggesting an important role for mitochondria in skin compared to other tissues.

The wrinkled skin showed changes similar to those seen in both intrinsic and extrinsic aging -- intrinsic aging is the natural process of aging, and extrinsic aging is the effect of external factors that influence aging, such as skin wrinkles that develop from excess sun or long-term smoking.

Among the details, the skin of induced-mutation mice showed increased numbers of skin cells, abnormal thickening of the outer layer, dysfunctional hair follicles and increased inflammation that appeared to contribute to skin pathology. These are similar to extrinsic aging of the skin in humans. The mice with depleted mitochondrial DNA also showed changed expression of four aging-associated markers in cells, similar to intrinsic aging.

The skin also showed disruption in the balance between matrix metalloproteinase enzymes and their tissue-specific inhibitor -- a balance of these two is necessary to maintain the collagen fibers in the skin that prevent wrinkling.

The mitochondria of induced-mutation mice had reduced mitochondrial DNA content, altered mitochondrial gene expression, and instability of the large complexes in mitochondria that are involved in oxidative phosphorylation.

Reversal of the mutation restored mitochondrial function, as well as the skin and hair pathology. This showed that mitochondria are reversible regulators of skin aging and loss of hair, an observation that Singh calls "surprising."

"It suggests that epigenetic mechanisms underlying mitochondria-to-nucleus cross-talk must play an important role in the restoration of normal skin and hair phenotype," Singh said, who has a secondary UAB appointment as professor of pathology. "Further experiments are required to determine whether phenotypic changes in other organs can also be reversed to wildtype level by restoration of mitrochondrial DNA."

Credit: 
University of Alabama at Birmingham

Largest multi-lesion medical imaging dataset is now publicly available

image: The ground-truth and two enlarged lymph nodes are correctly detected, even though the lymph nodes are not annotated in the dataset.

Image: 
SPIE

BELLINGHAM, Washington, USA and CARDIFF, UK - A paper published today in the Journal of Medical Imaging, "DeepLesion: Automated mining of large-scale lesion annotations and universal lesion detection with deep learning," announced the open availability of the largest CT lesion-image database accessible to the public. Such data are the foundation for the training sets of machine-learning algorithms; until now, large-scale annotated radiological image datasets, essential for the development of deep learning approaches, have not been publicly available.

DeepLesion was developed by a team from the National Institutes of Health Clinical Center by mining historical medical data from the center's own Picture Archiving and Communication System. This new dataset has tremendous potential to jump-start the field of computer-aided detection (CADe) and diagnosis (CADx).

The database includes multiple lesion types, including kidney lesions, bone lesions, lung nodules, and enlarged lymph nodes. The lack of a multi-category lesion dataset to date has been a major roadblock to development of more universal CADe frameworks capable of detecting multiple lesion types. A multi-category lesion dataset could even enable development of CADx systems that automate radiological diagnosis.

The database is built using the annotations - "bookmarks" - of clinically meaningful findings in medical images from the image archive. After analyzing the characteristics of these bookmarks - which take different forms, including arrows, lines, ellipses, segmentation, and text - the team harvested and sorted those bookmarks to create the DeepLesion database.

Whereas the field of computer vision has access to the robust ImageNet dataset, which contains millions of images, the medical imaging field has not had access to the same quantity of data. Most publicly available medical image datasets contain just tens or hundreds of cases. With over 32,000 annotated lesions from over 10,000 case studies, the DeepLesion dataset is now the largest publicly available medical image dataset.

"We hope the dataset will benefit the medical imaging area just as ImageNet benefited the computer vision area," says Ke Yan, the lead author on the paper and a postdoctoral fellow in the laboratory of senior author Ronald Summers, MD, PhD.

In addition to building the database, the team also developed a universal lesion detector based on the database. The researchers note that lesion detection is a time-consuming task for radiologists, but a key part of diagnosis. This detector may be able to serve as an initial screening tool for radiologists or other specialist CADe systems in the future.

In addition to lesion detection, the DeepLesion database may also be used to classify lesions, retrieve lesions based on query strings, or predict lesion growth in new cases based on existing patterns in the database. The database can be downloaded at https://nihcc.box.com/v/DeepLesion.
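
As a rough illustration of how researchers might begin exploring the dataset once downloaded, the sketch below loads a hypothetical annotation table with pandas. The file name, column names, and bounding-box format are assumptions made for this example, not the dataset's documented schema; consult the download's own documentation.

```python
# Illustrative sketch only: file name, column names, and box format below
# are assumptions, not the dataset's documented schema.
import pandas as pd

# Hypothetical annotation table with one row per bookmarked lesion.
annotations = pd.read_csv("DL_info.csv")

# Count lesions per patient to get a feel for the dataset's size and spread.
lesions_per_patient = annotations.groupby("Patient_index").size()
print(f"{len(annotations)} annotated lesions across "
      f"{lesions_per_patient.shape[0]} patients")

# A bounding box stored as an "x1, y1, x2, y2" string could be parsed like this.
def parse_box(box_string):
    x1, y1, x2, y2 = (float(v) for v in box_string.split(","))
    return x1, y1, x2, y2

print("First lesion bounding box:", parse_box(annotations.loc[0, "Bounding_boxes"]))
```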

Future work will include extending the database to other image modalities, like MR, including data from multiple hospitals, and improving the detection accuracy of the detector algorithm.

Credit: 
SPIE--International Society for Optics and Photonics

Supplemental oxygen eliminates morning blood pressure rise in sleep apnea patients

image: Supplemental oxygen eliminates blood pressure rise after CPAP withdrawal.

Image: 
ATS

July 20, 2018--Supplemental oxygen eliminates the rise in morning blood pressure experienced by obstructive sleep apnea (OSA) patients who stop using continuous positive airway pressure (CPAP), the standard treatment for OSA, according to new research published online in the American Thoracic Society's American Journal of Respiratory and Critical Care Medicine.

In "Effect of Supplemental Oxygen on Blood Pressure in OSA: A Randomized, CPAP Withdrawal Trial," Chris D. Turnbull, BMBCh, a physician at the Oxford Centre for Respiratory Medicine at Churchill Hospital Oxford in the U.K., and co-authors report that in patients with moderate to severe OSA, supplemental oxygen prevented the rise in systolic and diastolic blood pressure, and the increase in oxygen desaturations that were seen in the control arm of the study after CPAP was withdrawn.

Twenty-five adults living in the United Kingdom participated in the study. All had been using CPAP successfully for over a year. CPAP was withdrawn for 14 nights, during which time participants first received supplemental oxygen or regular air overnight through a face mask or nasal cannula, and then crossed over to a second CPAP withdrawal period with the opposite treatment. Neither the researchers nor the participants knew when the participant was receiving the intervention (oxygen) or control (air) therapy.

Many studies have demonstrated an association between OSA, hypertension and cardiovascular disease. Some of these studies have linked the acute rises in blood pressure that OSA patients experience while sleeping to the constant need to wake up when their breathing stops or is partially blocked.

The authors of the current study wanted to find out if these recurrent arousals were also responsible for higher blood pressure in OSA patients during the day or whether intermittent hypoxia (low oxygen levels), resulting from interrupted breathing during sleep, caused a rise in blood pressure during the day.

The study found that supplemental oxygen substantially reduced intermittent hypoxia, but had minimal effect on two markers of arousal: the apnea-hypopnea index, a measure of sleep apnea severity that takes into account episodes of paused and shallow breathing, and the heart-rate-rise index. Based on these findings, the authors wrote that "intermittent hypoxia appears to be the dominant cause of daytime increase in blood pressure in OSA."

Dr. Turnbull said, "This is important because many patients, especially those with few symptoms, are unable to tolerate using CPAP treatment and other treatments may be needed for these individuals," given that elevated levels of blood pressure put them at greater risk for heart attack and stroke.

However, before supplemental oxygen can be used as an alternative to CPAP, the authors write that more research must be done to prove it is safe. Other studies, they note, have shown that supplemental oxygen could increase injury to the heart when administered after a heart attack, and that in some patients, supplemental oxygen causes hypercapnia (excessive carbon dioxide in the bloodstream).

"The next challenge for researchers will be to see if supplemental oxygen treatment has similar effects in patients in the longer-term along with assessing its longer-term safety," Dr. Turnbull said.

The study also looked at objective and subjective measures of daytime sleepiness but did not find a difference between the two groups.

Credit: 
American Thoracic Society

Traveling to the sun: Why won't Parker Solar Probe melt?

video: NASA's Parker Solar Probe is heading to the Sun. Why won't the spacecraft melt? Thermal Protection System Engineer Betsy Congdon (Johns Hopkins APL) outlines why Parker can take the heat. Download this video in HD formats: https://svs.gsfc.nasa.gov/12867

Image: 
NASA's Goddard Space Flight Center

This summer, NASA's Parker Solar Probe will launch to travel closer to the Sun, deeper into the solar atmosphere, than any mission before it. If Earth were at one end of a yardstick and the Sun at the other, Parker Solar Probe will make it to within four inches of the solar surface.

Inside that part of the solar atmosphere, a region known as the corona, Parker Solar Probe will provide unprecedented observations of what drives the wide range of particles, energy and heat that course through the region -- flinging particles outward into the solar system and far past Neptune.

Inside the corona, it's also, of course, unimaginably hot. The spacecraft will travel through material with temperatures greater than a million degrees Fahrenheit while being bombarded with intense sunlight.

So, why won't it melt?

Parker Solar Probe has been designed to withstand the extreme conditions and temperature fluctuations of the mission. The key lies in its custom heat shield and an autonomous system that protects the mission from the Sun's intense light emission but still allows the coronal material to "touch" the spacecraft.

The Science Behind Why It Won't Melt

One key to understanding what keeps the spacecraft and its instruments safe is the distinction between heat and temperature. Counterintuitively, high temperatures do not always translate into actually heating another object.

In space, the temperature can be thousands of degrees without providing significant heat to a given object or feeling hot. Why? Temperature measures how fast particles are moving, whereas heat measures the total amount of energy that they transfer. Particles may be moving fast (high temperature), but if there are very few of them, they won't transfer much energy (low heat). Since space is mostly empty, there are very few particles that can transfer energy to the spacecraft.

The corona through which Parker Solar Probe flies, for example, has an extremely high temperature but very low density. Think of the difference between putting your hand in a hot oven versus putting it in a pot of boiling water (don't try this at home!) -- in the oven, your hand can withstand significantly hotter temperatures for longer than in the water where it has to interact with many more particles. Similarly, compared to the visible surface of the Sun, the corona is less dense, so the spacecraft interacts with fewer hot particles and doesn't receive as much heat.
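
To make that density argument concrete, here is a minimal back-of-the-envelope sketch. The particle densities, temperatures, and speeds are illustrative assumptions rather than mission or coronal-model values, and the flux estimate n·v·kT is only a rough scaling.

```python
# Back-of-the-envelope comparison of heat delivery: hot but tenuous
# corona-like plasma versus much cooler, far denser oven-like air.
# All numbers are illustrative assumptions, not mission or coronal-model values.
K_BOLTZMANN = 1.38e-23  # Boltzmann constant, J/K

def energy_flux(number_density_m3, temperature_k, speed_m_s):
    """Rough energy flux ~ n * v * kT, in watts per square meter."""
    return number_density_m3 * speed_m_s * K_BOLTZMANN * temperature_k

# Tenuous corona-like plasma: ~million-degree temperature, very few particles.
corona = energy_flux(number_density_m3=1e15, temperature_k=1e6, speed_m_s=1e5)

# Oven-like air: far cooler, but roughly ten orders of magnitude denser.
oven_air = energy_flux(number_density_m3=2.5e25, temperature_k=500, speed_m_s=500)

print(f"corona-like flux: {corona:.2e} W/m^2")   # ~1e3 W/m^2
print(f"oven-air flux:    {oven_air:.2e} W/m^2")  # ~1e8 W/m^2
# Despite a temperature thousands of times higher, the sparse plasma
# delivers far less energy per unit area than the dense, cooler air.
```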

That means that while Parker Solar Probe will be traveling through a space with temperatures of several million degrees, the surface of the heat shield that faces the Sun will only get heated to about 2,500 degrees Fahrenheit (about 1,400 degrees Celsius).

The Shield That Protects It

Of course, thousands of degrees Fahrenheit is still fantastically hot. (For comparison, lava from volcanic eruptions can be anywhere between 1,300 and 2,200 F (700 and 1,200 C).) To withstand that heat, Parker Solar Probe makes use of a heat shield known as the Thermal Protection System, or TPS, which is 8 feet (2.4 meters) in diameter and 4.5 inches (about 115 mm) thick. Those few inches of protection mean that just on the other side of the shield, the spacecraft body will sit at a comfortable 85 F (30 C).

The TPS was designed by the Johns Hopkins Applied Physics Laboratory, and was built at Carbon-Carbon Advanced Technologies, using a carbon composite foam sandwiched between two carbon plates. This lightweight insulation will be accompanied by a finishing touch of white ceramic paint on the sun-facing plate, to reflect as much heat as possible. Tested to withstand up to 3,000 F (1,650 C), the TPS can handle any heat the Sun can send its way, keeping almost all instrumentation safe.

The Cup that Measures the Wind

But not all of Parker Solar Probe's instruments will sit behind the TPS.

Poking out over the heat shield, the Solar Probe Cup is one of two instruments on Parker Solar Probe that will not be protected by the heat shield. This instrument is what's known as a Faraday cup, a sensor designed to measure the ion and electron fluxes and flow angles from the solar wind. Due to the intensity of the solar atmosphere, unique technologies had to be engineered to make sure that not only can the instrument survive, but also the electronics aboard can send back accurate readings.

The cup itself is made from sheets of titanium-zirconium-molybdenum, an alloy of molybdenum with a melting point of about 4,260 F (2,349 C). The chips that produce an electric field for the Solar Probe Cup are made from tungsten, the metal with the highest known melting point, 6,192 F (3,422 C). Normally, lasers are used to etch the gridlines in these chips; however, because of the high melting point, acid had to be used instead.

Another challenge came in the form of the electronic wiring -- most cables would melt from exposure to heat radiation at such close proximity to the Sun. To solve this problem, the team grew sapphire crystal tubes to suspend the wiring, and made the wires from niobium.

To make sure the instrument was ready for the harsh environment, the researchers needed to mimic the Sun's intense heat radiation in a lab. To create a test-worthy level of heat, the researchers used a particle accelerator and IMAX projectors -- jury-rigged to increase their temperature. The projectors mimicked the heat of the Sun, while the particle accelerator exposed the cup to radiation to make sure the cup could measure the accelerated particles under the intense conditions. To be absolutely sure the Solar Probe Cup would withstand the harsh environment, the Odeillo Solar Furnace -- which concentrates the heat of the Sun through 10,000 adjustable mirrors -- was used to test the cup against the intense solar emission.

The Solar Probe Cup passed its tests with flying colors -- indeed, it continued to perform better and give clearer results the longer it was exposed to the test environments. "We think the radiation removed any potential contamination," Justin Kasper, principal investigator for the SWEAP instruments at the University of Michigan in Ann Arbor, said. "It basically cleaned itself."

The Spacecraft That Keeps its Cool

Several other designs on the spacecraft keep Parker Solar Probe sheltered from the heat. Without protection, the solar panels -- which use energy from the very star being studied to power the spacecraft -- can overheat. At each approach to the Sun, the solar arrays retract behind the heat shield's shadow, leaving only a small segment exposed to the Sun's intense rays.

But that close to the Sun, even more protection is needed. The solar arrays have a surprisingly simple cooling system: a heated tank that keeps the coolant from freezing during launch, two radiators that will keep the coolant from freezing, aluminum fins to maximize the cooling surface, and pumps to circulate the coolant. The cooling system is powerful enough to cool an average sized living room, and will keep the solar arrays and instrumentation cool and functioning while in the heat of the Sun.

The coolant used for the system? About a gallon (3.7 liters) of deionized water. While plenty of chemical coolants exist, the range of temperatures the spacecraft will be exposed to varies between 50 F (10 C) and 257 F (125 C). Very few liquids can handle those ranges like water. To keep the water from boiling at the higher end of the temperatures, it will be pressurized so the boiling point is over 257 F (125 C).

Another issue with protecting any spacecraft is figuring out how to communicate with it. Parker Solar Probe will largely be alone on its journey. Light takes about eight minutes to travel from the Sun to Earth -- meaning that if engineers had to control the spacecraft from Earth, by the time they learned something had gone wrong it would be too late to correct it.

So, the spacecraft is designed to autonomously keep itself safe and on track to the Sun. Several sensors, about half the size of a cell phone, are attached to the body of the spacecraft along the edge of the shadow from the heat shield. If any of these sensors detect sunlight, they alert the central computer and the spacecraft can correct its position to keep the sensors, and the rest of the instruments, safely protected. This all has to happen without any human intervention, so the central computer software has been programmed and extensively tested to make sure all corrections can be made on the fly.
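
The flight software itself is not described here, but the decision logic outlined above can be sketched at a very high level. Everything in the snippet below -- the sensor count, function names, and correction scheme -- is a hypothetical stand-in to illustrate the idea, not Parker Solar Probe's actual fault-protection code.

```python
# Purely hypothetical sketch of the on-board decision logic described above.
import random

NUM_SENSORS = 8  # assumed number of light sensors along the shadow's edge

def read_limb_sensors():
    """Stand-in for hardware: True means that sensor sees direct sunlight."""
    return [random.random() < 0.05 for _ in range(NUM_SENSORS)]

def compute_attitude_correction(lit_sensor_ids):
    """Stand-in: derive a small slew that puts the lit sensors back in shadow."""
    return {"action": "slew_away_from_lit_side", "sensors": lit_sensor_ids}

def apply_correction(correction):
    """Stand-in for commanding the attitude-control system."""
    print("Correcting attitude:", correction)

def autonomous_shadow_keeping_step():
    # Sensors sit along the edge of the heat shield's shadow; any that see
    # sunlight mean the shield is no longer squarely facing the Sun.
    lit = [i for i, sees_sun in enumerate(read_limb_sensors()) if sees_sun]
    if lit:
        # Light-travel delay rules out waiting on ground controllers,
        # so the correction is decided and applied on board.
        apply_correction(compute_attitude_correction(lit))

autonomous_shadow_keeping_step()
```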

Launching toward the Sun

After launch, Parker Solar Probe will detect the position of the Sun, align the thermal protection shield to face it and continue its journey for the next three months, embracing the heat of the Sun and protecting itself from the cold vacuum of space.

Over its planned seven-year mission, the spacecraft will make 24 orbits of our star. On each close approach to the Sun it will sample the solar wind, study the Sun's corona, and provide unprecedented close-up observations of our star -- and armed with its slew of innovative technologies, we know it will keep its cool the whole time.

Credit: 
NASA/Goddard Space Flight Center

Relax, just break it

image: This shows the X-ray diffuse scattering that helped Argonne scientists and their collaborators start to answer long-held questions about relaxor ferroelectrics, a technologically important class of materials.

Image: 
Argonne National Laboratory

The properties of a solid depend on the arrangement of its atoms, which form a periodic crystal structure. At the nanoscale, arrangements that break this periodic structure can drastically alter the behavior of the material, but this is difficult to measure. Recent advances by scientists at the U.S. Department of Energy’s (DOE) Argonne National Laboratory are starting to unravel this mystery.

Using state-of-the-art neutron and synchrotron X-ray scattering, Argonne scientists and their collaborators are helping to answer long-held questions about a technologically important class of materials called relaxor ferroelectrics, which are often lead-based. These materials have mechanical and electrical properties that are useful in applications such as sonar and ultrasound. The more scientists understand about the internal structure of relaxor ferroelectrics, the better the materials that can be developed for these and other applications.

“We understand the long-range order very well, but for this experiment we developed novel tools and methods to study the local order.”— Stephan Rosenkranz, Argonne senior physicist

The dielectric constants of relaxor ferroelectrics, which express their ability to store energy when in an electric field, have an unusual dependence on the frequency of the field. Its origin has long been a mystery to scientists. Relaxor ferroelectrics can also have exceedingly high piezoelectric properties, which means that when mechanically strained they develop an internal electric field, or, conversely, they expand or contract in the presence of an external electric field. These properties make relaxor ferroelectrics useful in technologies where energy must be converted between mechanical and electrical.

Because lead is toxic, scientists are trying to develop non-lead-based materials that can perform even better than the lead-based ferroelectrics. To develop these materials, scientists are first trying to uncover what aspects of the relaxor ferroelectric’s crystal structure cause its unique properties. Although the structure is orderly and predictable on average, deviations from this order can occur on a local, or nanoscale level. These breaks in the long-range symmetry of the overall structure play a crucial role in determining the material’s properties.

“We understand the long-range order very well, but for this experiment we developed novel tools and methods to study the local order,” said Argonne senior physicist Stephan Rosenkranz.

Scientists from Argonne and the National Institute of Standards and Technology, along with their collaborators, studied a series of lead-based ferroelectrics with different local orders, and therefore different properties. Using new instrumentation designed by Argonne scientists that is able to provide a much larger and more detailed measurement than previous instruments, the team studied the diffuse scattering of the materials, or how the local deviations in structure affect the otherwise more orderly scattering pattern.

Previous researchers have identified a certain diffuse scattering pattern, which takes the shape of a butterfly, and associated it with the anomalous dielectric properties of relaxor ferroelectrics. When Argonne scientists analyzed their experimental data, however, they found that the butterfly-shaped scattering was strongly correlated with piezoelectric behavior.

“Now we can think about what kind of local order causes this butterfly scattering, and how can we design materials that have the same structural features that give rise to this effect,” said Argonne physicist Danny Phelan.

As for the real cause of the anomalous dielectric properties, the scientists propose that it arises from competing interactions that lead to “frustration” in the material.

The new discoveries stemmed from the scientists’ use of both neutron scattering and X-ray scattering. “There is invaluable complementarity to using both of these techniques,” said Phelan. “Using one or the other doesn’t give you the whole picture.”

The scientists will use these discoveries to inform models of relaxor ferroelectrics that are used to develop new materials. Future experiments will further illuminate the relationship between local order and material properties.

Credit: 
DOE/Argonne National Laboratory

Toward a secure electrical grid

Not long ago, getting a virus was about the worst thing computer users could expect in terms of system vulnerability. But in our current age of hyper-connectedness and the emerging Internet of Things, that's no longer the case. With connectivity, a new principle has emerged, one of universal concern to those who work in the area of systems control, like João Hespanha, a professor in the departments of Electrical and Computer Engineering, and Mechanical Engineering at UC Santa Barbara. That principle says, essentially, that the more complex and connected a system is, the more susceptible it is to disruptive cyber-attacks.

"It is about something much different than your regular computer virus," Hespanha said. "It is more about cyber physical systems -- systems in which computers are connected to physical elements. That could be robots, drones, smart appliances, or infrastructure systems such as those used to distribute energy and water."

In a paper titled "Distributed Estimation of Power System Oscillation Modes under Attacks on GPS Clocks," published this month in the journal IEEE Transactions on Instrumentation and Measurement, Hespanha and co-author Yongqiang Wang (a former UCSB postdoctoral researcher, now a faculty member at Clemson University) suggest a new method for protecting the increasingly complex and connected power grid from attack.

The question that arises in any system that incorporates many sensors for monitoring is, what if someone intercepts the communication between two sensors that are trying to assess the health of the system? How does the system know not to believe -- and act on -- the false information?

Hespanha explained, "In the power grid, you have to be able to identify what the voltage and the current are at specific, highly precise points in time" for multiple points along the grid. Knowing the speed at which electricity moves, the distance between sensors, and the time it takes an oscillation to move between sensors, one can determine whether the oscillation is real.

Making these precise, high-resolution measurements anywhere in the grid is possible through the use of phasor measurement units (PMUs) -- devices that are synchronized to the atomic clocks used in GPS. With the energy grid becoming increasingly distributed, power providers now have to monitor the system more, and PMUs are among the most important devices for doing so. While PMUs could be used to inform autonomous control systems, so far they have seen limited use for one simple reason: they are vulnerable to GPS spoofing attacks.

"There is the possibility," Hespanha said, "that someone will hack the system and cause a catastrophic failure."

The attack could be as simple as someone taking a GPS jammer to a remote power-distribution station and tricking the system into providing false measurements, leading to a cascade effect as false readings ripple through the system and incorrect actions are taken. Since it is virtually impossible to prevent a hacker from getting close enough to a remote substation to jam its GPS, Hespanha said, "What you need is a control system that can process the information to make good decisions. The system has to keep hypothesizing that what it is reading is not real."

How It Can Work

"The power-supply system is a distributed system, so measurements are being made in many places," Hespanha explained. "If one of them starts to give erratic or unexpected measurements -- a sudden current surge or a voltage drop -- you should be able to determine whether those measurements make sense."

In the case of an actual fluctuation, such as when many people in Los Angeles are using their air-conditioning on a hot summer day, the result may be a slight drop in the alternating-current frequency in the city. That drop creates a disturbance which propagates along the power grid stretching from western Canada south to Baja California in Mexico and reaching eastward over the Rockies to the Great Plains. As the disturbance travels through the grid, the power stations that feed the grid try to counteract it by generating extra power if the frequency is too low or decreasing production if the frequency becomes too high.

"You're going to start by seeing oscillation on the grid," Hespanha explained. "That's exactly what the PMUs are looking for. You then compare the precise time you saw the disturbance in Los Angeles to the time you saw it in Bakersfield and then at other sensors as it continues north. And if those readings don't reflect the physics of how electricity moves, that's an indication something's wrong. The PMUs are there to see oscillations and to help dampen them to prevent them from developing."

But if someone fooled an automated system, the PMUs, instead of damping the oscillations, could create them.

So how would such an attack be recognized and stopped? To illustrate, Hespanha draws an electrical line running between Los Angeles and Seattle, with many smaller, ancillary lines running off to the sides. "If power is going in a certain direction, you should also be able to see any oscillation in the side lines in that direction. And you know the physical model of what things should do, so an attacker who changed the measurement on the main line would also have to mess up a lot of other measurements on the side lines along the way. And that would be very difficult."

Testing suggests that Hespanha's system would be resistant to attack and remain effective even if one-third of the sensor nodes were compromised. "That would allow for a much more autonomous system; that's the next big step," said Hespanha. "This is an enabling technology that will be needed to make a lot of this control come online. And it will be needed soon, because the system gets more complex all the time and is therefore more susceptible to attack."

Credit: 
University of California - Santa Barbara

Food for thought: How the brain reacts to food may be linked to overeating

UNIVERSITY PARK, Pa. -- The reason why some people find it so hard to resist finishing an entire bag of chips or bowl of candy may lie with how their brain responds to food rewards, leaving them more vulnerable to overeating.

In a study with children, researchers found that when certain regions of the brain reacted more strongly to being rewarded with food than to being rewarded with money, those children were more likely to overeat, even when they weren't hungry and regardless of whether they were overweight.

Shana Adise, a postdoctoral fellow at the University of Vermont who led the study while earning her doctorate at Penn State, said the results give insight into why some people may be more prone to overeating than others. The findings may also give clues on how to help prevent obesity at a younger age.

"If we can learn more about how the brain responds to food and how that relates to what you eat, maybe we can learn how to change those responses and behavior," Adise said. "This also makes children an interesting population to work with, because if we can stop overeating and obesity at an earlier age, that could be really beneficial."

Previous research on how the brain's response to food can contribute to overeating has been mixed. Some studies have linked overeating with brains that are more sensitive to food rewards, while others have found that being less sensitive to receiving food rewards makes you more likely to overeat.

Additionally, other studies have shown that people who are willing to work harder for food than other types of rewards, like money, are more likely to overeat and gain weight over time. But the current study is the first to show that children who have greater brain responses to food compared to money rewards are more likely to overeat when appealing foods are available.

"We know very little about the mechanisms that contribute to overeating," Adise said. "The scientific community has developed theories that may explain overeating, but whether or not they actually relate to food intake hadn't yet been evaluated. So we wanted to go into the lab and test whether a greater brain response to anticipating and winning food, compared to money, was related to overeating."

For the study, 59 children between the ages of 7 and 11 made four visits to Penn State's Children's Eating Behavior Laboratory.

During the first three visits, the children were given meals designed to measure how they eat in a variety of different situations, such as a typical meal when they're hungry versus snacks when they're not hungry. How much the children ate at each meal was determined by weighing the plates before and after the meals.

On their fourth visit, the children had fMRI scans as they played several rounds of a game in which they guessed if a computer-generated number would be higher or lower than five. They were then told that if they were right, they would win either money, candy or a book, before it was revealed if they were correct or not.

The researchers found that when various regions of the brain reacted more to anticipating or winning food compared to money, those children were more likely to overeat.

"We also found that the brain's response to food compared to money was related to overeating regardless of how much the child weighed," Adise said. "Specifically, we saw that increased brain responses in areas of the brain related to cognitive control and self control when the children received food compared to money were associated with overeating."

Adise added that this is important because it suggests there may be a way to identify brain responses that can predict the development of obesity in the future.

Kathleen Keller, associate professor of nutritional sciences, Penn State, said the study -- recently published in the journal Appetite -- backs up the theory that an increased brain response in regions of the brain related to rewards is associated with eating more food in a variety of situations.

"We predicted that kids who had an increased response to food relative to money would be the ones to overeat, and that's what we ended up seeing," Keller said. "We specifically wanted to look at kids whose brains responded to one type of a reward over another. So it wasn't that they're overly sensitive to all rewards, but that they're highly sensitive to food rewards."

Keller said the findings give insight into how the brain influences eating, which is important because it could help identify children who are at risk for obesity or other poor eating habits before those habits actually develop.

"Until we know the root cause of overeating and other food-related behaviors, it's hard to give good advice on fixing those behaviors," Keller said. "Once patterns take over and you overeat for a long time, it becomes more difficult to break those habits. Ideally, we'd like to prevent them from becoming habits in the first place."

Credit: 
Penn State

Princeton-UPenn research team finds physics treasure hidden in a wallpaper pattern

image: A newly identified insulating material using the symmetry principles behind wallpaper patterns may provide a basis for quantum computing, according to an international team of researchers. This strontium-lead sample (Sr2Pb3) has a fourfold Dirac cone surface state: a set of four two-dimensional electronic surface states that disperse away from a point in momentum space along straight lines.

Image: 
Image courtesy of Benjamin Wieder, Princeton University Department of Physics

An international team of scientists has discovered a new, exotic form of insulating material with a metallic surface that could enable more efficient electronics or even quantum computing. The researchers developed a new method for analyzing existing chemical compounds that relies on mathematical properties, such as symmetry, that govern the repeating patterns seen in everyday wallpaper.

"The beauty of topology is that one can apply symmetry principles to find and categorize materials," said B. Andrei Bernevig, a professor of physics at Princeton.

The research, appearing July 20 in the journal Science, involved a collaboration among groups from Princeton University, the University of Pennsylvania (Penn), Sungkyunkwan University, Freie Universität Berlin and the Max Planck Institute of Microstructure Physics.

The discovery of this form of lead-strontium (Sr2Pb3) completes a decade-long search for an elusive three-dimensional material that combines the unique electronic properties of two-dimensional graphene and three-dimensional topological insulators, a phase of matter discovered in 2005 in independent works by Charles Kane at Penn and Bernevig at Princeton.

Some scientists have theorized that topological insulators, which insulate on their interior but conduct electricity on their surface, could serve as a foundation for super-fast quantum computing.

"You can think about a topological insulator like a Hershey's kiss," said Kane, a corresponding author on the paper. "The chocolate is the insulator and the foil is a conductor. We've been trying to identify new classes of materials in which crystal symmetries protect the conducting surface. What we've done here is to identify the simplest kind of topological crystalline insulator."

The new work demonstrates how the symmetries of certain two-dimensional surfaces, known as the 17 wallpaper groups for their wallpaper-like patterning, constrain the spatial arrangement (topology) of three-dimensional insulators.

In a conventional three-dimensional topological insulator, each two-dimensional surface exhibits a single characteristic group of states with cone-like dispersion. These cones resemble the elements on graphene called Dirac cones, features that imbue the material and other two-dimensional Dirac semimetals with their unusual electronic transport qualities, but they are distinct because graphene possesses a total of four Dirac cones in two pairs that are "glued" together.

Kane had suspected that with crystal symmetries, a second kind of topological insulator could exist with a single pair of glued Dirac cones. "What I realized was that a single pair of Dirac cones is impossible in a purely two-dimensional material, but it might be possible at the surface of a new kind of topological insulator. But when I tried to construct such a state, the two cones always came unglued."

A solution emerged when Benjamin Wieder, then a graduate student in Kane's group and now a Princeton postdoctoral associate, visited Princeton. At Princeton, Bernevig and colleague Zhi Jun Wang had just discovered "hourglass insulators" -- topological insulators with strange patterns of interlocking hourglass-like states -- which Wieder recognized as acting as if you had wrapped a three-dimensional crystal with a special kind of patterned wallpaper.

"We realized that you could get not just the hourglass insulator, but also this special Dirac insulator, by finding a crystal that looked like it was covered in the right wallpaper," said Wieder.

In particular, they recognized that a glued pair of Dirac cones could be stabilized on crystal surfaces that have two intersecting lines along which the surfaces look identical after being flipped and turned perpendicularly. These lines, known as glide reflections, characterize the so-called nonsymmorphic wallpaper groups, and thus provide the namesake of this new phase, which the team dubbed a "nonsymmorphic Dirac insulator."

The researchers quickly went to work applying mathematical rigor to Wieder's inspiration, resulting in a new, wallpaper symmetry-based methodology for diagnosing the bulk topology of three-dimensional crystals.

"The basic principles are simple enough that we sketched them on napkins that very evening," said co-author Barry Bradlyn, an associate research scholar in the Princeton Center for Theoretical Science (PCTS).

"But they are nevertheless robust enough to predict and understand a zoo of new topological phases in real materials," said Wang, a postdoctoral research associate in physics.

The discovery allowed the scientists to directly relate the symmetry of a surface to the presence of desired topological surface states for the first time, said Penn's Andrew Rappe, another co-author on the paper. "This allows an elegant and immediately useful means of designing desirable surface and interface states."

To identify the Dirac insulating phase in nature, the researchers calculated the electronic structures of hundreds of previously synthesized compounds with surfaces with two glide lines (wallpaper groups pgg and p4g) before identifying the novel topology in lead-strontium.

The computational chemists "knew they were searching for a needle in a haystack, but nobody bothered to tell them how small the needle might be," said Jennifer Cano, an associate research scholar at PCTS.

As even more exotic topological insulators are discovered, the role of wallpaper group symmetry, and of the special, graphene-like cones in the Dirac insulator, has been further solidified.

"When you can split a true surface Dirac cone while keeping time-reversal symmetry, something truly special happens," said Bernevig. "You get three-dimensional insulators whose two-dimensional surfaces are also a kind of topological insulator." Such phases have been predicted recently in bismuth crystals and molybdenum ditelluride (MoTe2) by several members of the collaboration.

Furthermore, with the use of a new theory, topological quantum chemistry, the researchers hope to find many more of these exotic phases.

"If we could paint these materials with the right wallpaper, we'd see more Dirac insulators," said Wieder, "but sometimes, the wrong wallpaper is interesting too."

Credit: 
Princeton University

New battery could store wind and solar electricity affordably and at room temperature

A new combination of materials developed by Stanford researchers may aid in developing a rechargeable battery able to store the large amounts of renewable power created through wind or solar sources. With further development, the new technology could deliver energy to the electric grid quickly, cost effectively and at normal ambient temperatures.

The technology - a type of battery known as a flow battery - has long been considered a likely candidate for storing intermittent renewable energy. However, until now the kinds of liquids that could produce the electrical current have either been limited in the amount of energy they could deliver or have required extremely high temperatures or very toxic or expensive chemicals.

Stanford assistant professor of materials science and engineering William Chueh, along with his PhD student Antonio Baclig and Jason Rugolo, now a technology prospector at Alphabet's research subsidiary X Development, decided to try sodium and potassium, which when mixed form a liquid metal at room temperature, as the fluid for the electron donor - or negative - side of the battery. Theoretically, this liquid metal has at least 10 times the available energy per gram as other candidates for the negative-side fluid of a flow battery.

"We still have a lot of work to do," said Baclig, "but this is a new type of flow battery that could affordably enable much higher use of solar and wind power using Earth-abundant materials."

The group published their work in the July 18 issue of Joule.

Separating sides

In order to use the liquid metal negative end of the battery, the group found a suitable ceramic membrane made of potassium and aluminum oxide to keep the negative and positive materials separate while allowing current to flow.

The two advances together more than doubled the maximum voltage of conventional flow batteries, and the prototype remained stable for thousands of hours of operation. This higher voltage means the battery can store more energy for its size, which also brings down the cost of producing the battery.
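
The arithmetic behind that claim is simple: for the same amount of charge the chemistry can shuttle, stored energy scales linearly with cell voltage. The sketch below uses made-up capacity and voltage figures purely to illustrate the scaling; they are not values reported for the Stanford prototype.

```python
# Minimal sketch of why a higher cell voltage means more stored energy for
# the same size. Capacity and voltage figures are made-up illustrations.

def stored_energy_wh(capacity_ah, cell_voltage_v):
    """Energy (watt-hours) = charge capacity (amp-hours) x voltage (volts)."""
    return capacity_ah * cell_voltage_v

same_capacity_ah = 100  # the same amount of charge in both cases
print(stored_energy_wh(same_capacity_ah, 1.5))  # 150.0 Wh at the lower voltage
print(stored_energy_wh(same_capacity_ah, 3.0))  # 300.0 Wh once the voltage doubles
```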

"A new battery technology has so many different performance metrics to meet: cost, efficiency, size, lifetime, safety, etc.," said Baclig. "We think this sort of technology has the possibility, with more work, to meet them all, which is why we are excited about it."

Improvements ahead

The team of Stanford PhD students, which in addition to Baclig includes Geoff McConohy and Andrey Poletayev, found that the ceramic membrane very selectively prevents sodium from migrating to the positive side of the cell - critical if the membrane is going to be successful. However, this type of membrane is most effective at temperatures higher than 200 degrees Celsius (392 F). In pursuit of a room-temperature battery, the group experimented with a thinner membrane. This boosted the device's power output and showed that refining the membrane's design is a promising path.

They also experimented with four different liquids for the positive side of the battery. The water-based liquids quickly degraded the membrane, but they think a non-water-based option will improve the battery's performance.

Credit: 
Stanford University

Solar thermal energy will help China cut costs of climate action

image: This is the Crescent Dunes CSP project with 10 hours of storage.

Image: 
SolarReserve

China's power systems operators must invest in renewable energy to meet climate commitments. Wind power and PV are the lowest cost renewables, but they only deliver power when it's windy or sunny.

By contrast, more expensive Concentrated Solar Power (CSP), which can store its solar energy relatively inexpensively, and for long durations, can deliver power at any time, day or night.

Surprisingly, even though it's more expensive, CSP could ultimately prove less costly for a power system with a lot of renewable energy because of its ability to dispatch its solar power day or night.

The study finds that if CSP were substituted for between 5% and 20% of planned PV and wind power in Gansu Province and Qinghai Province, it would bring the greatest benefit to power system operators, reducing curtailment of wind and PV while lowering the operational costs of the baseload coal generators that must ramp up and down to compensate for fluctuating generation from solar and wind.

Previous studies have only analyzed the flexibility benefits of CSP from the point of view of maximizing ROI to potential investors and developers. The new study helps to fill a gap in economic research designed to maximize the long-term benefits of CSP to the overall power system.

Chinese policymakers want to know the best plan

A research team from Beijing's Tsinghua University reports its findings in the July issue of the journal Applied Energy, in a paper titled "Economic justification of concentrating solar power in high renewable energy penetrated power systems" (see https://www.solarpaces.org/study-csp-will-help-china-cut-costs-of-climate-action/).

They analyzed the cost-benefit of various levels of CSP in place of planned Variable Renewable Energy (VRE) like PV and wind.

In two provinces in particular, Qinghai and Gansu, which plan to supply 83% and 104% respectively of their maximum load with VRE, the authors found that substituting CSP for between 5% and 20% of VRE would result in the lowest cost to the system operator.

Previous papers from these researchers have provided power system planning blueprints for China's policymakers at the NDRC.

Lead author Prof. Chongqing Kang, who heads the Electrical Engineering department at Tsinghua, is the much-cited author of over 300 studies on renewables and power system planning and operation. Second author, Associate Professor Ning Zhang, has focused on renewable energy analytics and optimization in power systems.

"We have had very close collaboration with this government," Prof. Kang told SolarPACES.

"We have proposed several research studies before about wind and solar, and they have now have raised more interest in CSP, which is still in its first stage of development. The reason for the interest is that China has set a very aggressive goal for renewable energy and wind and PV are already in fast development. They have several people that focus on renewable energy at the NDRC, which is under the Energy Bureau."

The study quantifies the "levelized benefit" of CSP

The study focused on the benefit of CSP specifically to the power systems in Qinghai and Gansu. Both provinces have excellent solar resources and good siting opportunities for large solar or wind plants, and very ambitious plans for deploying wind and solar technologies.

Qinghai plans to supply 82.3% of maximum load demand with a combined 13 GW of VRE; from 3 GW of wind power and 10 GW of PV. Gansu plans to supply 104.3% of maximum load demand from a combined 27 GW of VRE; 20 GW of wind and 7 GW of solar PV.

By combining the economic benefit of CSP as a flexible renewable energy generation resource that is able to dispatch solar on demand and further reduce wind power and PV curtailment, they derive a "levelized benefit" figure for CSP.

The study suggests an additional energy and flexibility benefit of between 18 and 30 cents per kilowatt hour if CSP replaced between 5% and 20% of the proposed solar PV and wind power in these provinces. The higher value of CSP's energy and flexibility benefit justifies its relatively higher investment cost.
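
The paper's dispatch model is far more detailed, but the bookkeeping behind a "levelized benefit" can be sketched as the system-level savings attributable to CSP divided by the CSP energy delivered. The cost categories and numbers below are placeholders chosen only to show the shape of the calculation, not figures from the study.

```python
# Rough sketch of the "levelized benefit" bookkeeping described above:
# system-level savings attributable to CSP divided by the CSP energy
# delivered. All categories and numbers are placeholders.

def levelized_benefit_per_kwh(avoided_curtailment_cost,
                              avoided_coal_fuel_cost,
                              avoided_ramping_and_start_stop_cost,
                              csp_energy_delivered_kwh):
    total_savings = (avoided_curtailment_cost
                     + avoided_coal_fuel_cost
                     + avoided_ramping_and_start_stop_cost)
    return total_savings / csp_energy_delivered_kwh

# Placeholder annual figures for a hypothetical province (USD and kWh).
benefit = levelized_benefit_per_kwh(
    avoided_curtailment_cost=120e6,
    avoided_coal_fuel_cost=60e6,
    avoided_ramping_and_start_stop_cost=20e6,
    csp_energy_delivered_kwh=800e6,
)
print(f"levelized benefit: ${benefit:.3f}/kWh")  # 0.250 $/kWh with these inputs
```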

Confident that the technical immaturity of CSP is temporary

The study comes at a time of bold plans in China: to literally double 2018 global CSP deployment of 5 GW by 2020. Following a 1 GW round of 18 demonstration projects, China plans to build 5 GW of CSP.

Some initial targets in the first round of demonstration projects have proven harder to achieve than expected. Several projects dropped out, unable to reach an initial milestone on time.

However, the authors are very confident that these growing pains are surmountable, noting CSP has barely begun deployment compared with PV and wind.

"Not all of the parts can be produced by China at this point, so the learning process in the construction process is a little delayed," Kang said. They emphasized that CSP startup problems are not insuperable: "they are still learning; development will be faster in the near future."

Why China will need longer hours of CSP storage

All of China's planned CSP includes Thermal Energy Storage (TES). The study notes:

"TES systems in CSP plants are currently less costly (with capital costs around 20-70 $/kWh) than battery energy storage systems (with capital cost above $150/kWh)"

"CSP is a new technology that can be flexibly dispatched," Kang noted. "I think China does not want to miss that technology. So the initial 20 projects, for about 1 GW of CSP, are to say how this technology works in China."

China's need for nighttime power is greater than that of other nations, because factories hum all night in many regions.

"One previous informal suggestion I've made is that storage should be longer in China," he said. "In big cities, like Beijing and Shanghai, our load is about 60% at night, about like big cities in the US - but in Western China, factories operate 24 hours. The load at night is about 80% of daytime, it does not really disappear, so they need long duration storage; at least 10 hours."

The analysis simulates an entire power system. Dr. Zhang and PhD candidate Ershun Du, of the Electrical Engineering department at Tsinghua University in Beijing, helped design the analysis software, drawing on generation and transmission expansion plans and load forecasting data.

"The analysis tool or software that we use is in-house developed software by our team; the GOPT. It is a power system operation system software able to conduct year-round power system dispatch considering a wide range of types of generation and detailed AC/DC power grid and practical dispatching rules" Du explained.

"The software simulates the power system operation through a long time period using sufficient amount of VRE output scenarios so that it is able to deliver a reliable estimate on the economics of power system operation with wind, PV and CSP."

The data comes from the electric power planning blueprints for each province.

"We conducted this analysis to simulate whether investing in the CSP plants is economic or not in in Qinghai Province and Gansu Province, to justify how large or how much benefit the CSP power plants can bring," said Du, who in 2017 was a visiting scholar at NREL where related studies have estimated the value that CSP brings to the grid within the Western US Interconnect.

Finding: CSP benefits outweigh costs in both provinces

CSP brought the greatest benefit to Gansu Province, where it would not only reduce curtailment of solar and wind power but also cut the costs of existing coal-fired generation, lowering fuel, ramping, and start-stop costs as those plants try to fill in around ever-growing solar and wind output.

In Qinghai Province, the benefit would be lower. CSP would be built in a high desert region where several large rivers originate in the high mountains. "They are two very different power systems, and we found that CSP has more benefits in Gansu Province, because Qinghai Province already has a lot of Hydro," Du explained. Like CSP, hydro is dispatchable, making it an equally good "filler" with PV and wind.

In Gansu, the benefit value was between 24 and 30 cents per kilowatt-hour of generation (0.238-0.300 $/kWh). In Qinghai, with plentiful hydro, the levelized benefit value was under 20 cents (0.177-0.191 $/kWh).

"We find that even with a higher initial cost to build CSP, investing in CSP is still economic in both provinces because of its very high external benefit of accommodating wind power and PV that leads to lower cost over time in power system operations," concluded Zhang. ??"However, CSP subsidies are still required to internalize the benefit to pay back its heavy investment.

Credit: 
SolarPACES

UCI-led study finds therapy dogs effective in reducing symptoms of ADHD

Irvine, Calif., July 17, 2018 - In a first-of-its-kind randomized trial, researchers from the UCI School of Medicine found therapy dogs to be effective in reducing the symptoms of attention-deficit/hyperactivity disorder (ADHD) in children. The study's main outcomes were recently published by the American Psychological Association in the Society of Counseling Psychology's Human-Animal Interaction Bulletin (HAIB). Additional new findings were presented at the International Society for Anthrozoology 2018 Conference held July 2-5 in Sydney, Australia.

Titled, "A Randomized Controlled Trial of Traditional Psychosocial and Canine-Assisted Interventions for Children with ADHD," the research involved children aged 7 to 9 who had been diagnosed with ADHD and who had never taken medicines for their condition. The study randomized participants to compare benefits from evidenced-based, "best practice" psychosocial interventions with the same intervention augmented by the assistance of certified therapy dogs. The research was led by Sabrina E. B. Schuck, PhD, MA, executive director of the UCI Child Development Center and assistant professor in residence in the Department of Pediatrics at UCI School of Medicine.

Results from Schuck's research indicate that children with ADHD who received canine-assisted intervention (CAI) experienced a reduction in inattention and an improvement in social skills. And while both CAI and non-CAI interventions were ultimately found to be effective for reducing overall ADHD symptom severity after 12 weeks, the group assisted by therapy dogs fared significantly better, showing improved attention and social skills at only eight weeks and fewer behavioral problems. No significant group differences, however, were reported for hyperactivity and impulsivity.

"Our finding that dogs can hasten the treatment response is very meaningful," said Schuck. "In addition, the fact that parents of the children who were in the CAI group reported significantly fewer problem behaviors over time than those treated without therapy dogs is further evidence of the importance of this research."

Guidelines from the American Academy of Pediatrics for the management of ADHD underscore the importance of both psychopharmacological and psychosocial therapies. Patients who receive psychosocial therapy prior to medications have been shown to fare better. Additionally, many families prefer not to use medications in young children.

"The take away from this is that families now have a viable option when seeking alternative or adjunct therapies to medication treatments for ADHD, especially when it comes to impaired attention," said Schuck. "Inattention is perhaps the most salient problem experienced across the life span for individuals with this disorder."

This study is the first known randomized controlled trial of CAI for children with ADHD. It illustrates that the presence of therapy dogs enhances traditional psychosocial intervention and is feasible and safe to implement.

Animal-assisted intervention (AAI) has been used for decades; however, only recently has empirical evidence begun to support these practices, reporting benefits that include reduced stress, improved cognitive function, reduced problem behaviors and improved attention.

Credit: 
University of California - Irvine

Splitting water: Nanoscale imaging yields key insights

image: Berkeley Lab researchers Francesca Toma (left) and Johanna Eichhorn used a photoconductive atomic force microscope to better understand materials for artificial photosynthesis.

Image: 
Marilyn Chung/Berkeley Lab

In the quest to realize artificial photosynthesis to convert sunlight, water, and carbon dioxide into fuel - just as plants do - researchers need to not only identify materials to efficiently perform photoelectrochemical water splitting, but also to understand why a certain material may or may not work. Now scientists at Lawrence Berkeley National Laboratory (Berkeley Lab) have pioneered a technique that uses nanoscale imaging to understand how local, nanoscale properties can affect a material's macroscopic performance.

Their study, "Nanoscale Imaging of Charge Carrier Transport in Water Splitting Anodes", has just been published in Nature Communications. The lead researchers were Johanna Eichhorn and Francesca Toma of Berkeley Lab's Chemical Sciences Division.

"This technique correlates the material's morphology to its functionality, and gives insights on the charge transport mechanism, or how the charges move inside the material, at the nanoscale," said Toma, who is also a researcher in the Joint Center for Artificial Photosynthesis, a Department of Energy Innovation Hub.

Artificial photosynthesis seeks to produce energy-dense fuel using only sunlight, water, and carbon dioxide as inputs. The advantage of such an approach is that it does not compete against food stocks and would produce no or low greenhouse gas emissions. A photoelectrochemical water splitting system requires specialized semiconductors that use sunlight to split water molecules into hydrogen and oxygen.

Bismuth vanadate has been identified as a promising material for a photoanode, which provides charges to oxidize water in a photoelectrochemical cell. "This material is a case example in which efficiency should be theoretically good, but in experimental tests you actually observe very poor efficiency," Eichhorn said. "The reasons for that are not completely understood."

The researchers used photoconductive atomic force microscopy to map the current at every point of the sample with high spatial resolution. This technique has already been used to analyze local charge transport and optoelectronic properties of solar cell materials, but had not previously been used to understand charge-carrier transport limitations at the nanoscale in photoelectrochemical materials.

Eichhorn and Toma worked with scientists at the Molecular Foundry, a nanoscale science research facility at Berkeley Lab, on these measurements through the Foundry's user program. They found that there were differences in performance related to the nanoscale morphology of the material.

"We discovered that the way charges are utilized is not homogeneous over the whole sample, but rather, there's heterogeneity," Eichhorn said. "Those differences in performance may account for its macroscopic performance - the overall output of the sample - when we perform water splitting."

To understand this characterization, Toma gives the example of a solar panel. "Let's say the panel has 22 percent efficiency," she said. "But can you tell at the nanoscale, at each point in the panel, that it will give you 22 percent efficiency? This technique enables you to say, yes or no, specifically for photoelectrochemical materials. If the answer is no, it means there are less active spots on your material. In the best case it just decreases your total efficiency, but if there are more complex processes, your efficiency can be decreased by a lot."

The improved understanding of how the bismuth vanadate is working will also allow researchers to synthesize new materials that may be able to drive the same reaction more efficiently. This study builds on previous research by Toma and others, in which she was able to analyze and predict the mechanism that defines (photo)chemical stability of a photoelectrochemical material.

Toma said these results put scientists much closer to achieving efficient artificial photosynthesis. "Now we know how to measure local photocurrent in these materials, which have very low conductivity," she said. "The next step is to put all of this in a liquid electrolyte and do exactly the same thing. We have the tools. Now we know how to interpret the results, and how to analyze them, which is an important first step for moving forward."

Credit: 
DOE/Lawrence Berkeley National Laboratory

New creepy, crawly search and rescue robot developed at Ben-Gurion U

image: Ben-Gurion University of the Negev researchers designed the Rising Sprawl-Tuned Autonomous Robot (RSTAR) to function simply and reliably, change shape and overcome common obstacles without any external mechanical intervention. RSTAR uses adjustable sprawling legs angled downwards and outwards from its body to creep and crawl and climb over and through a variety of obstacles and surfaces.

Image: 
Ben-Gurion U

NEW YORK...July 18, 2018 - A new highly maneuverable search and rescue robot that can creep, crawl and climb over rough terrain and through tight spaces has been developed by Ben-Gurion University of the Negev (BGU) researchers.

The new Rising Sprawl-Tuned Autonomous Robot (RSTAR) utilizes adjustable sprawling wheel legs attached to a body that can move independently and reposition itself to run on flat surfaces, climb over large obstacles and up closely-spaced walls, and crawl through a tunnel, pipe or narrow gaps. See video.

The innovative BGU robot was introduced at the International Conference on Robotics and Automation (ICRA 2018) in Brisbane, Australia, May 21-25.

"The RSTAR is ideal for search and rescue operations in unstructured environments, such as collapsed buildings or flooded areas, where it must adapt and overcome a variety of successive obstacles to reach its target," says Dr. David Zarrouk, a lecturer in BGU's Department of Mechanical Engineering, and head of the Bio-Inspired and Medical Robotics Lab. "It is the newest member of our family of STAR robots."

Dr. Zarrouk and BGU student and robotics lab worker Liran Yehezkel designed RSTAR to function simply and reliably, change shape and overcome common obstacles without any external mechanical intervention. Its speed and relatively low energy consumption make the robot ideal for a broad range of applications that may require longer work time.

The robot uses its round wheels to travel more than three feet per second on hard flat surfaces and switches to spoke wheels to traverse extremely soft or granular surfaces, like thick mud or sand, without getting stuck. It also climbs vertically and crawls horizontally by pressing its wheels to walls without touching the floor.

The BGU team is working on a larger STAR robot version that will climb over larger obstacles, including stairs, and carry more than four pounds of sensors and supplies. A smaller STAR or RSTAR will piggyback on the larger robot to use in hard-to-reach areas and sneak in between narrow cracks and passages.

Credit: 
American Associates, Ben-Gurion University of the Negev

Supersharp images from new VLT adaptive optics

image: This image of the planet Neptune was obtained during the testing of the Narrow-Field adaptive optics mode of the MUSE/GALACSI instrument on ESO's Very Large Telescope. The corrected image is sharper than a comparable image from the NASA/ESA Hubble Space Telescope.

Image: 
ESO/P. Weibacher (AIP)

The MUSE (Multi Unit Spectroscopic Explorer - https://www.eso.org/public/teles-instr/paranal-observatory/vlt/vlt-instr/muse/) instrument on ESO's Very Large Telescope (VLT - http://www.eso.org/public/teles-instr/paranal-observatory/vlt/) works with an adaptive optics unit called GALACSI. This makes use of the Laser Guide Star Facility, 4LGSF, a subsystem of the Adaptive Optics Facility (AOF - https://www.eso.org/public/teles-instr/technology/adaptive_optics/). The AOF provides adaptive optics for instruments on the VLT's Unit Telescope 4 (UT4). MUSE was the first instrument to benefit from this new facility, and it now has two adaptive optics modes -- the Wide Field Mode and the Narrow Field Mode [1].

The MUSE Wide Field Mode coupled to GALACSI in ground-layer mode corrects for the effects of atmospheric turbulence up to one kilometre above the telescope over a comparatively wide field of view. The new Narrow Field Mode, using laser tomography, corrects for almost all of the atmospheric turbulence above the telescope to create much sharper images, but over a smaller region of the sky [2].

With this new capability, the 8-metre UT4 reaches the theoretical limit of image sharpness and is no longer limited by atmospheric blur. This is extremely difficult to attain in the visible and gives images comparable in sharpness to those from the NASA/ESA Hubble Space Telescope. It will enable astronomers to study in unprecedented detail fascinating objects such as supermassive black holes at the centres of distant galaxies, jets from young stars, globular clusters, supernovae, planets and their satellites in the Solar System and much more.

Adaptive optics is a technique to compensate for the blurring effect of the Earth's atmosphere, also known as astronomical seeing, which is a big problem faced by all ground-based telescopes. The same turbulence in the atmosphere that causes stars to twinkle to the naked eye results in blurred images of the Universe for large telescopes. Light from stars and galaxies becomes distorted as it passes through our atmosphere, and astronomers must use clever technology to improve image quality artificially.

To achieve this, four brilliant lasers fixed to UT4 project columns of intense orange light 30 centimetres in diameter into the sky, stimulating sodium atoms high in the atmosphere and creating artificial Laser Guide Stars. Adaptive optics systems use the light from these "stars" to determine the turbulence in the atmosphere and calculate corrections one thousand times per second, commanding the thin, deformable secondary mirror of UT4 to constantly alter its shape, correcting for the distorted light.

MUSE is not the only instrument to benefit from the Adaptive Optics Facility. Another adaptive optics system, GRAAL, is already in use with the infrared camera HAWK-I. This will be followed in a few years by the powerful new instrument ERIS. Together these major developments in adaptive optics are enhancing the already powerful fleet of ESO telescopes, bringing the Universe into focus.

This new mode also constitutes a major step forward for ESO's Extremely Large Telescope, which will need Laser Tomography to reach its science goals. These results on UT4 with the AOF will help bring the ELT's engineers and scientists closer to implementing similar adaptive optics technology on the 39-metre giant.

Credit: 
ESO