Tech

Kilauea eruption fosters algae bloom in North Pacific Ocean

image: During Kilauea's massive eruption in 2018, a super bloom of algae appeared and stretched for miles. USC Dornsife geochemists and University of Hawaii researchers investigated what led to the growth.

Image: 
US Coast Guard

Volcanoes are often feared for their destructive power, but a new study reminds us that they can foster new growth.

A year ago in July, researchers from the USC Dornsife College of Letters, Arts and Sciences and the University of Hawaii rushed out on a boat to Hawaii's famous Kīlauea volcano to collect samples of the surrounding North Pacific Ocean. They were trying to determine why so much algae had begun growing in the water as approximately 1 billion tonnes of hot lava poured in.

When looking at NASA's satellite photos of the eruption, the scientists noticed that the ocean water around the volcano was turning green. The satellite had detected huge amounts of chlorophyll, the green pigment in algae and other plants that converts light into energy.

The new study, published Sept. 6 in the journal Science, shows that the green plume in the ocean around the volcano contained the perfect cocktail for plant growth -- a fertile mix of higher nitrate levels, silicic acid, iron and phosphate.

"There was no reason for us to expect that an algae bloom like this would happen," said geochemist Seth John, assistant professor of Earth sciences at USC Dornsife and an author of the study. "Lava doesn't contain any nitrate."

Nitrogen is a natural fertilizer for plants, even on land. With such rich conditions, the algae bloom exploded, expanding as far as hundreds of miles out into the Pacific Ocean, the researchers said.

"Usually, whenever an algae grows and divides, it gets eaten up right away by other plankton," said study co-author and USC Dornsife post-doctoral researcher Nicholas Hawco. "The only way you get this bloom is if there is an imbalance."

The researchers believe that the nitrogen likely was stirred up from the deep ocean. As the hot lava poured in, it forced an upwelling of colder, deep ocean water. When the water rose, it carried nitrogen and other particles to the surface that helped the algae grow.

"All along the coast of California, there is regular upwelling," said John. "All the kelp beds and marine creatures that inhabit those ecosystems are basically driven by those currents that draw fertilizing nutrients up from deep water to the surface. That is essentially the same process that we saw in Hawai?i, but faster."

Credit: 
University of Southern California

Study offers new insights on impacts of crop trading in China

FROSTBURG, MD (September 5, 2019)--Feeding the world's growing population is one of the great challenges of the 21st century. This challenge is particularly pressing in China, which has 22% of the world's population but only 7% of the global cropland. Synthetic nitrogen fertilizer has been intensively used to boost crop yields in China, but more than 60% of it has been lost, causing severe environmental problems such as air pollution, eutrophication of lakes and rivers, and soil degradation.

In a recent study, scientists from the University of Maryland Center for Environmental Science used historical records to shed light on sustainability policies for balancing food demand, crop production, trade expenditure, and the environmental degradation associated with food production in China.

"It is critical to understand and quantify the trade-offs of using international trade as one of the strategies for resolving food demand and environmental challenges," said study co-author Xin Zhang of the University of Maryland Center for Environmental Science. "The lack of systematic approaches for assessing the impacts of trade on sustainability has been preventing us from understanding the synergies and trade-offs among different environmental and socioeconomic concerns related to trade and crop production."

The study is among the few to consider both the socioeconomic and environmental impacts of crop trading, and one of the first to consider the impact of crop mixes in an import portfolio and domestic production.

"Economic costs to alleviate environmental pollutions caused by producing export crops could be comparable to the economic benefits brought by trade," said co-author Guolin Yao.

Focusing on China's crop production and trade over 1986-2015, scientists evaluated the impacts of trade from several perspectives, including environmental (such as nitrogen pollution and land use), social (for example, crop self-sufficiency for a country) and economic (such as trade expenditure and environmental damage cost). Their findings show that crop imports can relieve nitrogen pollution and land-use pressure in China and the world but not without adding environmental burdens to other countries and exposing China's food availability to the risks of the international market or unstable bilateral trade relationships. They also found that the environmental damage costs of nitrogen pollution avoided by importing crops in China are less than current trade expenditure, but may reach or surpass it as China's economy develops.

"This paper proposes new concepts of 'alternative' nitrogen and land. If China has to produce imported crops domestically, then how much nutrients and land is needed? Since the nitrogen-use efficiency and crop yield is usually lower in China than countries with better technologies or more favorable environments, reallocating production in those countries can provide relief on environment in China and the world," Zhang said. "The international food trade could mitigate environmental degradation by coordinating global food supply and demand. "

China increasingly relies on agricultural imports, driven by its rising population and income, as well as dietary shifts. International trade offers an opportunity to relieve pressures on resource depletion and pollution, while it poses multiple socioeconomic challenges, such as food availability.

"Food trade can reduce global nitrogen inputs by redistributing the production of commodities to regions more efficient in nitrogen use," said Zhang.

Globalization of food trade can help alleviate the pressure of increasing food demands and subsequent nitrogen pollution, and a diverse and carefully designed crop trade portfolio can protect a country against local disruptions and shortfalls in production. Currently, 23% of the food produced for human consumption is traded internationally, tending to flow from regions with high production efficiency to less efficient regions.

"We are not only assessing what happened in last 30 years but also what are the potentials of China to relieve nitrogen pollution with adjusting their trade portfolio," she said. "we found such potentials are less than but comparable to the nitrogen mitigation potentials by improving technologies and practices for nutrient management."

Credit: 
University of Maryland Center for Environmental Science

Unique report details dermatological progression and effective treatment of a severe jellyfish sting

image: Envenomation Day 2

Image: 
Wilderness & Environmental Medicine

Philadelphia, September 5, 2019 - A detailed case report and comprehensive sequence of photographs in Wilderness & Environmental Medicine, published by Elsevier, document the dermatological progression of a patient stung by a jellyfish off the coast of Cambodia. The aim of this report is to guide clinicians and patients to understand what to expect after such a sting and steps to increase the probability of a full recovery.

Jellyfish stings (envenomations) are a common affliction of ocean goers worldwide. Although many are simply a nuisance, some can be very severe or even fatal. The most severe non-anaphylactic reactions are caused by jellyfish species that inhabit Indo-Pacific waters. Examples of species known to be clinically dangerous include the box jellyfish (Chironex fleckeri), sea nettle (Chrysaora quinquecirrha), and Irukandji (Carukia barnesi).

"In this report, we provide written and visual details of the natural progression of a probable box jellyfish envenomation that originated in waters off Cambodia," explained lead author Paul S. Auerbach, MD, Department of Emergency Medicine, Stanford University School of Medicine, Palo Alto, CA, USA. "We document a comprehensive visual demonstration of what patients and clinicians should expect after a severe sting by box jellyfish or similarly injurious species."

The patient (a co-author of this report) was a 21-year-old woman in good health at the time of injury in early June 2019. She was stung while swimming off the coast of Koh Ta Kiev, Cambodia, in chest-deep water. She experienced immediate and intense burning pain to her right leg that gradually subsided over the next 10 hours; bright red and swollen streaks where tentacles had contacted the skin of the right thigh; and, after leaving the water, lightheadedness with a brief loss of consciousness within 20 minutes of the sting. She did not see the jellyfish well enough to precisely identify the species, but multiple species of box-type jellyfish are known to live in these waters.

With worsening symptoms, the patient travelled to Siem Reap where she was admitted to Royal Angkor International Hospital overnight. She then travelled to Bali. Dr. Auerbach telephoned the patient and recommended that she return to California promptly to receive the medical care necessary to attempt healing without complications and gave her detailed advice for managing the wound during the journey.

After arriving in the United States, the patient was seen by a dive medicine specialist physician and a plastic surgeon. Physical examination was consistent with superficial and deep partial thickness burns, possibly with near full thickness injury in some areas. She was advised to continue washing the wounds and to clean and dress the wounds using silver sulfadiazine topical cream in the deeper areas and Leptospermum honey (Medihoney) in the shallower areas. The immediate goals were to keep the wounds clean, prevent infection, maintain moisture to facilitate healing, and allow for serial examinations to determine the best method for wound closure.

After 60 days the wound was fully closed and healed. Moisturizing lotion and sun protection measures were continued to optimize scar maturation. The patient was counseled that she might be a candidate for elective scar revision, steroid treatment, and/or laser therapy in the future.

"With knowledgeable consultation from experienced physicians and meticulous care, this injury healed without the need for skin grafting," reported Dr. Auerbach.

"The authors have provided an exceptionally detailed photographic timeline of the case," added Neal W. Pollock, PhD, Department of Kinesiology, Laval University, Quebec, QC, Canada, and Editor-in-Chief of Wilderness & Environmental Medicine. "It provides excellent documentation of the evolution that will rarely be seen."

Credit: 
Elsevier

Innovative technique for labeling and mapping inhibitory neurons reveals diverse tuning profile

Neurons are complex, highly connected cells engaged with multiple networks throughout the brain, and they exhibit a wide range of activity. As such, individual neurons can perform many functions. Neurons are generally classified as either excitatory or inhibitory based on their downstream effects on other cells, with each cell receiving a diverse array of excitatory and inhibitory synaptic inputs that help shape that cell's unique properties.

In a recent study, researchers at the Max Planck Florida Institute for Neuroscience revealed that inhibitory inputs to neurons in the visual cortex are more diverse than previously thought, suggesting that our current notion of neuronal connectivity may reflect only part of the whole picture. The team explored how neurons are wired together and what effect these connections have on neuronal properties. Using genetic tools, imaging techniques, and optogenetics, they showed that inhibitory inputs onto single neurons can deviate from the canonical view of cortical circuits. The presence of this surprising, differentially tuned inhibition suggests that cortical connectivity is more flexible than originally assumed, allowing for multiplexed computations.

Few studies have mapped inhibitory inputs onto neurons within intact brain circuits. Despite the wide variety of techniques to visualize excitatory connections, there are almost none readily available to study co-occurring inhibitory connections. Dr. Benjamin Scholl, Senior Research Scientist in the lab of Dr. David Fitzpatrick, and Dr. Daniel Wilson, now a postdoctoral researcher at Harvard Medical School, developed a strategy for labeling and mapping local inhibitory inputs onto a cell. They expressed a fluorescent protein specifically in inhibitory neurons, taking advantage of genetic markers to target only these cells, and combined whole-cell patch-clamp recordings with patterned stimulation of neurons to record their individual activity. In the same cells, they also measured their selectivity for different orientations of moving edges.

Scholl and colleagues found that the selectivity of inhibitory inputs may parallel or completely diverge from that of target neurons, revealing a "diverse palette" of inhibition. Previously, it was thought that these inputs should all be co-tuned, with aligned functional preferences. Data from this study suggests that, depending on network activation, inhibitory cells with different tuning profiles are able to uniquely contribute and allow for flexibility of network responses. "These networks are highly interconnected and dynamic, and these studies are beginning to show us that the functional connectivity we hope to uncover is more complicated than we previously believed," remarks Scholl. Further, understanding the anatomical connectivity, or "connectome," may not be entirely sufficient to understand brain circuits, emphasizing the need to map and elucidate functional connectomes in the brain.

The rules governing neuronal tuning are in no way simple, with different stimulus conditions evoking different patterns of excitation and inhibition, and there is much more about the intricacies of the visual system that has yet to be uncovered. But advancements in labeling and imaging techniques such as that outlined in Scholl's paper open the door for future examination of synaptic inputs to individual neurons. "We have to appreciate the full complexity of how individual neurons are engaged in circuits," states Fitzpatrick. "The power of this paper lies in the technology that allows us to record from individual neurons while presenting a visual stimulus and selectively activating inhibitory neurons." Scholl and the Fitzpatrick lab hope to understand how excitation and inhibition are used flexibly to encode information, including during early development. Subsequent studies may explore the role of experience in shaping neuronal networks and the varying impact of individual neurons under different stimuli or contexts. The lab also hopes to develop new techniques for better resolution imaging and precise stimulation of single cells to further characterize the role of inhibitory neurons in the visual cortex.

Credit: 
Max Planck Florida Institute for Neuroscience

Research reveals new plan to maximize rideshare availability by routing empty cars

CATONSVILLE, MD, September 5, 2019 - Time is money. Especially for rideshare drivers with companies like Uber and Lyft. New research in the INFORMS journal Operations Research looks at a new model for rideshare companies focusing on maximizing the availability of rideshares by optimally routing empty cars.

All rideshares start off waiting for a passenger. When a passenger arrives, if an empty car is available, the passenger occupies that car and travels to their destination. If no empty car is available within a short period of time, the passenger abandons the rideshare option and tries an alternative form of transportation. So how do you ensure a car is available?

If an empty rideshare is available and the driver takes the passenger to the destination, the car is empty again, at which point a decision has to be made: should the driver stay there to hopefully find new passengers immediately or relocate without a passenger to a different place, risking time spent driving without a passenger and without getting paid?

Driving empty cars seems like a waste of resources, but it turns out to be essential for maximizing the availability of rideshare services in the presence of geographic imbalances in passenger demand, according to the study conducted by Anton Braverman of the Kellogg School of Management at Northwestern University, Jim Dai of The Chinese University of Hong Kong, Shenzhen and Cornell University, Lei Ying of the University of Michigan and Xin Liu of Arizona State University.

The researchers developed a model to control the flow of empty cars in a network of geographic locations with different passenger demands, in order to optimize system-wide availability. After a car drops off a passenger, the algorithm directs the empty car to a new location, rather than having it wait where it is, to counteract the geographic imbalance of passenger demand.
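
The paper's model is considerably richer than can be shown here, but the core idea of routing empty cars to offset geographic imbalance can be sketched as a small flow-balance linear program. This is a hedged illustration with made-up demand rates and travel times, not the authors' formulation:

```python
import numpy as np
from scipy.optimize import linprog

# Choose empty-car flow rates f[i][j] between locations so that cars
# freed up where demand is low are routed to where demand is high,
# minimizing total empty travel time. All numbers are hypothetical.
n = 3
demand = np.array([10.0, 4.0, 2.0])  # passengers/hour requesting a car
freed = np.array([2.0, 6.0, 8.0])    # cars/hour becoming empty locally
travel = np.array([[0.0, 1.0, 2.0],
                   [1.0, 0.0, 1.0],
                   [2.0, 1.0, 0.0]])  # empty-car travel times

c = travel.flatten()  # cost: time spent driving empty
A_eq = np.zeros((n, n * n))
for i in range(n):
    for j in range(n):
        A_eq[i, j * n + i] += 1.0  # flow j -> i arrives at i
        A_eq[i, i * n + j] -= 1.0  # flow i -> j leaves i
b_eq = demand - freed  # net inflow must cover each local shortfall

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (n * n))
print(res.x.reshape(n, n))  # optimal empty-car flow rates
```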

"You can calculate the availability of empty cars when they are requested and find new passengers quickly once a driver reaches a destination," said Braverman. "The new car flow control policy is based on historical data for time-dependent futurecast to anticipate route changes and direct cars accordingly."

This model assumes the cars are controlled by the company rather than allowing drivers to make decisions on their own.

"An incentive can be that the company will pay for fuel costs or a flat hourly salary when a car is driving empty, which has been recently experimented by several rideshare companies," said Braverman.

Credit: 
Institute for Operations Research and the Management Sciences

NASA-NOAA satellite finds wind shear pushing on Tropical Storm Gabrielle

image: NASA-NOAA's Suomi NPP satellite passed over Tropical Storm Gabrielle and the VIIRS instrument aboard captured this image of the storm on Sept. 5 at 12:18 a.m. EDT (0418 UTC). Suomi NPP found strongest thunderstorms north of the center had cloud top temperatures as cold as minus 70 degrees Fahrenheit (minus 56.6 Celsius).

Image: 
NASA/NOAA/NRL

NASA-NOAA's Suomi NPP satellite passed over Tropical Storm Gabrielle in the eastern Atlantic Ocean, and infrared data revealed that the storm was being adversely affected by wind shear, which was pushing its strongest storms northeast of its center.

NASA-NOAA's Suomi NPP satellite used infrared light to analyze the strength of storms in Tropical Storm Gabrielle. Infrared data provides temperature information, and the strongest thunderstorms that reach high into the atmosphere have the coldest cloud top temperatures.

On Sept. 5 at 12:18 a.m. EDT (0418 UTC), the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument aboard Suomi NPP found strongest thunderstorms northeast of the center had cloud top temperatures as cold as minus 70 degrees Fahrenheit (minus 56.6 Celsius). Cloud top temperatures that cold indicate strong storms with the potential to generate heavy rainfall. The southern quadrant of the storm appeared to be almost devoid of clouds because of outside winds blowing from the southwest, or southwesterly vertical wind shear.

In general, wind shear is a measure of how the speed and direction of winds change with altitude. Tropical cyclones are like rotating cylinders of winds. Each level needs to be stacked on top of the others vertically for the storm to maintain strength or intensify. Wind shear occurs when winds at different levels of the atmosphere push against the rotating cylinder of winds, weakening the rotation by pushing it apart at different levels.
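
As a minimal numeric sketch of that definition, vertical wind shear can be computed as the vector difference between winds at an upper and a lower level (the wind components below are made-up values):

```python
import numpy as np

# Vertical wind shear: vector difference between upper- and
# lower-level winds. Components (east, north) are illustrative.
u_low, v_low = 5.0, 2.0     # lower-level wind, m/s
u_up, v_up = -8.0, 10.0     # upper-level wind, m/s

shear_u, shear_v = u_up - u_low, v_up - v_low
print(f"shear magnitude: {np.hypot(shear_u, shear_v):.1f} m/s")
```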

NOAA's National Hurricane Center (NHC) noted in the discussion on Sept. 5, "Although Gabrielle could experience some intensity fluctuations during the next 24 hours, the cyclone should remain in a rather harsh environment during the next 36 to 48 hours, due to south to southwesterly vertical [wind] shear, some dry air in the middle portions of the atmosphere, and oceanic sea surface temperatures on the order of 25 to 26 Celsius."  Afterward, gradual strengthening is forecast as Gabrielle moves into a more favorable environment.

At 5 a.m. EDT (0900 UTC) NOAA's National Hurricane Center (NHC) said the center of Tropical Storm Gabrielle was located near latitude 21.9 degrees north and longitude 35.0 degrees west. That's about 825 miles (1,330 km) west-northwest of the Cabo Verde Islands. Gabrielle is moving toward the northwest near 8 mph (13 kph), and this motion is expected to continue for the next few days with an increase in forward speed. Maximum sustained winds remain near 50 mph (85 kph) with higher gusts.  The estimated minimum central pressure is 1002 mb (29.59 inches).

The NHC said, "Little change in strength is forecast during the next couple of days.  Afterward, some slow strengthening is expected to begin by this weekend."

For updated forecasts, visit: http://www.nhc.noaa.gov

Credit: 
NASA/Goddard Space Flight Center

Satellite finds a 'hook' of heavy rainfall in Hurricane Juliette

image: The GPM core satellite passed over Juliette on Sept. 5 at 8:46 a.m. EDT (1246 UTC). GPM found the heaviest rainfall (purple) in the northwestern portion of the band of thunderstorms circling the center, where it was falling at a rate of over 36 mm (about 1.4 inches) per hour. Heavy rainfall of about 25 mm (1 inch) per hour (in yellow and orange) stretched east and south in that same band circling the eye of the storm, giving the appearance of a hook around the storm's eye.

Image: 
NASA/JAXA/NRL

When the Global Precipitation Measurement mission or GPM core satellite passed over the Eastern Pacific Ocean from its vantage point in orbit around the Earth, it gathered data on rainfall rates occurring in Hurricane Juliette. The areas of strongest rainfall resembled a hook.

The GPM satellite passed over Juliette on Sept. 5 at 8:46 a.m. EDT (1246 UTC). GPM found the heaviest rainfall in the northwestern portion of the band of thunderstorms circling the center, where it was falling at a rate of over 36 mm (about 1.4 inches) per hour. Heavy rainfall of about 25 mm (1 inch) per hour stretched east and south in that same thunderstorm band circling the eye of the storm, giving the appearance of a hook around the storm's eye. GPM is a joint mission between NASA and the Japan Aerospace Exploration Agency, JAXA.

At 5 a.m. EDT (2 a.m. PDT) on Sept. 5, the National Hurricane Center (NHC) noted, "Juliette's cloud pattern has changed little during the past several hours.  If anything, the spiral bands appear to have improved a bit in the western portion of the cyclone." At that time, the center of Hurricane Juliette was located near latitude 20.2 degrees north and longitude 119.1 degrees west. That's about 620 miles (995 km) west-southwest of the southern tip of Baja California, Mexico.

Juliette is moving toward the northwest near 9 mph (15 kph). The hurricane is expected to move to the west-northwest at a slightly faster forward speed on Friday and should continue this motion through Saturday. Maximum sustained winds remain near 90 mph (150 kph) with higher gusts. The estimated minimum central pressure is 976 millibars.

Gradual weakening is forecast to resume today [Sept. 5] as the storm is expected to move over cooler sea surface temperature and encounter dry air and outside winds (vertical wind shear). Further weakening is expected during the next several days.  Juliette should weaken to a tropical storm on Friday.

For updated forecasts, visit: http://www.nhc.noaa.gov

Credit: 
NASA/Goddard Space Flight Center

NASA finds a few strong storms left in Fernand's remnants over Northeastern Mexico

image: On Sept. 5 at 4:20 a.m. EDT (0820 UTC) the MODIS instrument that flies aboard NASA's Aqua satellite showed strongest storms (red) in fragmented thunderstorms in the remnants of Fernand over northeastern Mexico. There, cloud top temperatures were as cold as minus 70 degrees Fahrenheit (minus 56.6 Celsius).

Image: 
NASA/NRL

Tropical Storm Fernand made landfall in northeastern Mexico and began dissipating. However, infrared imagery from NASA's Aqua satellite shows that there are still fragmented strong storms left in the tropical cyclone's remnants. Those storms have the potential to generate heavy rainfall and there were warnings posted on Sept. 5.

NASA's Aqua satellite used infrared light to analyze the strength of storms in the remnants of Tropical Storm Fernand. Infrared data provides temperature information, and the strongest thunderstorms that reach high into the atmosphere have the coldest cloud top temperatures.

On Sept. 5 at 4:20 a.m. EDT (0820 UTC), the Moderate Resolution Imaging Spectroradiometer, or MODIS, instrument that flies aboard NASA's Aqua satellite found that the strongest thunderstorms had cloud top temperatures as cold as minus 70 degrees Fahrenheit (minus 56.6 Celsius). Cloud top temperatures that cold indicate strong storms with the potential to generate heavy rainfall.

The Servicio Meteorológico Nacional (SMN) is Mexico's national weather service. On Sept. 5, SMN issued several warnings for rainfall from Fernand's remnants. SMN forecasters expect rainfall will total up to 15 inches over northeastern Mexico. In Tamaulipas, less than one additional inch is expected. However, in central and southern Nuevo Leon, another 3 to 6 inches is expected. In northern Nuevo Leon and southern Coahuila, 2 to 5 inches are forecast, and south Texas and the lower Texas coast can expect 1 to 2 inches, with isolated totals near 6 inches.

NOAA's National Hurricane Center (NHC) issued the final advisory on the remnants of Fernand at 0300 UTC on Sept. 5 (11 p.m. EDT on Sept. 4). At that time, the remnants were centered near 23.0 degrees north latitude and 99.0 degrees west longitude, about 130 miles west-southwest of the mouth of the Rio Grande. The remnants were moving to the west-northwest with maximum sustained winds near 30 knots (34.5 mph/55.5 kph).

Fernand is expected to dissipate over the next day or two.

Credit: 
NASA/Goddard Space Flight Center

NASA measures Dorian's heavy rainfall from Bahamas to Carolinas

image: This false-colored infrared image taken from NASA's Aqua satellite on Sept. 4 at 2:29 p.m. EDT (18:29 UTC) shows Dorian after it re-strengthened into a Category 3 hurricane off the Georgia and South Carolina coast. The image shows a clear eye in the storm. Intense storms capable of generating heavy rainfall appear in purple.

Image: 
NASA JPL/Heidar Thrastarson

Hurricane Dorian continues to generate tremendous amounts of rainfall. It has left over three feet of rain in some areas of the Bahamas and is now lashing the Carolinas. NASA's IMERG product provided a look at those rainfall totals.

By Thursday morning, September 5, Hurricane Dorian had dumped heavy rain on coastal South Carolina.  An even greater accumulation of over 10 inches was occurring offshore along the path of Dorian's inner core.  In part because of Hurricane Dorian's forward motion during the past two days, the recent rainfall totals have remained below the 36-inch accumulation observed when Dorian was stalled over the Bahamas.

NASA has the ability to estimate the rainfall rates occurring in a storm or how much rain has fallen. Rainfall imagery was generated using the Integrated Multi-satEllite Retrievals for GPM or IMERG product at NASA's Goddard Space Flight Center in Greenbelt, Maryland. These near-realtime rain estimates come from the NASA IMERG algorithm, which combines observations from a fleet of satellites in the GPM or Global Precipitation Measurement mission constellation of satellites, and is calibrated with measurements from the GPM Core Observatory as well as rain gauge networks around the world. The measurements are done in near-real time, to provide global estimates of precipitation every 30 minutes.

The storm-total rainfall at a particular location varies with the forward speed of the hurricane, with the size of the hurricane's wind field, and with how vigorous the updrafts are inside the hurricane.

Warnings and Watches on Sept. 5

NOAA's National Hurricane Center noted the following warnings and watches on Sept. 5. A Storm Surge Warning is in effect from the Savannah River to Poquoson, VA, Pamlico and Albemarle Sounds, Neuse and Pamlico Rivers and Hampton Roads, VA.  A Hurricane Warning is in effect from the Savannah River to the North Carolina/Virginia border and for the Pamlico and Albemarle Sounds. A Tropical Storm Warning is in effect from the North Carolina/Virginia border to Chincoteague, VA, and for the Chesapeake Bay from Smith Point southward. A Tropical Storm Watch is in effect for north of Chincoteague VA to Fenwick Island, DE and the Chesapeake Bay from Smith Point to Drum Point, the Tidal Potomac south of Cobb Island, Woods Hole to Sagamore Beach, MA, Nantucket and Martha's Vineyard, MA.

NHC:  Dorian's Status on Sept. 5

NHC's latest bulletin at 8 a.m. EDT (1200 UTC) noted the eye of Hurricane Dorian was located near latitude 32.1 degrees North, longitude 79.3 degrees West. That puts the eye of Dorian about 70 miles (115 km) south-southeast of Charleston, South Carolina. Dorian is now moving toward the north-northeast near 8 mph (13 kph).   Maximum sustained winds are near 115 mph (185 km/h) with higher gusts.  Dorian is a category 3 hurricane on the Saffir-Simpson Hurricane Wind Scale.  Some fluctuations in intensity are expected this morning, followed by slow weakening through Saturday. However, Dorian is expected to remain a hurricane for the next few days. Hurricane-force winds extend outward up to 60 miles (95 km) from the center, and tropical-storm-force winds extend outward up to 195 miles (315 km). Charleston International Airport recently reported a wind gust of 61 mph (98 kph). The estimated minimum central pressure based on Air Force Reserve Hurricane Hunter data is 959 millibars.

Dorian's Forecast Path

The National Hurricane Center forecast calls for Dorian to turn toward the northeast by tonight, and a northeastward motion at a faster forward speed is forecast on Friday.  On the forecast track, the center of Dorian will continue to move close to the coast of South Carolina today, and then move near or over the coast of North Carolina tonight and Friday.  The center should move to the southeast of extreme southeastern New England Friday night and Saturday morning, and approach Nova Scotia later on Saturday.

For updated forecasts, visit NOAA's NHC: http://www.nhc.noaa.gov

For more info on Dorian's rainfall from the Precipitation Measurement missions:  Hurricane Dorian Brings Heavy Rain to Bahamas

Credit: 
NASA/Goddard Space Flight Center

Study shows exposure to multiple languages may make it easier to learn one

Learning a new language is a multi-step, often multi-year process: Listen to new sounds, read new word structures, speak in different patterns or inflections.

But the chances of picking up that new language -- even unintentionally -- may be better if you're exposed to a variety of languages, not just your native tongue.

A new study from the University of Washington finds that, based on brain activity, people who live in communities where multiple languages are spoken can identify words in yet another language better than those who live in a monolingual environment.

"This study shows that the brain is always working in the background. When you're overhearing conversations in other languages, you pick up that information whether you know it or not," said Kinsey Bice, a postdoctoral fellow in the UW Department of Psychology and the Institute for Learning and Brain Sciences and lead author of the study, which is published in the September issue of the journal Brain and Language.

The finding itself came about somewhat by happenstance, explained Bice. The research launched in a community where English was the predominant language, but a cross-country lab move -- for unrelated reasons -- resulted in an additional study sample in a community with a diversity of languages.

Yet the task for participants remained the same: Identify basic words and vowel patterns in an unfamiliar language -- in this case, Finnish. While some of the classroom test results were similar between the two groups, the brain activity of those in the diverse-language setting was measurably greater when it came to identifying words they hadn't seen before.

The work started in the community around Pennsylvania State University. According to Census data, the surrounding county is 85% white, and statewide, about 10% of residents speak a language other than English at home. For this study, researchers enrolled 18 people who were "functionally monolingual," based on their self-professed lack of proficiency in any language other than English.

Then, faculty author Judith Kroll, now of the University of California, Irvine, had an opportunity to relocate her lab to Southern California, and the team decided to see what results the experiment might yield there.

The second study sample of 16 people -- all monolingual English speakers -- came from the community around the University of California, Riverside. Census data show that 35% of the surrounding county is white, and statewide, 44% of people ages 5 and older live in households where English is not the primary language.

The researchers chose Finnish because it wasn't common to either study location and relies on vowel-harmony rules that can be challenging for learners. Essentially, the vowels "ä," "ö" and "y" -- known as "front vowels" because they are formed in the front of the speaker's mouth -- cannot appear in the same words as the "back vowels": "a," "o" and "u." For instance, "lätkä," the word for "hockey," contains only front vowels, while "naula," the word for "nail," contains only back vowels.
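
The front/back rule lends itself to a compact check. The sketch below implements just the rule as described above; the function name and examples are ours, not part of the study's materials:

```python
FRONT_VOWELS = set("äöy")  # formed at the front of the mouth
BACK_VOWELS = set("aou")   # formed at the back of the mouth

def obeys_vowel_harmony(word: str) -> bool:
    """Return True unless the word mixes front and back vowels."""
    letters = set(word.lower())
    return not (letters & FRONT_VOWELS and letters & BACK_VOWELS)

print(obeys_vowel_harmony("lätkä"))   # True: front vowels only
print(obeys_vowel_harmony("naula"))   # True: back vowels only
print(obeys_vowel_harmony("läula"))   # False: made-up word mixing both
```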

Across two hour-long sessions, the participants were introduced to 90 Finnish vocabulary words through cards labeled with the word, a picture of what the word represented and an audio recording of a native speaker pronouncing the word. They also were asked to distinguish between nonsense and actual Finnish words to help them infer the vowel patterns. At the conclusion of training, participants were tested on words they had learned, as well as new and "fake" Finnish words. For the test portion, participants wore a headpiece equipped with special sensors that measure brain activity by detecting minute electrical signals on the scalp, a noninvasive technique called electroencephalography (EEG).

Both groups demonstrated similar abilities to identify the Finnish words they had studied and to determine whether those words differed from the nonsense words they encountered during the training sessions. Neither group, however, was particularly good at telling the difference between real words and nonsense words that they had not seen before.

The EEG results, however, showed that the brains of the California participants, when shown the unknown words (real and nonsense), could tell the difference.

Brain and behavior data measure different time scales of information, she explained. Neurological measures show, millisecond by millisecond, how the brain processes what a person perceives. Behavioral measures can show a slight delay when compared to what's happening in the brain, because cognitive processes like decision-making and retrieving information from memory occur before a person answers a question or takes some kind of action, Bice said.

The results suggest an effect of ambient exposure to other languages, Bice said. The groups were generally matched in terms of demographics and their proficiency in other languages; the only differences were the socioeconomic status and the language environment. If anything, the higher levels of education and income in the Pennsylvania community normally would be associated with greater language learning. That leaves the environment.

The difference in brain activity among the California participants is reminiscent of past research at the UW that shows neurological results often "outpace" behavioral, or classroom, results, Bice said.

In the end, because of the lab relocation, the study findings were serendipitous, Bice said. Further research could more formally control for various factors and expand the study pool. But this study shows the ways the human brain may absorb another language, itself a useful skill in a globalizing society, she added.

"It's exciting to be reminded that our brains are still plastic and soaking in information around us, and we can change ourselves based on the context we place ourselves in," Bice said.

Credit: 
University of Washington

Bots might prove harder to detect in 2020 elections

USC Information Sciences Institute (USC ISI) computer scientist Emilio Ferrara has new research indicating that bots, or fake accounts enabled by artificial intelligence on social media, have evolved and are now better able to copy human behaviors in order to avoid detection.

In the journal First Monday, research by Ferrara and colleagues Luca Luceri (Scuola Universitaria Professionale della Svizzera Italiana), Ashok Deb (USC ISI) and Silvia Giordano (Scuola Universitaria Professionale della Svizzera Italiana) examines bot behavior during the US 2018 elections compared to bot behavior during the US 2016 elections.

The researchers studied almost 250,000 active social media users who discussed the US elections in both 2016 and 2018, and detected over 30,000 bots. They found that bots in 2016 were primarily focused on retweets and high volumes of tweets around the same message. However, as human social activity online has evolved, so have bots. In the 2018 election season, just as humans were less likely to retweet as much as they did in 2016, bots were less likely to share the same messages in high volume.

Bots, the researchers discovered, were more likely to employ a multi-bot approach, as if to mimic authentic human engagement around an idea. Also, during the 2018 elections, as humans were much more likely to try to engage through replies, bots tried to establish a voice, add to the dialogue, and engage through the use of polls, a strategy typical of reputable news agencies and pollsters, possibly aiming to lend legitimacy to these accounts.

In one example, a bot account posted an online Twitter poll asking if federal elections should require voters to show ID at the polls. It then asked Twitter users to vote and retweet.

Lead author Emilio Ferrara noted, "Our study further corroborates the idea that there is an arms race between bots and detection algorithms. As social media companies put more effort into mitigating abuse and stifling automated accounts, bots evolve to mimic human strategies. Advancements in AI enable bots to produce more human-like content. We need to devote more effort to understanding how bots evolve and how more sophisticated ones can be detected. With the upcoming 2020 US elections, the integrity of social media discourse is of paramount importance to allow a democratic process free of external influences."

Credit: 
University of Southern California

NIST team shows atoms can receive common communications signals

image: Wireless communications often use a format called phase shifting or phase modulation, in which the signals are shifted relative to one another in time. In this example, the communications signal (blue) contains periodic reversals relative to the reference signal (red). These reversals are the blips that look like cats' ears. The information (or data) is encoded in this modulation.

Image: 
Holloway/NIST

Researchers at the National Institute of Standards and Technology (NIST) have demonstrated a new type of sensor that uses atoms to receive commonly used communications signals. This atom-based receiver has the potential to be smaller and work better in noisy environments than conventional radio receivers, among other possible advantages.

The NIST team used cesium atoms to receive digital bits (1s and 0s) in the most common communications format, which is used in cell phones, Wi-Fi and satellite TV, for example. In this format, called phase shifting or phase modulation, radio signals or other electromagnetic waves are shifted relative to one another over time. The information (or data) is encoded in this modulation.
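
As a rough, self-contained illustration of the format (ordinary digital signal processing, not NIST's atom-based receiver), the sketch below encodes bits with binary phase-shift keying, flipping the carrier's phase by 180 degrees for a 1, and decodes them by correlating against the reference; all parameters are arbitrary:

```python
import numpy as np

# Binary phase-shift keying: a '1' shifts the carrier phase by pi
# relative to the reference signal. Parameters are illustrative.
fs = 1_000                 # samples per second
f_carrier = 50             # carrier frequency, Hz
samples_per_bit = 200
bits = [1, 0, 1, 1, 0]

t = np.arange(samples_per_bit) / fs
reference = np.concatenate([np.cos(2 * np.pi * f_carrier * t)
                            for _ in bits])
signal = np.concatenate([np.cos(2 * np.pi * f_carrier * t + np.pi * b)
                         for b in bits])

# Decode: a negative correlation with the reference means the phase
# was flipped, i.e. the symbol is a 1.
decoded = []
for i in range(len(bits)):
    seg = slice(i * samples_per_bit, (i + 1) * samples_per_bit)
    decoded.append(int(np.dot(signal[seg], reference[seg]) < 0))
print(decoded)  # [1, 0, 1, 1, 0]
```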

"The point is to demonstrate one can use atoms to receive modulated signals," project leader Chris Holloway said. "The method works across a huge range of frequencies. The data rates are not yet the fastest out there, but there are other benefits here, like it may work better than conventional systems in noisy environments."

As described in a new paper, the quantum sensor received signals based on real-world phase-shifting methods. A 19.6 gigahertz transmission frequency was chosen because it was convenient for the experiment, but it also could be used in future wireless communications systems, Holloway said.

The NIST team previously used the same basic technique for imaging and measurement applications. Researchers use two different color lasers to prepare atoms contained in a vapor cell into high-energy ("Rydberg") states, which have novel properties such as extreme sensitivity to electromagnetic fields. The frequency of an electric field signal affects the colors of light absorbed by the atoms.

In the new experiments, the team used a recently developed atom-based mixer to convert input signals into new frequencies. One radio-frequency (RF) signal acts as a reference and a second RF signal serves as the modulated signal carrier. Differences in frequency and the offset between the two signals were detected and measured by probing the atoms.

While many researchers have previously shown that atoms can receive other formats of modulated signals, the NIST team was the first to develop an atom-based mixer that could handle phase shifting.

Depending on the encoding scheme, the atom-based system received up to about 5 megabits of data per second. This is close to the speed of older, third-generation (3G) cell phones.

The researchers also measured the accuracy of the received bit stream based on a conventional metric called error vector magnitude (EVM). EVM compares a received signal phase to the ideal state and thus gauges modulation quality. The EVM in the NIST experiments was below 10 percent, which is decent for a first demonstration, Holloway said. This is comparable to systems deployed in the field, he added.
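
One common way to compute EVM is the root-mean-square error between received and ideal symbols, normalized by the RMS of the ideal symbols. The sketch below uses made-up symbols, not NIST's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ideal BPSK constellation points and a noisy received version.
ideal = np.array([1, -1, 1, -1], dtype=complex)
noise = 0.05 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
received = ideal + noise

evm = np.sqrt(np.mean(np.abs(received - ideal) ** 2)
              / np.mean(np.abs(ideal) ** 2))
print(f"EVM = {evm:.1%}")  # a few percent for this noise level
```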

Tiny lasers and vapor cells are already used in some commercial devices such as chip-scale atomic clocks, suggesting it might be feasible to build practical atom-based communications equipment.

With further development, atom-based receivers may offer many benefits over conventional radio technologies, according to the paper. For example, there is no need for traditional electronics that convert signals to different frequencies for delivery because the atoms do the job automatically. The antennas and receivers can be physically smaller, with micrometer-scale dimensions. In addition, atom-based systems may be less susceptible to some types of interference and noise. The atom-based mixer also can measure weak electric fields precisely.

Credit: 
National Institute of Standards and Technology (NIST)

A decade of renewable energy investment, led by solar, tops US $2.5 trillion

image: Solar power will have drawn half - USD 1.3 trillion - of the USD 2.6 trillion in renewable energy investments made over the decade. Solar capacity alone will have grown from 25 GW at the beginning of 2010 to an expected 663 GW by the close of 2019 - enough to produce all the electricity needed each year by about 100 million average homes in the USA.

Image: 
UN Environment / FS / BNEF

Frankfurt / Nairobi - Global investment in new renewable energy capacity this decade - 2010 to 2019 inclusive - is on course to hit USD 2.6 trillion, with more gigawatts of solar power capacity installed than any other generation technology, according to new figures published today.

According to the Global Trends in Renewable Energy Investment 2019 report, released ahead of the UN Global Climate Action Summit, this investment is set to have roughly quadrupled renewable energy capacity (excluding large hydro) from 414 GW at the end of 2009 to just over 1,650 GW when the decade closes at the end of this year.

Solar power will have drawn half - USD 1.3 trillion - of the USD 2.6 trillion in renewable energy capacity investments made over the decade. Solar capacity alone will have grown from 25 GW at the beginning of 2010 to an expected 663 GW by the close of 2019 - enough to produce all the electricity needed each year by about 100 million average homes in the USA. (The USA had about 128 million households in 2018.)
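
A back-of-the-envelope check of that claim, under an assumed global average solar capacity factor of about 18 per cent (our assumption, not a figure from the report):

```python
# Rough arithmetic: annual generation from 663 GW of solar versus
# annual electricity use of ~100 million average US homes.
capacity_gw = 663
hours_per_year = 8760
capacity_factor = 0.18        # assumed global average for solar PV

generation_twh = capacity_gw * hours_per_year * capacity_factor / 1000
us_home_mwh = 10.6            # approximate average US household use

homes_millions = generation_twh * 1e6 / us_home_mwh / 1e6
print(f"~{generation_twh:.0f} TWh/yr, about {homes_millions:.0f} million homes")
```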

The global share of electricity generation accounted for by renewables reached 12.9 per cent in 2018, up from 11.6 per cent in 2017. This avoided an estimated 2 billion tonnes of carbon dioxide emissions last year alone - a substantial saving given global power sector emissions of 13.7 billion tonnes in 2018.

Including all major generating technologies (fossil and zero-carbon), the decade is set to see a net 2,366 GW of power capacity installed, with solar accounting for the largest single share (638 GW), coal second (529 GW), and wind and gas in third and fourth places (487 GW and 438 GW respectively).

The cost-competitiveness of renewables has also risen dramatically over the decade. The levelized cost of electricity (a measure that allows comparison of different methods of electricity generation on a consistent basis) is down 81 per cent for solar photovoltaics since 2009; that for onshore wind is down 46 per cent.
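
For reference, the levelized cost of electricity divides discounted lifetime costs by discounted lifetime generation. The sketch below shows this standard formula with purely illustrative inputs, not the report's:

```python
# Levelized cost of electricity: discounted lifetime costs divided
# by discounted lifetime energy output. Inputs are illustrative.
def lcoe(capex, annual_opex, annual_mwh, years, discount_rate):
    costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                        for t in range(1, years + 1))
    energy = sum(annual_mwh / (1 + discount_rate) ** t
                 for t in range(1, years + 1))
    return costs / energy  # cost per MWh

# Example: a small solar plant, 25-year life, 6% discount rate.
print(f"${lcoe(1_000_000, 15_000, 2_000, 25, 0.06):.0f}/MWh")
```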

"Investing in renewable energy is investing in a sustainable and profitable future, as the last decade of incredible growth in renewables has shown," said Inger Andersen, Executive Director of the UN Environment Programme.

"But we cannot afford to be complacent. Global power sector emissions have risen about 10 per cent over this period. It is clear that we need to rapidly step up the pace of the global switch to renewables if we are to meet international climate and development goals."

2018 sees quarter-trillion dollar mark exceeded again

The report, released annually since 2007, also continued its traditional look at yearly figures, with global investment in renewables capacity hitting USD 272.9 billion in 2018.

While this was 12 per cent down on the previous year, 2018 was the ninth successive year in which capacity investment exceeded USD 200 billion and the fifth successive year above USD 250 billion. It was also about three times the global investment in coal and gas-fired generation capacity combined.

The 2018 figure was achieved despite continuing falls in the capital cost of solar and wind projects, and despite a policy change that hit investment in China in the second half of the year.

A record 167 GW of new renewable energy capacity was completed in 2018, up from 160 GW in 2017.

Jon Moore, Chief Executive of BloombergNEF (BNEF), the research company that provides the data and analysis for the Global Trends report, commented: "Sharp falls in the cost of electricity from wind and solar over recent years have transformed the choice facing policy-makers. These technologies were always low-carbon and relatively quick to build. Now, in many countries around the world, either wind or solar is the cheapest option for electricity generation."

The report also tracks other, non-capacity investment in renewables - money going into technology and specialist companies. All of these types of investment showed increases in 2018. Government and corporate research and development was up 10 per cent at USD 13.1 billion, while equity raised by renewable energy companies on public markets was 6 per cent higher at USD 6 billion, and venture capital and private equity investment was up 35 per cent at USD 2 billion.

Said Svenja Schulze, Germany's Federal Minister for the Environment, Nature Conservation and Nuclear Safety: "The technologies to use wind, sun or geothermal energy are available; they are competitive and clean. Within 10 years Germany will produce two-thirds of its power based on renewables. We are demonstrating that an industrial country can phase out coal and, at the same time, nuclear energy without putting its economy at risk. We know that renewables make sense for the climate and for the economy. Yet we are not investing nearly enough to decarbonize power production, transport and heat in time to limit global warming to 2°C or ideally 1.5°C. If we want to achieve a safe and sustainable future, we need to do a lot more now in terms of creating an enabling regulatory environment and infrastructure that encourage investment in renewables."

"It is important to see renewables becoming first choice in many places," said Nils Stieglitz, President of Frankfurt School of Finance and Management. "But now we need to think beyond scaling-up renewables. Divesting from coal is just one issue within the broader field of sustainable finance. Investors increasingly care whether what they do makes sense in the context of a low-carbon and sustainable future."

China still leads, but renewables investment spreads

China has been by far the biggest investor in renewables capacity over this decade, having committed USD 758 billion between 2010 and the first half of 2019, with the U.S. second on USD 356 billion and Japan third on USD 202 billion.

Europe as a whole invested USD 698 billion in renewables capacity over the same period, with Germany contributing the most at USD 179 billion, and the United Kingdom USD 122 billion.

While China remained the largest single investor in 2018 (at USD 88.5 billion, down 38 per cent), renewable energy capacity investment was more spread out across the globe than ever last year, with 29 countries each investing more than USD 1 billion, up from 25 in 2017 and 21 in 2016.

Credit: 
Terry Collins Assoc

Apathy as an indicator of progression in Huntington's disease

Researchers from the brain cognition and plasticity group of the Bellvitge Biomedical Research Institute (IDIBELL) and the Neuroscience Institute of the University of Barcelona (UBNeuro) have led an innovative study that identifies modifications in the connectivity of cerebral white matter associated with the heterogeneous nature of apathy in Huntington's disease (HD), making it possible to use this syndrome as a biomarker of disease progression. Their findings, published in Neuroimage: Clinical, may also lead to personalized treatments for apathy as a multidimensional syndrome in other neurodegenerative disorders.

"Our goal was to study apathy as a multidimensional syndrome and explore, for the first time, the relationship between different subtypes of apathy and white matter connectivity in Huntington's disease", says Dr Estela Càmara, lead author.

Although apathy is usually identified with a general lack of interest and motivation, it is a much more complex syndrome that can be associated with three domains: cognitive ("I find it difficult to set future goals"), emotional ("I feel indifferent about problems that used to interest me") and automatic activation deficit ("I need a push to start things"). "Distinguishing the white matter configuration of each subtype in HD can allow us to define disease profiles across the spectrum of apathy, facilitating its diagnosis and treatment," adds Audrey DePaepe, Fulbright student and first author of the study.

"Our study proposes a new, tailored methodology that uses specific apathy measures and an optimized protocol to study the syndrome accurately while taking the underlying heterogeneity of HD at the individual level into account", explains Dr Sierpowska.

Forty-six patients with Huntington's disease and thirty-five healthy controls underwent a psychiatric evaluation of apathy in its three domains, in addition to several imaging tests, to study whether individual differences in specific cortico-striatal tracts predicted global apathy and its subdomains.

The researchers found that apathy profiles can follow different timelines; for example, the self-activation deficit domain manifests before the motor one. "This relationship is very relevant to corroborate the progressive nature of apathy as a biomarker in HD and its potential to capture the progression of neurodegeneration," explains Dr Càmara. These results also have implications for the differential diagnosis of apathy subtypes and, subsequently, for the design of more individualized pharmacological treatments.

The study also involved several hospitals in Barcelona, including Bellvitge University Hospital, Hospital de la Santa Creu i Sant Pau, Hospital Clínic of Barcelona and Vergè de la Mercè Hospital. This allowed the researchers to study a larger sample of patients, which is of particular importance for a rare disease such as Huntington's disease.

Credit: 
IDIBELL-Bellvitge Biomedical Research Institute

Researchers uncover role of earthquake motions in triggering a 'surprise' tsunami

video: Using a coupled earthquake-tsunami model on LRZ computing resources, LMU researchers were able to uncover the cause of the 2018 Palu Bay earthquake's devastation.

Image: 
LMU

In newly published research, an international team of geologists, geophysicists, and mathematicians show how coupled computer models can accurately recreate the conditions leading to the world's deadliest natural disasters of 2018, the Palu earthquake and tsunami, which struck western Sulawesi, Indonesia in September last year. The team's work was published in Pure and Applied Geophysics.

The tsunami was as surprising to scientists as it was devastating to communities in Sulawesi. It occurred near an active plate boundary, where earthquakes are common. Yet the earthquake caused a major tsunami even though it primarily offset the ground horizontally--large-scale tsunamis are typically caused by vertical motions.

Researchers were at a loss--what happened? How was the water displaced to create this tsunami: by landslides, faulting, or both? Satellite data of the surface rupture suggest relatively straight, smooth faults, but they do not cover areas offshore, such as the critical Palu Bay. Researchers wondered--what is the shape of the faults beneath Palu Bay, and is it important for generating the tsunami? This earthquake was extremely fast. Could rupture speed have amplified the tsunami?

Using a supercomputer operated by the Leibniz Supercomputing Centre, a member of the Gauss Centre for Supercomputing, the team showed that the earthquake-induced movement of the seafloor beneath Palu Bay itself could have generated the tsunami, meaning the contribution of landslides is not required to explain the tsunami's main features. The team suggests an extremely fast rupture on a straight, tilted fault within the bay. In their model, slip is mostly lateral, but also downward along the fault, resulting in anywhere from 0.8 metres to 2.8 metres vertical seafloor change that averaged 1.5 metres across the area studied. Critical to generating this tsunami source are the tilted fault geometry and the combination of lateral and extensional strains exerted on the region by complex tectonics.

The scientists come to this conclusion using a cutting-edge, physics-based earthquake-tsunami model. The earthquake model, based on earthquake physics, differs from conventional data-driven earthquake models, which fit observations with high accuracy at the cost of potential incompatibility with real-world physics. It instead incorporates models of the complex physical processes occurring at and off of the fault, allowing researchers to produce a realistic scenario compatible both with earthquake physics and regional tectonics.

The researchers evaluated the earthquake-tsunami scenario against multiple available datasets. Sustained supershear rupture velocity, meaning the earthquake front moved faster than the seismic waves near the slipping faults, is required to match the simulation to observations. The modeled tsunami wave amplitudes match the available wave measurements, and the modeled inundation elevation (defined as the sum of the ground elevation and the maximum water height) qualitatively matches field observations. This approach offers a rapid, physics-based evaluation of the earthquake-tsunami interactions during this puzzling sequence of events.

"Finding that earthquake displacements probably played a critical role generating the Palu tsunami is as surprising as the very fast movements during the earthquake itself," said Thomas Ulrich, PhD student at Ludwig Maximilian University of Munich and lead author of the paper. "We hope that our study will launch a much closer look on the tectonic settings and earthquake physics potentially favouring localized tsunamis in similar fault systems worldwide."

Credit: 
Gauss Centre for Supercomputing