Tech

New bacteriophage fully characterized and sequenced

image: The only peer-reviewed journal dedicated to bacteriophage research and its applications in medicine, agriculture, aquaculture, veterinary applications, animal production, food safety, and food production

Image: 
Mary Ann Liebert, Inc., publishers

New Rochelle, NY, January 27, 2020--Researchers have identified a new bacteriophage that can infect and destroy bacteria in the genus Pantoea, for which few bacteriophages have been identified and characterized. Details of the isolation, characterization, and full genome sequencing of this bacteriophage are published in the new Genome Introduction section of PHAGE: Therapy, Applications, and Research, a peer-reviewed journal from Mary Ann Liebert, Inc., publishers. The full-text article is freely available on the PHAGE website through February 27, 2020.

The article, entitled "Isolation and Characterization of vB_PagP-SK1, a T7-Like Phage Infecting Pantoea agglomerans," was coauthored by John Stavrinides of the University of Regina (Canada) and colleagues from the University of Otago (Dunedin, New Zealand), the Roy Romanow Provincial Laboratory (Regina, Canada), and the Cadham Provincial Laboratory (Winnipeg, Canada).

Members of the Pantoea genus can cause disease in plant and animal hosts, as well as opportunistic infections in humans. The researchers showed that the new bacteriophage vB_PagP-SK1 has a broad host range, infecting 15 strains of Pantoea across three species groups - primarily P. agglomerans - together with one strain of Erwinia billingiae. vB_PagP-SK1 belongs to the Teseptimavirus genus and is most closely related to the E. amylovora phage vB_EamP-L1.

"The first Genome Introduction describes the isolation and characterization of a novel bacteriophage (vB_PagP-SK1)," states Dr. Andrew Millard, Section Editor of PHAGE and University of Leicester, U.K. "Utilizing a novel high throughput microplate assay for host range studies, the phage vB_PagP-SK1 was shown to infect multiple species within the genus Pantoea and Erwinia. Combined with genomic data, it demonstrated that genetically similar bacteriophages can have different host ranges."

Credit: 
Mary Ann Liebert, Inc./Genetic Engineering News

Recreational fishers catching more sharks and rays

Recreational fishers are increasingly targeting sharks and rays, a situation that is causing concern among researchers.

A new study by an international team of scientists reveals that recreational catches of these fishes have gradually increased over the last six decades around the world, now accounting for 5-6 per cent of the total catches taken for leisure or pleasure.

In their paper published in Frontiers in Marine Science, the experts explain that almost 1 million tonnes of fish are being extracted from marine waters by recreational fishers every year. Overall, these recreational fish catches have grown from 280,000 tonnes per year in the 1950s to around 900,000 tonnes in the mid-2010s. Of this total amount, some 54,000 tonnes are comprised of sharks and rays.

"The rise in shark and ray catches started in the 1990s and has been particularly sharp in Oceania and South America," said Kátia Freire, lead author of the study and a professor at the Universidade Federal de Sergipe in Brazil. "However, we may actually be underestimating the real amounts, as accessing recreational fisheries data is particularly difficult. Most countries do not compile these data or those that do, do not incorporate them into their national fisheries statistics that are reported to the Food and Agriculture Organization of the United Nations."

The rise in shark and ray catches for recreational purposes is particularly troubling because many species are already threatened by the commercial fishing industry and by illegal fishers.

"The problem with sharks and rays is that even if they are thrown back into the ocean, a practice that is not uncommon as many recreational fishers now practice 'catch-and-release,' not all individuals survive," said Daniel Pauly, co-author of the study and principal investigator of the Sea Around Us initiative at the University of British Columbia's Institute for the Ocean and Fisheries. "For example, 98 per cent of scalloped hammerheads die."

Sharks are slow to grow and mature, which means they produce only a small number of young in their lifetimes. If many individuals are caught before they have been able to reproduce sufficiently, their populations start to dwindle.

Slow growth and late maturation also make increasingly popular recreational practices such as beach-based shark fishing problematic.

"In Australia, a group of recreational fishers lands large tiger sharks and hammerheads from the beach. These large animals are essential to population health and are unlikely to survive the experience of a lengthy fight and subsequently being dragged up the beach," said Jessica Meeuwig, co-author of the study and director of the Marine Futures Lab at the University of Western Australia. "Given the threatened status of these species globally, such practices are inappropriate."

Even though information is scarce and disparate, the research team was able to reconstruct what has been, and is being, caught in 125 countries over the past 60-plus years.

"Thus, we have assembled the first comprehensive global estimate of marine recreational catches, which is a major accomplishment," said Dirk Zeller, co-author of the study and director of the Sea Around Us - Indian Ocean at the University of Western Australia. "Even approximate estimates are better than saying 'we have no data,' which translates into 'there are zero recreational catches,' a statement that is not true for most countries and that leads to an under-valuation of recreational fisheries and their impact on fish populations."

Credit: 
University of British Columbia

APS tip sheet: Network dynamics of online polarization

image: Interaction dynamics reveal the mechanisms behind online polarization and social media echo chambers

Image: 
Baumann et al., Physical Review Letters (2020)

In the last decade, social media communities have become increasingly polarized. Now, scientists have created a model to explain the dynamics of radicalization. The research examines the emergence of online polarization and the echo chamber effect -- in which a person reinforces their beliefs by interacting exclusively with like-minded people. The model developed by Baumann et al. builds on the premise that online users radicalize when they segregate into groups with shared viewpoints. The model showed that online discourse becomes less stable when echo chambers appear and when topics are more controversial. It also accurately reproduced real data from Twitter on polarization and debates. Studying how online polarization emerges can improve scientists' understanding of social network dynamics.
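The press release describes the model only at a high level, so the short Python sketch below is a generic, heavily simplified opinion-dynamics toy rather than the Baumann et al. model itself; the agent count, homophily exponent, influence strength, and tanh-based update rule are illustrative assumptions. It captures only the qualitative mechanism described above: when users interact mostly with like-minded users and each interaction reinforces their view, opinions drift apart into polarized camps.

    # Toy opinion-dynamics sketch (illustrative assumptions only; not Baumann et al.).
    import numpy as np

    rng = np.random.default_rng(0)
    N = 500          # number of simulated users (assumed)
    STEPS = 20000    # number of pairwise interactions (assumed)
    K = 3.0          # social-influence strength (assumed)
    BETA = 3.0       # homophily exponent: higher means more like-minded contacts (assumed)
    DT = 0.1         # update step size (assumed)

    opinions = rng.uniform(-1, 1, N)   # each user starts with a mild opinion in [-1, 1]

    for _ in range(STEPS):
        # Pick a user, then a partner, with probability that decays with opinion
        # distance (homophily): like-minded pairs interact far more often,
        # which is the echo-chamber ingredient.
        i = rng.integers(N)
        dist = np.abs(opinions - opinions[i]) + 1e-6
        weights = dist ** (-BETA)
        weights[i] = 0.0
        j = rng.choice(N, p=weights / weights.sum())

        # Each interaction pulls user i toward a saturating function of the
        # partner's view, so repeated exposure to one-sided opinions pushes i
        # toward an extreme.
        opinions[i] += DT * (-opinions[i] + K * np.tanh(opinions[j]))

    # With strong homophily and influence, opinions typically split into two camps.
    print("mean |opinion|:", round(float(np.abs(opinions).mean()), 2))
    print("fraction with |opinion| > 1.5:", round(float((np.abs(opinions) > 1.5).mean()), 2))

Run with the assumed parameters, the sketch produces a bimodal opinion distribution, a toy analogue of the echo chambers and radicalization dynamics the study describes.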

Credit: 
American Physical Society

NASA's Aqua satellite reveals Tropical Cyclone Esami's dissipation

image: On January 27 at 4:05 a.m. EST (0905 UTC), the MODIS instrument aboard NASA's Aqua satellite provided a visible image of the remnant clouds of former Tropical Cyclone Esami in the Southern Indian Ocean.

Image: 
NASA/NRL

Tropical Cyclone Esami formed in the Southern Indian Ocean and just three days later, visible imagery from NASA's Aqua satellite confirmed the storm had dissipated.

Tropical Cyclone Esami formed on January 24 at 4 p.m. EST (2100 UTC) about 764 miles east-southeast of Port Louis, Mauritius. Esami's maximum sustained winds peaked the next day on Jan. 25 at 45 knots (52 mph/83 kph).

On January 26 at 4 p.m. EST (2100 UTC), the Joint Typhoon Warning Center issued their final warning on Tropical Cyclone Esami. At that time, Esami had weakened to a depression with maximum sustained winds near 30 knots (34.5 mph/55.5 kph). It was located near latitude 29.8 degrees south and longitude 77.9 degrees east, about 1,260 miles east-southeast of Port Louis, Mauritius. The depression was moving to the south-southeast and was dissipating.

When NASA's Aqua satellite passed over the Southern Indian Ocean on Jan. 27 at 4:05 a.m. EST (0905 UTC), the Moderate Resolution Imaging Spectroradiometer or MODIS instrument provided a visible image that revealed the remnants of Esami were dissipating.

Typhoons and hurricanes are the most powerful weather events on Earth. NASA's expertise in space and scientific exploration contributes to essential services provided to the American people by other federal agencies, such as hurricane weather forecasting.

Credit: 
NASA/Goddard Space Flight Center

NASA finds Tropical Cyclone Diane's quick fade

image: On Jan. 27 at 4 a.m. EST (0900 UTC) the MODIS instrument that flies aboard NASA's Aqua satellite provided a visible image of Tropical Cyclone Diane that showed wind shear had blown the bulk of clouds southwest of its center and was causing the storm to dissipate.

Image: 
NASA/NRL

Tropical Cyclone Diane formed late on January 24 and within two days had weakened to a remnant low-pressure system in the Southern Indian Ocean. NASA's Aqua satellite provided a look at its remnants on Jan. 27.

On Jan. 24 by 4 p.m. EST (2100 UTC), Diane formed just 38 nautical miles northwest of Port Louis, Mauritius. On Jan. 25, Diane reached maximum sustained winds near 45 knots (52 mph/83 kph) and that strength was maintained until early on Jan. 26.

At 4 p.m. EST (2100 UTC) on Jan. 26, the Joint Typhoon Warning Center issued the final warning on Diane. At that time, Diane was located near latitude 23.8 degrees south and longitude 70.1 degrees east, about 746 nautical miles east-southeast of Port Louis, Mauritius. Maximum sustained winds dropped to 40 knots (46 mph/74 kph) and were weakening because of vertical wind shear.

In general, wind shear is a measure of how the speed and direction of winds change with altitude. Tropical cyclones are like rotating cylinders of winds. Each level needs to be stacked vertically on top of the one below in order for the storm to maintain strength or intensify. Wind shear occurs when winds at different levels of the atmosphere push against the rotating cylinder of winds, weakening the rotation by pushing it apart at different levels. Wind shear from the northeast was pushing against Diane, sending the bulk of clouds southwest of the center.

On Jan. 27 at 4 a.m. EST (0900 UTC), the Moderate Resolution Imaging Spectroradiometer or MODIS instrument that flies aboard NASA's Aqua satellite provided a visible image of Diane that showed wind shear had taken a final toll on the storm. The MODIS image revealed that the bulk of clouds associated with the former tropical storm had been blown to the southwest of the weak center of circulation and that Diane was dissipating.

NASA's Aqua satellite is one in a fleet of NASA satellites that provide data for hurricane research.

Tropical cyclones/hurricanes are the most powerful weather events on Earth. NASA's expertise in space and scientific exploration contributes to essential services provided to the American people by other federal agencies, such as hurricane weather forecasting.

Credit: 
NASA/Goddard Space Flight Center

NASA catches the dying remnants of Tropical Cyclone 12P

image: On January 27 at 6:50 a.m. EST (1150 UTC), the MODIS instrument aboard NASA's Aqua satellite gathered temperature information about Tropical Depression 12P's cloud tops. MODIS found one small area of powerful thunderstorms (red) where temperatures were as cold as or colder than minus 70 degrees Fahrenheit (minus 56.6 Celsius). The storm was dissipating.

Image: 
NASA/NRL

Tropical Cyclone 12P formed in the Southern Pacific Ocean on January 25 and two days later, NASA's Aqua satellite observed the storm's demise.

Tropical Cyclone 12P formed on January 25 at 10 a.m. EST (1500 UTC) about 142 nautical miles southeast of Niue. That was the peak for 12P, as maximum sustained winds reached 35 knots (40 mph).

On January 26, the Joint Typhoon Warning Center issued their final warning on Tropical Cyclone 12P at 10 a.m. EST (1500 UTC). At that time, 12P had weakened to a depression with maximum sustained winds near 25 knots. It was located near latitude 22.4 degrees south and longitude 166.4 degrees west, about 279 miles southeast of Niue. The depression was moving southeast and weakening to a remnant low pressure area.

By January 27, the remnants of 12P were dissipating under adverse conditions. NASA's Aqua satellite passed over the Southern Pacific Ocean and the Moderate Resolution Imaging Spectroradiometer or MODIS instrument provided an infrared view of the depression at 6:50 a.m. EST (1150 UTC). MODIS found that one small area of powerful thunderstorms remained where temperatures were as cold as or colder than minus 70 degrees Fahrenheit (minus 56.6 Celsius). NASA research has shown storms with cloud top temperatures that cold (and high in the troposphere) have the ability to generate heavy rainfall. That area of heavy rainfall was quickly dissipating, and 12P's remnants were expected to dissipate later in the day.

Typhoons and hurricanes are the most powerful weather events on Earth. NASA's expertise in space and scientific exploration contributes to essential services provided to the American people by other federal agencies, such as hurricane weather forecasting.

By Rob Gutro
NASA's Goddard Space Flight Center

Credit: 
NASA/Goddard Space Flight Center

Political Islamophobia may look different online than in person

UNIVERSITY PARK, Pa. -- Islamophobia was rampant on social media during the midterm elections, but researchers say future Muslim candidates running for office should know that the hatred they see online may be different than what they experience on the campaign trail.

In a study, the researchers found that the majority of anti-Muslim tweets related to the 2018 midterm elections were sent by either a select few thought leaders with large followings on social media, or by bots -- software that autonomously tweets or retweets content. Additionally, Muslim candidates' face-to-face experiences with constituents were generally more positive than what they experienced online.

Shaheen Pasha, an assistant teaching professor at Penn State, said the findings help dispel the myth that the vast majority of people in the U.S. are anti-Muslim.

"People retweet these messages of hate because they feel like they're jumping on a bandwagon where they think everyone feels that way," Pasha said. "But in reality, it's just a handful of people and a lot of bots who are creating this content. These hateful messages are snowballing even though the majority of people may not agree or actually feel that way."

According to the researchers, Islamophobia ramped up on social media as Muslim candidates ran for seats in the U.S. Senate and House of Representatives in the 2018 midterm elections. Ilhan Omar and Rashida Tlaib, who campaigned during the election and ended up being the first two Muslim women elected to Congress, were two of the primary targets.

Pasha said that as more Muslims run for political office, she and the other researchers wanted to do a deep dive into where Islamophobia on social media was coming from, to learn more about what future Muslim candidates can expect.

"We're going to continue seeing these messages in the next election, especially with candidates who are more vocal or more visibly identifiable as Muslims," Pasha said. "We wanted to put together a roadmap for future candidates that let them know what they can expect to see online, what to expect on the ground as they speak with the public, and what this means for them as they go out in the public eye."

For the study, the researchers surveyed 40 Muslim Americans who ran for office in the 2018 midterm election about their experiences during their campaigns. The researchers also collected data about the candidates' social media activity and tweets about the candidates between Sept. 30 and Nov. 4, 2018. Tweets were coded for hate speech and Islamophobic or xenophobic language.

The researchers found that while the Muslim candidates reported little Islamophobia while meeting with constituents face to face, there was a narrative surrounding the candidates on social media that was "disproportionately Islamophobic, xenophobic, racist, and misogynistic," according to the report, recently published by the Social Science Research Council.

For example, 40% of the 90,193 tweets referencing Omar within the study window contained Islamophobic or anti-immigrant language. Another 10% contained Israel-related hate speech. Out of the 12,492 tweets tagging Tlaib, 28% were Islamophobic or anti-immigrant and 22% attacked her sympathy for Palestine.

In contrast, while one-third of survey respondents reported "high" or "very high" levels of Islamophobia during their campaigns, almost 40% said they experienced "little" or "no" Islamophobia. Additionally, 74% said they rarely or never encountered people who believe Islam is evil or a religion of hate, and 67% said they rarely or never encountered people who think Islam supports terrorism.

"When the candidates met people on the ground, there was some skepticism, but it wasn't coming from a place of hatred and vitriol like we saw online," Pasha said. "Face to face, people still had questions, but they were more about the issues and about whether they were electable. It was less to do with their religion."

Pasha said she hopes the research can help prepare and encourage other Muslim candidates to run for office.

"Omar and Tlaib have started a movement where we're seeing more Muslim representative in elections, and I think we're going to see more of that moving forward," Pasha said. "It's important for these candidates to know what to expect when they hit the campaign trail, and to know that the majority of people aren't spreading this hatred and vitriol, may help them believe they can do it."

Credit: 
Penn State

Seismic biomarkers in Japan Trench fault zone reveal history of large earthquakes

In the aftermath of the devastating Tohoku-Oki earthquake that struck off the coast of Japan in March 2011, seismologists were stunned by the unprecedented 50 meters of shallow displacement along the fault, which ruptured all the way to the surface of the seafloor. This extreme slip at shallow depths exacerbated the massive tsunami that, together with the magnitude 9.1 earthquake, caused extensive damage and loss of life in Japan.

In a new study, published January 27 in Nature Communications, researchers used a novel technique to study the faults in the Japan Trench, the subduction zone where the Tohoku-Oki earthquake struck. Their findings reveal a long history of large earthquakes in this fault zone, where they found multiple faults with evidence of more than 10 meters of slip during large earthquakes.

"We found evidence of many large earthquakes that have ruptured to the seafloor and could have generated tsunamis like the one that struck in 2011," said coauthor Pratigya Polissar, associate professor of ocean sciences at UC Santa Cruz.

Japanese researchers looking at onshore sediment deposits have found evidence of at least three similar tsunamis having occurred in this region at roughly 1,000-year intervals. The new study suggests there have been even more large earthquakes on this fault zone than those that left behind onshore evidence of big tsunamis, said coauthor Heather Savage, associate professor of Earth and planetary sciences at UC Santa Cruz.

Savage and Polissar have developed a technique for assessing the history of earthquake slip on a fault by analyzing organic molecules trapped in sedimentary rocks. Originally synthesized by marine algae and other organisms, these "biomarkers" are altered or destroyed by heat, including the frictional heating that occurs when a fault slips during an earthquake. Through extensive laboratory testing over the past decade, Savage and Polissar have developed methods for quantifying the thermal evolution of these biomarkers and using them to reconstruct the temperature history of a fault.

The Japan Trench Fast Drilling Project (JFAST) drilled into the fault zone in 2012, extracting cores and installing a temperature observatory. UCSC seismologist Emily Brodsky helped organize JFAST, which yielded the first direct measurement of the frictional heat produced by the fault slip during an earthquake. This heat dissipates after the earthquake, however, so the signal is small and transient.

"The biomarkers give us a way to detect permanent changes in the rock that preserve a record of heating on the fault," Savage said.

For the new study, the researchers examined the JFAST cores, which extended through the fault zone into the subducting plate below. "It's a complex fault zone, and there were a lot of faults throughout the core. We were able to say which faults had evidence of large earthquakes in the past," Savage said.

One of their goals was to understand whether some rock types in the fault zone were more prone to large slip in an earthquake than other rocks. The cores passed through layers of mudstones and clays with different frictional strengths. But the biomarker analysis showed evidence of large seismic slip on faults in all the different rock types. The researchers concluded that differences in frictional properties do not necessarily determine the likelihood of large shallow slip or seismic hazard.

Savage and Polissar began working on the biomarker technique as postdoctoral researchers at UC Santa Cruz, publishing their first paper on it with Brodsky in 2011. They continued developing it as researchers at the Lamont-Doherty Earth Observatory of Columbia University, before returning to UC Santa Cruz as faculty members in 2019. Hannah Rabinowitz, the first author of the new paper, worked with them as a graduate student at Columbia and is now at the U.S. Department of Energy.

"We've tested this technique in different rocks with different ages and heating histories, and we can now say yes, there was an earthquake on this fault, and we can tell if there was a large one or many small ones," Savage said. "We can now take this technique to other faults to learn more about their histories."

In addition to Rabinowitz, Savage, and Polissar, the coauthors of the paper include Christie Rowe and James Kirkpatrick at McGill University. This work was funded by the National Science Foundation. The JFAST project was sponsored by the Integrated Ocean Drilling Program (IODP).

Credit: 
University of California - Santa Cruz

Study connects marine heat wave with spike in whale entanglements

image: A rope from fishing gear can be seen as this entangled humpback whale breaches.

Image: 
NOAA-NMFS West Coast Region

Climate change is increasing the frequency and severity of marine heat waves--warm water anomalies that disrupt marine ecosystems--and this is creating new challenges for fisheries management and ocean conservation. A new study shows how the record-breaking marine heat wave of 2014 to 2016 caused changes along the U.S. West Coast that led to an unprecedented spike in the numbers of whales that became entangled in fishing gear.

"With the ocean warming, we saw a shift in the ecosystem and in the feeding behavior of humpback whales that led to a greater overlap between whales and crab fishing gear," said Jarrod Santora, a researcher in applied mathematics at UC Santa Cruz and first author of the study, published January 27 in Nature Communications.

Santora, who is also affiliated with the NOAA Southwest Fisheries Science Center, uses data-driven models of marine ecosystems to inform fishery management and conservation. As science advisor for a working group convened to address the whale entanglement problem, he has been providing his analyses to state and federal agencies to help them make management decisions that can reduce the risk of entanglement.

"It was a perfect storm of events over those three years, but we now have the ability to prevent that from happening again," Santora said. "We've developed a risk assessment and mitigation program, we're doing aerial surveys, and we're providing ecosystem-based indicators to the state resource managers so they can make informed decisions. There's a huge team of people working on this."

The high productivity of the California Current is supported by wind-driven upwelling of cool, nutrient-rich water along the coast, which sustains large populations of prey (such as krill, anchovy, and sardines) that attract whales and other predators. The intensity of upwelling and the extent of cool enriched water off the coast varies from year to year, but the extreme warming event in 2014-16 (which became known as the "warm blob") compressed this prime habitat into a very narrow band along the coast, Santora explained.

"Predators that are normally more spread out offshore all moved inshore because that's where the food was," he said. "Krill populations always take a hit during warming events, but we started to see an increase in anchovy. Humpback whales are unique in their ability to switch between krill and small fish, so during those years they moved inshore after the anchovy."

That shift brought an unusual number of whales into areas where they were more likely to encounter crab fishing gear. Whale entanglements, which averaged about 10 per year prior to 2014, shot up to 53 confirmed entanglements in 2015 and remained high at 55 confirmed entanglements in 2016.

Further complicating the situation was another consequence of the marine heat wave, an unprecedented bloom of toxic algae along the West Coast. When scientists detected dangerous levels of the neurotoxin domoic acid in Dungeness crabs, the opening of the 2015-16 crab fishing season was delayed until late March 2016. Normally, crab fishing activity is highest in November and December, but in 2016, peak fishing activity coincided with the arrival of migrating whales off California in April and May.

"All this gear was going out right during the peak arrival of whales, so that made things worse," Santora said. "But 2016 is not the whole story. We started to see an increase in whale entanglements in late 2014, well before the delayed crab season, and that was due to the habitat compression along the coast."

Another factor, he said, is the ongoing recovery of whale populations. Conservation efforts that began in the 1960s have enabled many populations that were decimated by commercial whaling to begin making a comeback. Although some North Pacific humpback whale populations are still considered threatened or endangered, their overall numbers have been increasing.

According to Santora, the events of 2014-16 show how important it is for scientists to work closely and communicate clearly with fisheries managers and other stakeholders. One positive outcome of the entanglement crisis was the creation of the California Dungeness Crab Fishing Gear Working Group, which includes commercial fishermen, state and federal resource managers, conservationists, and scientists. The group has developed a Risk Assessment and Mitigation Program to support collaborative efforts to reduce entanglements.

"Nobody wants to entangle whales," Santora said. "People are working to develop ropeless gear, but broad application of that new technology is still far in the future. For now, the best we can do is to monitor the situation closely and get ecosystem science information to the people who need it."

Credit: 
University of California - Santa Cruz

Researchers identify opportunities to advance genomic medicine

Genetic discoveries over the past 25 years have substantially advanced understanding of both rare and common diseases, furthering the development of treatment and prevention for ailments ranging from inflammatory bowel diseases to diabetes, according to a study published in the journal Nature in January.

The paper, titled "A brief history of human disease genetics," reviews breakthroughs in the association of specific genes with particular disorders, progress mostly driven by advances in technology and analytical approaches. The study also provides a framework for medical innovation to improve clinical care in the field.

"The future of medicine will increasingly focus on delivering care that is tailored to an individual's genetic makeup and patterns," says Judy H. Cho, MD, Dean of Translational Genetics at the Icahn School of Medicine at Mount Sinai, Director of The Charles Bronfman Institute for Personalized Medicine, and a co-author of the report. "Applying this knowledge will help us to enhance personalized health and medicine for patients at The Mount Sinai Hospital now and for years to come."

The study tracks advances in genomics over the past two decades through better technology, expanded access to vast and diverse data, and the development of other foundational resources and tools. The researchers also note the evolution of how diseases were discovered and identified.

Another major advancement is the increasing availability of large prospective population-based cohorts, known as biobanks. These biobanks often include tissue samples from individuals of many ethnic backgrounds and provide access to a wide range of demographic, clinical, and lifestyle data. The study finds that systematic approaches to data sharing, such as global collaborative networks, are critical in characterizing new disorders.

Today, genetic testing for individuals with symptoms and for at-risk relatives occurs routinely; it ranges from cancer screenings to noninvasive prenatal tests. But challenges remain, including the absence of evidence-based guidelines to support health care recommendations, disparities in testing across society, and the lack of experience in genomics by some health care professionals.

The researchers say the biggest task in the coming decade will be to optimize and broadly implement strategies that use human genetics to enhance understanding of health and disease, and maximize the benefits of treatment. This will require joint efforts by industry and academia to establish:

comprehensive inventories of genotype-phenotype relationships across populations and environments;

proactive measures to address entrenched disparities in scientific capacity and clinical opportunities that benefit individuals and societies across the world;

a systematic assessment of variant and gene-level function across cell types, states, and exposures;

improved strategies for turning basic knowledge from assessments into fully developed molecular, cellular, and physiological models of disease development; and

application of these biological insights to drive new treatment and preventive options.

Credit: 
The Mount Sinai Hospital / Mount Sinai School of Medicine

The great e-scooter hack

image: Computer science experts at UTSA have published the first review of the security and privacy risks posed by e-scooters and their related software services and applications.

Image: 
Photo by MusicFox Fx on Unsplash

(San Antonio -- January 27, 2020) Micromobility vehicles, such as e-scooters, zip in and out of traffic. In San Antonio alone, over 12,000 scooters are on the road, and micromobility is increasingly seen as a way to help ease traffic congestion.

However, new research out of UTSA finds e-scooters have risks beyond the perils of potential collisions. Computer science experts at UTSA have published the first review of the security and privacy risks posed by e-scooters and their related software services and applications.

"We were already investigating the risks posed by these micromobility vehicles to pedestrians' safety. During that study, we also realized that besides significant safety concerns, this new transportation paradigm brings forth new cybersecurity and privacy risks as well," noted Murtuza JaAccording to the review, which will soon appear in the proceedings of the 2nd ACM Workshop on Automotive and Aerial Vehicle Security (AutoSec 2020), hackers can cause a series of attacks, including eavesdropping on users and even spoof GPS systems to direct riders to unintended locations. Vendors of e-scooters can suffer denial-of-service attacks and data leaks.

"We've identified and outlined a variety of weak points or attack surfaces in the current ride-sharing, or micromobility, ecosystem that could potentially be exploited by malicious adversaries right from inferring the riders' private data to causing economic losses to service providers and remotely controlling the vehicles' behavior and operation," said Jadliwala.

Some e-scooter models communicate with the rider's smartphone over a Bluetooth Low Energy channel. Someone with malicious intent could eavesdrop on these wireless channels and listen to the data exchanged between the scooter and the rider's smartphone app by means of easily and cheaply accessible hardware and software tools such as Ubertooth and Wireshark.

Those who sign up to use e-scooters also offer up a great deal of personal and sensitive data beyond just billing information. According to the study, providers automatically collect other analytics, such as location and individual vehicle information. This data can be pieced together to generate an individual profile that can even include a rider's preferred route, personal interests, and home and work locations.

"Cities are experiencing explosive population growth. Micromobility promises to transport people in a more sustainable, faster and economical fashion," added Jadliwala. "To ensure that this industry stays viable, companies should think not only about rider and pedestrian safety but also how to protect consumers and themselves from significant cybersecurity and privacy threats enabled by this new technology."

Credit: 
University of Texas at San Antonio

More rain and less snow means increased flood risk

As the world warms and precipitation that would have generated snowpack instead creates rain, the western U.S. could see larger floods, according to new Stanford research.

An analysis of over 400 watersheds from 1980 to 2016 shows that winter floods driven by rainfall can be more than 2.5 times as large as those driven by snowmelt. The researchers also found that flood sizes increase exponentially as a higher fraction of precipitation falls as rain, meaning the size of floods increased at a faster rate than the increase in rain.

The study, which appears in the January issue of Water Resources Research, is particularly salient for people planning infrastructure while taking global warming into account. As Northern Californians saw during the Oroville Dam crisis in 2017 when a spillway failure forced more than 180,000 residents to evacuate, warm storms can pose big problems.

"The Oroville Dam crisis is a good example of how existing infrastructure is already vulnerable to flooding," said lead author Frances Davenport, a PhD student in Earth system science at Stanford's School of Earth, Energy & Environmental Sciences (Stanford Earth). "These results show that warming alone - even without changes in precipitation amounts - could lead to changes in the size of floods."

While it might seem obvious that a greater fraction of precipitation falling as rain would cause bigger floods, the new research reveals that rainfall and flood size have a non-linear relationship. For example, a storm with 100 percent rain has 25 percent more liquid precipitation than a storm with 80 percent rain, but the researchers found that the average flood is 33 percent larger, meaning that the floods grow at a faster rate than the increase in liquid precipitation.
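As a purely illustrative restatement of the percentages quoted above (these are the article's example numbers, not the study's underlying data), the non-linear relationship can be written out in a few lines of Python:

    # Worked numbers from the example above (illustrative only).
    # A storm that is 100% rain carries 1.00 / 0.80 = 1.25 times the liquid
    # precipitation of an otherwise identical storm that is 80% rain.
    liquid_increase = 1.00 / 0.80 - 1      # 0.25 -> 25% more liquid precipitation
    reported_flood_increase = 0.33         # average flood reported to be 33% larger
    print(f"liquid precipitation increase: {liquid_increase:.0%}")
    print(f"average flood size increase:   {reported_flood_increase:.0%}")
    # The flood grows faster than the liquid input, which is the non-linear
    # (greater-than-proportional) relationship the researchers describe.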

Future infrastructure needs

The results could inform management of reservoirs that not only secure the region's water supply but also provide a buffer for flooding, according to senior author Noah Diffenbaugh, the Kara J. Foundation Professor at Stanford Earth.

"Planners are being asked to project forward what kind of conditions today's infrastructure will have to withstand in the coming years and decades," Diffenbaugh said. "Both the shape and magnitude of our non-linear results have the potential to benefit planners in Western states that are trying to integrate the changing nature of snow hydrology into their decisions."

The researchers evaluated 410 watersheds using daily streamflow measurements from the U.S. Geological Survey to identify the largest precipitation events and the time periods with the highest streamflow. They then analyzed these events by comparing the amount of rain, snow and snowmelt leading up to and following each event.

In collaboration with economist and co-author Marshall Burke, an assistant professor of Earth system science, the researchers adapted methods from econometrics - a branch of applied statistics - to account for other influences like soil characteristics, slope and land-use change, in order to tease out the impact of precipitation alone. According to the authors, the analysis is one of the early attempts to apply these econometric techniques to hydrology.

"By using this econometric method, we can look at how flooding has varied across the full range of historical variability in each watershed," Davenport said. "This allows us to identify patterns that may not yet be evident in long-term flooding trends."

The results are useful to water managers thinking about long-term flood risks, especially in areas expected to experience warming and continued variability in the total amount of precipitation, according to the researchers. They were motivated to focus their analyses on the western U.S. because the same dams and reservoirs used to store water for the dry season also provide flood control during the wet season, with snow playing an important role in each.

"We've seen in recent years the real-time tension between keeping water in the reservoir so it can be used later in the year, and letting it out so that there's space available to prevent flooding from the next storm," said Diffenbaugh, who is also the Kimmelman Family Senior Fellow at the Woods Institute for the Environment. "States like California are well aware that as the snow hydrology of the western U.S. continues to change, the infrastructure that was designed and built around the old climate of the last century will continue to be pushed to its limits. Our results shed new light on how rapidly planners can expect extreme runoff to intensify as precipitation becomes more dominated by rain throughout the region."

Credit: 
Stanford's School of Earth, Energy & Environmental Sciences

Tiny, ancient meteorites suggest early Earth's atmosphere was rich in carbon dioxide

image: These tiny meteorites, about half a millimeter across, fell into the ocean and were collected from the deep sea. Like the samples used in the new study, these more recent micrometeorites are made of iron.

Image: 
Donald Brownlee/University of Washington

Very occasionally, Earth gets bombarded by a large meteorite. But every day, our planet gets pelted by space dust, micrometeorites that collect on Earth's surface.

A University of Washington team looked at very old samples of these small meteorites to show that the grains could have reacted with carbon dioxide on their journey to Earth. Previous work suggested the meteorites ran into oxygen, contradicting theories and evidence that the Earth's early atmosphere was virtually devoid of oxygen. The new study was published this week in the open-access journal Science Advances.

"Our finding that the atmosphere these micrometeorites encountered was high in carbon dioxide is consistent with what the atmosphere was thought to look like on the early Earth," said first author Owen Lehmer, a UW doctoral student in Earth and space sciences.

At 2.7 billion years old, these are the oldest known micrometeorites. They were collected in limestone in the Pilbara region of Western Australia and fell during the Archean eon, when the sun was weaker than today. A 2016 paper by the team that discovered the samples suggested they showed evidence of atmospheric oxygen at the time they fell to Earth.

That interpretation would contradict the current understanding of our planet's early days, which holds that atmospheric oxygen rose during the "Great Oxidation Event," almost half a billion years later.

Knowing the conditions on the early Earth is important not just for understanding the history of our planet and the conditions when life emerged. It can also help inform the search for life on other planets.

"Life formed more than 3.8 billion years ago, and how life formed is a big, open question. One of the most important aspects is what the atmosphere was made up of -- what was available and what the climate was like," Lehmer said.

The new study takes a fresh look at interpreting how these micrometeorites interacted with the atmosphere, 2.7 billion years ago. The sand-sized grains hurtled toward Earth at up to 20 kilometers per second. For an atmosphere of similar thickness to today, the metal beads would melt at about 80 kilometers elevation, and the molten outer layer of iron would then oxidize when exposed to the atmosphere. A few seconds later the micrometeorites would harden again for the rest of their fall. The samples would then remain intact, especially when protected under layers of sedimentary limestone rock.

The previous paper interpreted the oxidation on the surface as a sign that the molten iron had encountered molecular oxygen. The new study uses modeling to ask whether carbon dioxide could have provided the oxygen to produce the same result. A computer simulation finds that an atmosphere containing anywhere from 6% to more than 70% carbon dioxide could have produced the effect seen in the samples.

"The amount of oxidation in the ancient micrometeorites suggests that the early atmosphere was very rich in carbon dioxide," said co-author David Catling, a UW professor of Earth and space sciences.

For comparison, carbon dioxide concentrations today are rising and are currently at about 415 parts per million, or 0.0415% of the atmosphere's composition.

High levels of carbon dioxide, a heat-trapping greenhouse gas, would counteract the sun's weaker output during the Archean era. Knowing the exact concentration of carbon dioxide in the atmosphere could help pinpoint air temperatures and the acidity of the oceans during that time.

More of the ancient micrometeorite samples could help narrow the range of possible carbon dioxide concentrations, the authors wrote. Grains that fell at other times could also help trace the history of Earth's atmosphere through time.

"Because these iron-rich micrometeorites can oxidize when they are exposed to carbon dioxide or oxygen, and given that these tiny grains presumably are preserved throughout Earth's history, they could provide a very interesting proxy for the history of atmospheric composition," Lehmer said.

Credit: 
University of Washington

Blue-emitting diode demonstrates limitations and promise of perovskite semiconductors

image: UC Berkeley chemists created a type of halide perovskite crystal that emits blue light, something that has been hard to achieve with this trendy new material. But the researchers discovered that these materials are inherently unstable, requiring careful control of temperature and chemical environment to maintain their precise color. This instability may have other applications, however.

Image: 
Peidong Yang, UC Berkeley

University of California, Berkeley, scientists have created a blue light-emitting diode (LED) from a trendy new semiconductor material, halide perovskite, overcoming a major barrier to employing these cheap, easy-to-make materials in electronic devices.

In the process, however, the researchers discovered a fundamental property of halide perovskites that may prove a barrier to their widespread use as solar cells and transistors.

Alternatively, this unique property may open up a whole new world for perovskites far beyond that of today's standard semiconductors.

In a paper appearing Jan. 24 in the journal Science Advances, UC Berkeley chemist Peidong Yang and his colleagues show that the crystal structure of the halide perovskites changes with temperature, humidity and the chemical environment, disrupting their optical and electronic properties. Without close control of the physical and chemical environment, perovskite devices are inherently unstable. This is not a major problem for traditional semiconductors.

"Some people may say this is a limitation. For me, this is a great opportunity," said Yang, the S. K. and Angela Chan Distinguished Chair in Energy in the College of Chemistry and director of the Kavli Energy NanoSciences Institute. "This is new physics: a new class of semiconductors that can be readily reconfigured, depending on what sort of environment you put them in. They could be a really good sensor, maybe a really good photoconductor, because they will be very sensitive in their response to light and chemicals."

Current semiconductors made of silicon or gallium nitride are very stable over a range of temperatures, primarily because their crystal structures are held together by strong covalent bonds. Halide perovskite crystals are held together by weaker ionic bonds, like those in a salt crystal. This means they're easier to make -- they can be evaporated out of a simple solution -- but also susceptible to humidity, heat and other environmental conditions.

"This paper is not just about showing off that we made this blue LED," said Yang, who is a senior faculty scientist at Lawrence Berkeley National Laboratory (Berkeley Lab) and a UC Berkeley professor of materials science and engineering. "We are also telling people that we really need to pay attention to the structural evolution of perovskites during the device operation, any time you drive these perovskites with an electrical current, whether it is an LED, a solar cell or a transistor. This is an intrinsic property of this new class of semiconductor and affects any potential optoelectronic device in the future using this class of material."

The blue diode blues

Making semiconductor diodes that emit blue light has always been a challenge, Yang said. The 2014 Nobel Prize for Physics was awarded for the breakthrough creation of efficient blue light-emitting diodes from gallium nitride. Diodes, which emit light when an electric current flows through them, are optoelectronic components in fiber optic circuits as well as general purpose LED lights.

Since halide perovskites first drew wide attention in 2009, when Japanese scientists discovered that they make highly efficient solar cells, these easily made, inexpensive crystals have excited researchers. So far, red- and green-emitting diodes have been demonstrated, but not blue. Halide perovskite blue-emitting diodes have been unstable -- that is, their color shifts to longer, redder wavelengths with use.

As Yang and his colleagues discovered, this is due to the unique nature of perovskites' crystal structure. Halide perovskites are composed of a metal, such as lead or tin, equal numbers of larger atoms, such as cesium, and three times the number of halide atoms, such as chlorine, bromine or iodine.

When these elements are mixed together in solution and then dried, the atoms assemble into a crystal, just as salt crystalizes from sea water. Using a new technique and the ingredients cesium, lead and bromine, the UC Berkeley and Berkeley Lab chemists created perovskite crystals that emit blue light and then bombarded them with X-rays at the Stanford Linear Accelerator Center (SLAC) to determine their crystalline structure at various temperatures. They found that, when heated from room temperature (about 300 Kelvin) to around 450 Kelvin, a common operating temperature for semiconductors, the crystal's squashed structure expanded and eventually sprang into a new orthorhombic or tetragonal configuration.

Since the light emitted by these crystals depends on the arrangement of and distances between atoms, the color changed with temperature, as well. A perovskite crystal that emitted blue light (450 nanometers wavelength) at 300 Kelvin suddenly emitted blue-green light at 450 Kelvin.

Yang attributes perovskites' flexible crystal structure to the weaker ionic bonds typical of halide atoms. Naturally occurring mineral perovskite incorporates oxygen instead of halides, producing a very stable mineral. Silicon-based and gallium nitride semiconductors are similarly stable because the atoms are linked by strong covalent bonds.

Making blue-emitting perovskites

According to Yang, blue-emitting perovskite diodes have been hard to create because the standard technique of growing the crystals as a thin film encourages formation of mixed crystal structures, each of which emits at a different wavelength. Electrons get funneled down to those crystals with the smallest bandgap -- that is, the smallest range of unallowed energies -- before emitting light, which tends to be red.

To avoid this, Yang's postdoctoral fellows and co-first authors -- Hong Chen, Jia Lin and Joohoon Kang -- grew single, layered crystals of perovskite and, adapting a low-tech method for creating graphene, used tape to peel off a single layer of uniform perovskite. When incorporated into a circuit and zapped with electricity, the perovskite glowed blue. The actual blue wavelength varied with the number of layers of octahedral perovskite crystals, which are separated from one another by a layer of organic molecules that allows easy separation of perovskite layers and also protects the surface.

Nevertheless, the SLAC experiments showed that the blue-emitting perovskites changed their emission colors with temperature. This property can have interesting applications, Yang said. Two years ago, he demonstrated a window made of halide perovskite that becomes dark in the sun and transparent when the sun goes down and also produces photovoltaic energy.

"We need to think in different ways of using this class of semiconductor," he said. "We should not put halide perovskites into the same application environment as a traditional covalent semiconductor, like silicon. We need to realize that this class of material has intrinsic structural properties that make it ready to reconfigure. We should utilize that."

Credit: 
University of California - Berkeley

'Jumping genes' help stabilize DNA folding patterns

"Jumping genes" -- bits of DNA that can move from one spot in the genome to another -- are well-known for increasing genetic diversity over the long course of evolution. Now, new research at Washington University School of Medicine in St. Louis indicates that such genes, also called transposable elements, play another, more surprising role: stabilizing the 3D folding patterns of the DNA molecule inside the cell's nucleus.

The study appears Jan. 24 in the journal Genome Biology.

The DNA molecule inside the nucleus of any human cell is more than six feet long. To fit into such a small space, it must fold into precise loops that also govern how genes are turned on or off. It might seem counterintuitive that bits of DNA that randomly move about the genome can provide stability to these folding patterns. Indeed, the discovery contradicts a long-held assumption that the precise order of letters in the DNA sequence always dictates the broader structure of the DNA molecule.

"In places where the larger 3D folding of the genome is the same between mice and humans, you expect the sequence of the letters of the DNA anchoring that shape to be conserved there as well," said senior author Ting Wang, PhD, the Sanford C. and Karen P. Loewentheil Distinguished Professor of Medicine. "But that's not what we found, at least not in the portions of the genome that in the past have been called 'junk DNA.'"

Studying DNA folding in mouse and human blood cells, the researchers found that in many regions where the folding patterns of DNA are conserved through evolution, the genetic sequence of the DNA letters establishing these folds is not. It is ever so slightly displaced. But this changing sequence, a genetic turnover, doesn't cause problems. Because the structure largely stays the same, the function presumably does, too, so nothing of importance changes.

"We were surprised to find that some young transposable elements serve to maintain old structures," said first author Mayank N.K. Choudhary, a doctoral student in Wang's lab. "The specific sequence may be different, but the function stays the same. And we see that this has happened multiple times over the past 80 million years, when the common ancestors of mice and humans first diverged from one another."

The fact that a new transposable element can insert itself and serve the same role as an existing anchor creates a redundancy in the regulatory portions of the genome -- regions of the DNA molecule that determine how and when genes are turned on or off.

According to the researchers, this redundancy makes the genome more resilient. In providing both novelty and stability, jumping genes may help the mammalian genome strike a vital balance -- allowing animals the flexibility to adapt to a changing climate, for example, while preserving biological functions required for life, protecting against the DNA damage that is wrought by living and reproducing on Earth over the span of deep time, measured in tens to hundreds of millions of years.

Even so, the researchers were careful to distinguish between portions of the genome that hold genes responsible for producing proteins and the rest of the genome. In genes that code for proteins, the genetic sequence and the structure are both conserved, and this study does not contradict that. However, the new research suggests that jumping genes in the non-protein coding areas of the genome follow different rules of conservation than the protein-coding genes.

"Our study changes how we interpret genetic variation in the noncoding regions of the DNA," Wang said. "For example, large surveys of genomes from many people have identified a lot of variations in noncoding regions that don't seem to have any effect on gene regulation, which has been puzzling. But it makes more sense in light of our new understanding of transposable elements -- while the local sequence can change, but the function stays the same.

"We may need to revisit these types of studies in light of the new understanding we now have of transposable elements," he added. "We have uncovered another layer of complexity in the genome sequence that was not known before."

Credit: 
Washington University School of Medicine