Tech

Rare antelopes and black cats

image: Three examples of the animal species filmed at Kilimanjaro (from left): an Abbott's duiker, a blue monkey and a black serval.

Image: 
(Pictures: Department of Zoology III / University of Würzburg)

Tanzania is home to a very elusive antelope species that cannot be found anywhere else in the world. According to the Red List, it is classified as endangered. The first photograph of one of these antelopes was taken by researchers as recently as 2003. So far, the distribution of this species on Mt. Kilimanjaro has not been documented. Its scientific name: Abbott's duiker (Cephalophus spadix).

However, now there are numerous videos showing this antelope. The film sequences were taken by a research group of Julius-Maximilians-Universität (JMU) Würzburg in Bavaria, Germany. The group has been doing research on biodiversity at Kilimanjaro for years. Current research has focused, among other things, on the question of how the biodiversity of larger mammals is influenced by climate change and human activities.

"We recorded the Abbott's Duiker with our video traps at eleven locations at altitudes between 1920 and 3849 meters for a total of 105 times," says doctoral student Friederike Gebert from the JMU Biocenter. "There's even a video of a mating attempt," says the scientist. And that's not the only special feature that has now been captured on film.

Tens of thousands of video sequences evaluated

The team led by JMU Professor Ingolf Steffan-Dewenter and Dr. Marcell Peters installed five camera traps on each of 66 study plots at Kilimanjaro - from the savannah in the lowlands, to the forest regions at medium altitudes, to the bush landscape at altitudes of up to 4,550 meters. The cameras remained on site for two weeks, after which they were collected and their recordings evaluated.

Friederike Gebert had 80,000 film snippets to go through, of which 1,600 actually showed mammals. Among them were a total of 33 wild mammal species - in addition to the Abbott's duiker, species like bush pig and porcupine, lesser kudu and yellow baboon were documented. Serval cats were also recorded. These yellow-and-black-patterned predators are about the size of lynxes, but more delicately built. The videos also show a very special kind of serval: an animal whose coat is completely black.

Protected areas are important for biodiversity

The results of the JMU research group will be published in the Journal of Animal Ecology. "All in all, we were able to show that the species richness of large mammals is greatest at mid-elevations of the mountain, i.e. in the forest regions," says Friederike Gebert. The more plant biomass and potential prey there are, the greater the biodiversity.

"In the case of large mammals, biodiversity is particularly high in nature reserves, while it falls by 53 percent in unprotected areas - even though many of the unprotected areas still have natural vegetation," says Professor Steffan-Dewenter. "Our study thus underscores the importance of protected areas for maintaining species diversity of large mammals in tropical mountain regions. To preserve the existing protected areas at Kilimanjaro and to designate further ones is a very desirable goal from the scientific point of view.

Cooperation partners and sponsors

This publication is the result of a cooperation between the University of Würzburg, the Nelson Mandela African Institution of Science and Technology (Arusha, Tanzania), and the College of African Wildlife Management (Mweka, Tanzania).

Credit: 
University of Würzburg

#MeToo media coverage sympathetic to but not necessarily empowering for women

PITTSBURGH--The #MeToo movement has encouraged women to share their personal stories of sexual harassment. While the movement amplifies previously unheard voices, a Carnegie Mellon University analysis of #MeToo media coverage shows accusers are often portrayed as sympathetic, but with less power and agency than their alleged perpetrators.

"The goal of the movement is to empower women, but according to our computational analysis that's not what's happening in news stories," said Yulia Tsvetkov, assistant professor in the School of Computer Science's Language Technologies Institute.

Tsvetkov's research team used natural language processing (NLP) techniques to analyze online media coverage of #MeToo narratives, a corpus of 27,602 articles from 1,576 outlets. In a paper published earlier this year, they also looked at how different media outlets portrayed perpetrators and considered the role of third-party actors in news stories.

"Bias can be unconscious, veiled and hidden in a seemingly positive narrative," Tsvetkov said. "Such subtle forms of biased language can be much harder to detect and to date we have no systematic way of identifying them automatically. The goal of our research was to provide tools to analyze such biased framing."

Their work draws insights from social psychology research, and looks at the dynamics of power, agency and sentiment, which is a measurement of sympathy. The researchers analyzed verbs to understand their meaning, and put them into context to discern their connotation. Take, for instance, the verb "deserves." In the sentence "The boy deserves praise," the verb takes on a very different meaning than in the context of "The boy deserves to be punished."

"We were inspired by previous work that looked at the meaning of verbs in individual sentences," Tsvetkov said. "Our analysis incorporates context." This method allowed her team to consider much longer chunks of text, and to analyze narrative.

The research team developed ways to generate scores for words in context, and mapped out the power, sentiment, and agency of each actor within a news story. Their results show that the media consistently presents men as powerful, even after sexual harassment allegations. Tsvetkov said this threatens to undermine the goals of the #MeToo movement, which is often characterized as "empowerment through empathy."
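For readers who want a concrete sense of how such scoring can work, here is a minimal, illustrative sketch in Python. It is not the CMU team's pipeline: the verb lexicon, its scores, and the subject-verb-object triples are hypothetical placeholders, and a real system would extract triples with a parser and use contextual models rather than a fixed dictionary.

```python
# Minimal sketch of lexicon-based power/agency/sentiment scoring for entities
# mentioned in news text. The verb scores and triples below are illustrative
# placeholders, not the lexicon or parser used by the CMU team.
from collections import defaultdict

# Hypothetical connotation-frame-style lexicon: for each verb, scores in [-1, 1]
# for the power and agency it implies for its subject, and the sentiment it
# expresses toward its object.
VERB_LEXICON = {
    "accused":  {"subj_power": +0.4, "subj_agency": +0.8, "obj_sentiment": -0.3},
    "denied":   {"subj_power": +0.6, "subj_agency": +0.7, "obj_sentiment": 0.0},
    "suffered": {"subj_power": -0.7, "subj_agency": -0.5, "obj_sentiment": 0.0},
    "praised":  {"subj_power": +0.3, "subj_agency": +0.6, "obj_sentiment": +0.8},
}

def score_entities(triples):
    """Aggregate per-entity scores from (subject, verb, object) triples."""
    totals = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(int)
    for subj, verb, obj in triples:
        frame = VERB_LEXICON.get(verb)
        if frame is None:
            continue  # unknown verb: a real system would back off to embeddings
        totals[subj]["power"] += frame["subj_power"]
        totals[subj]["agency"] += frame["subj_agency"]
        totals[obj]["sentiment"] += frame["obj_sentiment"]
        counts[subj] += 1
        counts[obj] += 1
    # Normalize by mention count so frequently mentioned actors stay comparable.
    return {e: {k: v / counts[e] for k, v in d.items()} for e, d in totals.items()}

if __name__ == "__main__":
    # Toy "article" reduced to subject-verb-object triples; a real pipeline
    # would extract these automatically from the news text.
    triples = [
        ("accuser", "accused", "executive"),
        ("executive", "denied", "allegations"),
        ("accuser", "suffered", "retaliation"),
        ("journalist", "praised", "accuser"),
    ]
    for entity, scores in score_entities(triples).items():
        print(entity, {k: round(v, 2) for k, v in scores.items()})
```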

The team's analysis also showed that the people portrayed with the most positive sentiment in #MeToo stories were those not directly involved with allegations, like activists, journalists, or celebrities commenting on the movement, such as Oprah Winfrey.

A supplementary paper extending the analysis was presented with graduate student Anjalie Field in Florence, Italy, last month at the Association for Computational Linguistics conference.

This paper proposes different methods for measuring power, agency and sentiment, and analyzes the portrayals of characters in movie plots, as well as prominent members of society in general newspaper articles.

One of the consistent trends detected in both papers is that women are portrayed as less powerful than men. This was evident in an analysis of the 2016 Forbes list of most powerful people. In news stories from myriad outlets about women and men who ranked similarly, men were consistently described as being more powerful.

"These methodologies can extend beyond just people," Tsvetkov said. "You could look at narratives around countries, if they are described as powerful and sympathetic, or unfriendly, and compare that with reactions on social media to understand the language of manipulation, and how people actually express their personal opinions as a consequence of different narratives."

Tsvetkov said she hopes this work will raise awareness of the importance of media framing. "Journalists can choose which narratives to highlight in order to promote certain portrayals of people," she said. "They can encourage or undermine movements like #MeToo. We also hope that the tools we developed will be useful to social and political scientists, to analyze narratives about people and abstract entities such as markets and countries, and to improve our understanding of the media landscape by analyzing large volumes of texts."

Credit: 
Carnegie Mellon University

Scientists make first observation of fish schooling using bioluminescent flashes

video: This is a view of schooling flashlight fish; when diver-held artificial lights were used to illuminate the school, the fish quickly scattered.

Image: 
© J. Sparks

A new study is the first to demonstrate that schooling in fishes can be facilitated by bioluminescent flashes in the absence of ambient light. Led by researchers at the American Museum of Natural History, the research raises the possibility that fish schooling may occur in the deep sea, where it was previously assumed to be too dark for fish to coordinate their movements with each other. The study is published today in the journal PLOS ONE.

"Being in the middle of one of these bioluminescing schools was one of the most magical things I've ever experienced as a marine biologist," said lead author David Gruber, a professor of biology at Baruch College and a research associate at the Museum. Gruber was part of the team that serendipitously came across a school of thousands of flashlight fish (Anomalops katoptron) while scuba diving at night off a remote island in the Solomon Islands. "It was like a moment from the film Avatar as we watched rivers of bioluminescent flashes, like a blue-brick road, descend down the reef."

It is estimated that more than 25 percent of the world's fish species school, a collective behavior that is thought to lower the risk of predation, as well as give the fish greater access to food and mates. But schooling fishes rely on their ability to see one another, an idea that is supported by observations of fish schools dispersing at depths with critically low light levels. So when the team of researchers--including John Sparks, a curator in the Museum's Department of Ichthyology--came across schooling flashlight fish in the pitch dark, it was quite the surprise.

"It was simply astonishing--we observed thousands of flashlight fish using synchronized bioluminescent flashes to coordinate their movement and facilitate schooling behavior in complete darkness, a behavior and function that had not previously been reported for bioluminescence," said Sparks, who is senior author on the paper.

Bioluminescence is visible light generated by living things through a chemical reaction. Flashlight fish, which typically hide in reef crevices and caves during the day and only venture out on moonless nights, have pockets under their eyes that are filled with bioluminescent bacteria, manipulated by an organ that allows them to "flash" with different patterns.

The researchers collected footage of the Solomon Islands school--the largest recorded aggregation of bioluminescent flashlight fish, comprising thousands of individuals--as part of their 2013 trip, and returned to the remote, uninhabited volcanic island in 2016 and 2019 to gather more data. To capture footage of the fish's nighttime behavior in their natural environment, observations had to be made without artificial light. Both scuba divers and a three-person submarine used a suite of low-light cameras, including a custom-built underwater high-speed, high-resolution scientific complementary metal-oxide-semiconductor (sCMOS) camera.

To capture nighttime imagery of the flashlight fish schooling, Brennan Phillips, an assistant professor of ocean engineering at the University of Rhode Island and a coauthor of the study, modified a deep-sea low-light camera system so that it could be utilized by a scuba diver. "Everything had to come together to get these data--the new camera system, the weather, the divers, and, of course, the fish. It was one of the hardest challenges I have faced as a marine roboticist."

Through a combined analysis of field video recordings and modeling, the researchers demonstrate that the flashlight fish are indeed schooling--synchronizing their behavior and swimming in an oriented, polarized manner relative to one another--and not just "shoaling," a term that describes a loosely organized group of fish.

"Our finding reveals a completely novel function for bioluminescence in the ocean, and shows that fishes are able to school using only the natural light they emit, without the need to rely on ambient light," Sparks said.

Credit: 
American Museum of Natural History

Early exposure to manganese could affect teens' cognitive ability and motor control

(New York, NY - August 14, 2019) -- Early-life exposure to the mineral manganese disrupts the way different areas of the brain involved in cognitive ability and motor control connect in teenagers, Mount Sinai researchers report in a study published in PLOS ONE in August.

This study is the first to link evidence of metal exposure found in baby teeth to measures of brain connectivity. Researchers found links between early-life manganese exposure and altered functional connectivity of brain areas that support cognitive and motor control, potentially leading to low IQ, attention disorders, and hyperactivity.

"These findings could inform prevention and intervention efforts to reduce these poor outcomes in adolescents exposed to high levels of manganese," said Erik de Water, the first author and a postdoctoral fellow in the Department of Environmental Medicine and Public Health at the Icahn School of Medicine at Mount Sinai.

People can be exposed to manganese via air pollution, diet, drinking water, pesticides, and secondhand smoke. Researchers measured manganese concentrations in baby teeth to determine exposure during pregnancy, the first year of life, and early childhood.

They used functional magnetic resonance imaging (fMRI) scans to measure intrinsic functional connectivity of the brain in adolescents. Higher manganese concentrations in the first year of life were associated with increased intrinsic functional connectivity within cognitive control brain areas, but decreased connectivity between motor areas in adolescents.

Credit: 
The Mount Sinai Hospital / Mount Sinai School of Medicine

AI used to test evolution's oldest mathematical model

image: Butterfly co-mimic pairs from the species Heliconius erato (odd columns) and Heliconius melpomene (even columns). Illustrated butterflies are sorted by greatest similarity (along rows, top left to bottom right) using machine learning methods which enable new tests and discoveries in evolutionary theory.

Image: 
J Hoyal Cuthill

Researchers have used artificial intelligence to make new discoveries, and confirm old ones, about one of nature's best-known mimics, opening up whole new directions of research in evolutionary biology.

The researchers, from the University of Cambridge, the University of Essex, the Tokyo Institute of Technology and the Natural History Museum London, used their machine learning algorithm to test whether butterfly species can co-evolve similar wing patterns for mutual benefit. This phenomenon, known as Müllerian mimicry, is considered evolutionary biology's oldest mathematical model and was put forward less than two decades after Darwin's theory of evolution by natural selection.

The algorithm was trained to quantify variation between different subspecies of Heliconius butterflies, from subtle differences in the size, shape, number, position and colour of wing pattern features, to broad differences in major pattern groups.

This is the first fully automated, objective method to successfully measure overall visual similarity, which by extension can be used to test how species use wing pattern evolution as a means of protection. The results are reported in the journal Science Advances.

The researchers found that different butterfly species act both as model and as mimic, 'borrowing' features from each other and even generating new patterns.

"We can now apply AI in new fields to make discoveries which simply weren't possible before," said lead author Dr Jennifer Hoyal Cuthill from Cambridge's Department of Earth Sciences. "We wanted to test Müller's theory in the real world: did these species converge on each other's wing patterns and if so how much? We haven't been able to test mimicry across this evolutionary system before because of the difficulty in quantifying how similar two butterflies are."

Müllerian mimicry theory is named after German naturalist Fritz Müller, who first proposed the concept in 1878, less than two decades after Charles Darwin published On the Origin of Species in 1859. Müller's theory proposed that species mimic each other for mutual benefit. This is also an important case study for the phenomenon of evolutionary convergence, in which the same features evolve again and again in different species.

For example, Müller's theory predicts that two equally bad-tasting or toxic butterfly populations in the same location will come to resemble each other because both will benefit by 'sharing' the loss of some individuals to predators learning how bad they taste. This provides protection through cooperation and mutualism. It contrasts with Batesian mimicry, which proposes that harmless species mimic harmful ones to protect themselves.

Heliconius butterflies are well-known mimics, and are considered a classic example of Müllerian mimicry. They are widespread across tropical and sub-tropical areas in the Americas. There are more than 30 different recognisable pattern types within the two species that the study focused on, and each pattern type contains a pair of mimic subspecies.

However, since previous studies of wing patterns had to be done manually, it hadn't been possible to do large-scale or in-depth analysis of how these butterflies are mimicking each other.

"Machine learning is allowing us to enter a new phenomic age, in which we are able to analyse biological phenotypes - what species actually look like - at a scale comparable to genomic data," said Hoyal Cuthill, who also holds positions at the Tokyo Institute of Technology and University of Essex.

The researchers used more than 2,400 photographs of Heliconius butterflies from the collections of the Natural History Museum, representing 38 subspecies, to train their algorithm, called 'ButterflyNet'.

ButterflyNet was trained to classify the photographs, first by subspecies, and then to quantify similarity between the various wing patterns and colours. It plotted the different images in a multidimensional space, with more similar butterflies closer together and less similar butterflies further apart.
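As an illustration of this embed-and-rank idea, the sketch below reduces each image to a feature vector and sorts pairs by cosine similarity. It is a deliberately simplified stand-in, not ButterflyNet itself: it uses colour histograms of randomly generated placeholder images instead of a deep network trained on the museum photographs.

```python
# Simplified stand-in for the embed-and-compare step: each wing image is
# reduced to a feature vector and butterfly pairs are ranked by similarity in
# that space. ButterflyNet learns its embedding with a deep network; plain
# colour histograms are used here purely to illustrate the idea.
import numpy as np

def colour_histogram(image, bins=8):
    """Embed an HxWx3 uint8 image as a unit-normalized per-channel histogram."""
    feats = []
    for channel in range(3):
        hist, _ = np.histogram(image[..., channel], bins=bins, range=(0, 256))
        feats.append(hist)
    vec = np.concatenate(feats).astype(float)
    return vec / (np.linalg.norm(vec) + 1e-9)

def cosine_similarity(a, b):
    return float(a @ b)  # vectors are already unit-normalized

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Placeholder "photographs" standing in for images of different subspecies.
    images = {name: rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
              for name in ["erato_a", "erato_b", "melpomene_a", "melpomene_b"]}
    embeddings = {name: colour_histogram(img) for name, img in images.items()}

    # Rank every pair by similarity; in the study, co-mimic pairs from the two
    # species should sit unusually close together in the learned space.
    names = list(embeddings)
    pairs = [(a, b, cosine_similarity(embeddings[a], embeddings[b]))
             for i, a in enumerate(names) for b in names[i + 1:]]
    for a, b, sim in sorted(pairs, key=lambda p: -p[2]):
        print(f"{a:12s} vs {b:12s}  similarity = {sim:.3f}")
```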

"We found that these butterfly species borrow from each other, which validates Müller's hypothesis of mutual co-evolution," said Hoyal Cuthill. "In fact, the convergence is so strong that mimics from different species are more similar than members of the same species."

The researchers also found that Müllerian mimicry can generate entirely new patterns by combining features from different lineages.

"Intuitively, you would expect that there would be fewer wing patterns where species are mimicking each other, but we see exactly the opposite, which has been an evolutionary mystery," said Hoyal Cuthill. "Our analysis has shown that mutual co-evolution can actually increase the diversity of patterns that we see, explaining how evolutionary convergence can create new pattern feature combinations and add to biological diversity.

"By harnessing AI, we discovered a new mechanism by which mimicry can produce evolutionary novelty. Counterintuitively, mimicry itself can generate new patterns through the exchange of features between species which mimic each other. Thanks to AI, we are now able to quantify the remarkable diversity of life to make new scientific discoveries like this: it might open up whole new avenues of research in the natural world."

Credit: 
University of Cambridge

NIH's All of Us Research Program recaps progress and next steps

WHAT:

The All of Us Research Program at the National Institutes of Health has made strong progress in its efforts to advance precision medicine, according to program leadership in a forthcoming paper in the New England Journal of Medicine.

With information provided by volunteers across the United States, All of Us is developing a robust data platform to support a wide range of health studies. The program aims to include data from 1 million or more people from diverse communities. As of July 2019, more than 230,000 people have enrolled, including 175,000 participants who have completed the core protocol. Of those, 80% are from groups that have been historically underrepresented in biomedical research. Participants contribute information in a variety of ways, including surveys; electronic health records (EHR); physical measurements; blood, urine, and saliva samples; and Fitbit devices. In the future, the program will add new surveys and linkages to other data sets and digital health technologies, and begin genotyping and whole-genome sequencing participants' biological samples. Data will be broadly accessible to approved researchers, and participants will receive information back about themselves.

In May 2019, with enrollment ongoing, the program released initial summary data at https://www.researchallofus.org/. Now, the All of Us team is planning demonstration projects to assess the usefulness and validity of the data set, in preparation for the launch of the Researcher Workbench--the secure platform where researchers will be able to conduct analyses.

The program's ongoing success will rely on several factors, according to the authors. The program must continue to enroll participants from across the country, including those in rural and other underserved areas. The program needs to ensure that participants, once enrolled, derive value, remain engaged, and retain trust in the program such that they continue to share data long term. Additionally, the program must continue to protect against cyberattacks, safeguard participant privacy, and harmonize data from different EHR systems. Work is underway on all these fronts.

The authors anticipate that the program's value will become even richer as it matures, enabling new discoveries over time. A goal of the study is to improve population health through the identification of risk factors and biomarkers (including environmental exposures, habits, and social determinants) to allow more efficient and accurate diagnosis and screening, better understanding of diverse populations, more rational use of existing therapeutics, and the development of new treatments.

Credit: 
NIH/All of Us Research Program

Hospital ratings systems get low grades from experts

CHICAGO --- Experts have turned the tables on hospital rating systems and graded the rating systems on their strengths and weaknesses. Most only got "C's" and "D's." The highest grade received was a "B" by U.S. News & World Report, according to a new study.

The study, "Rating the Raters," will be published Aug. 14 in the New England Journal of Medicine Catalyst. It grades the four major publicly reported hospital quality rating systems in the U.S.

"Current hospital quality rating systems often offer conflicting results - a hospital might rate best on one rating system and worst on another," said lead author Dr. Karl Bilimoria, director of the Northwestern Medicine Surgical Outcomes and Quality Improvement Center."We wanted to provide information on how to interpret these contradictory ratings, so people can better select the best hospital for their needs."

The Centers for Medicare & Medicaid Services' Star Ratings received a "C." The lowest grades were for Leapfrog, "C-," and Healthgrades, "D+."

Until now, there has been little guidance on how to compare these rating systems.

"It's been confusing for patients who are trying to make sense of these ratings," said Bilimoria, also the John B. Murphy Professor of Surgery at Northwestern University Feinberg School of Medicine."How are patients supposed to know which rating systems are good or bad? This study gives them information from a group of quality measurement experts to figure out which rating system is the best."

To the investigators' knowledge, this is the first systematic evaluation of current rating systems that could help inform patients, clinicians, policymakers and hospitals themselves of the various rating systems' methodologies, strengths and weaknesses.

Hospital rating systems are proliferating, which makes it essential to accurately evaluate them, Bilimoria said. "There are all these competing rating systems. More and more are coming out," he said. "The public needs some way to know which ones are valid and reliable; otherwise, it will be pure chaos when you are trying to figure out which is the best hospital for you."

There are about 1,000 so-called "top 100" hospitals in the country, depending on which rating system is used. "There is so little overlap," Bilimoria said. "That means there is conflicting information. Different rating systems are identifying different hospitals as being the best."

"A lot of the so-called 'top hospitals' identified by some rating systems are not places that most physicians would refer their patients to," he added. "Because of the flaws in the approach used to rank hospitals, the ratings do not provide an accurate picture. Some hospitals that got top ratings from some of these rating systems just don't make sense based on what we know about them."

The investigators identified several limitations of each individual rating system and of the field as a whole. Each system had weaknesses that could lead to misclassification of hospital performance, ranging from the inclusion of flawed measures and the use of unvalidated proprietary data to questionable methodological decisions.

"We found that many of the current hospital quality rating systems should be used cautiously as they likely often misclassify hospital performance and mislead the public," Bilimoria said.

The diverse group of six evaluators includes established physician scientists with methodological expertise in health care quality measurement from both academic centers and the private sector. Given their experience in this field, all of the evaluators currently or previously have had some relationship with one or more of the rating systems. Thus, each evaluator was required to disclose the nature, timing and financial arrangement of any current or prior relationships.

Credit: 
Northwestern University

Researchers use blockchain to drive electric-vehicle infrastructure

Researchers at the University of Waterloo have integrated the use of blockchain into energy systems, a development that could result in expanded charging infrastructure for electric vehicles.

In a study that outlines the new blockchain-oriented charging system, the researchers found that there is a lack of trust among charging service providers, property owners and owners of electric vehicles (EVs).

With an open blockchain platform, all parties will have access to the data and can see if it has been tampered with. Using a blockchain-oriented charging system will, therefore, allow EV owners to see if they are being overcharged while property owners will know if they are being underpaid.

"Energy services are increasingly being provided by entities that do not have well-established trust relationships with their customers and partners," said Christian Gorenflo, a PhD candidate in Waterloo's David R. Cheriton School of Computer Science. "In this context, blockchains are a promising approach for replacing a central trusted party, for example, making it possible to implement direct peer-to-peer energy trading."

In undertaking the study, Gorenflo, his supervisor, professor Srinivasan Keshav of the Cheriton School of Computer Science, and Lukasz Golab, professor of Management Science, collaborated with an EV-charging service provider. The provider works with property owners to install EV supply equipment that is used by EV owners for a fee. The revenue stream from these charging stations is then shared between the charging service provider and each property owner. The EV supply equipment is operated by the charging service provider, so the property owners must trust the provider to compensate them fairly for the electricity used.

From the case study, the researchers were able to identify three steps necessary for the incorporation of blockchain technology into an energy system. The first is to identify the involved parties and their trust relations. If the level of trust in a relation is insufficient to achieve the application's goal or if it restricts an action necessary to reach that goal, this should be recorded as a trust issue.

Secondly, design a minimal blockchain system, including smart contracts, that resolves the trust issues identified in the first step. If parts of a legacy system need to be replaced, the new system should closely mimic existing interfaces so that dependencies can continue to work with minimal modifications.

Finally, with the trust-mitigating blockchain in place, the rest of the system can be migrated iteratively over time. This allows the business model to eventually grow from a legacy/blockchain hybrid into a truly decentralized solution.
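To make the trust argument concrete, here is a toy sketch of a hash-chained ledger of charging sessions in Python. It is only an illustration of the idea, not the researchers' system: the revenue split, the session fields, and the single-process "chain" are hypothetical, and a real deployment would run on an actual blockchain platform with smart contracts shared among the provider, property owners, and EV owners.

```python
# Toy illustration of the trust idea only: a hash-chained ledger of charging
# sessions that any party (provider, property owner, EV owner) can replay and
# verify. A production system would use an actual blockchain platform with
# smart contracts rather than this single-process sketch.
import hashlib
import json

OWNER_SHARE = 0.40  # hypothetical revenue split for the property owner

def block_hash(block):
    """Deterministic hash over the block contents (excluding its own hash)."""
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def append_session(chain, session):
    """Append a charging session, linking it to the previous block's hash."""
    block = {"prev": chain[-1]["hash"] if chain else "genesis", **session}
    block["hash"] = block_hash(block)
    chain.append(block)
    return block

def verify(chain):
    """Recompute every hash and link; any tampering breaks the chain."""
    prev = "genesis"
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(block):
            return False
        prev = block["hash"]
    return True

if __name__ == "__main__":
    chain = []
    append_session(chain, {"station": "lot-12", "kwh": 18.5, "price_per_kwh": 0.30})
    append_session(chain, {"station": "lot-12", "kwh": 7.2, "price_per_kwh": 0.30})

    revenue = sum(b["kwh"] * b["price_per_kwh"] for b in chain)
    print(f"chain valid: {verify(chain)}")
    print(f"total billed: ${revenue:.2f}, owner share: ${revenue * OWNER_SHARE:.2f}")

    # If any party quietly edits a past session, everyone else can detect it.
    chain[0]["kwh"] = 1.0
    print(f"after tampering, chain valid: {verify(chain)}")
```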

"Mitigating trust issues in EV charging could result in people who have charging stations and even those who just have an outdoor outlet being much more willing to team up with an EV charging service provider resulting in much better coverage of charging stations," said Gorenflo.

"In the end, we could even have a system where there is machine-to-machine communication rather than people-to-machine. If an autonomous vehicle needs power, it could detect that and drive to the nearest charging station and communicate on a platform with that charging station for the power."

Credit: 
University of Waterloo

Atomic 'Trojan horse' could inspire new generation of X-ray lasers and particle colliders

image: This is an illustration, based on simulations, of the Trojan horse technique for the production of high-energy electron beams. A laser beam (red, at left) strips electrons (blue dots) off of helium atoms. Some of the freed electrons (red dots) get accelerated inside a plasma bubble (white elliptical shape) created by an electron beam (green).

Image: 
Thomas Heinemann/University of Strathclyde

How do researchers explore nature on its most fundamental level? They build "supermicroscopes" that can resolve atomic and subatomic details. This won't work with visible light, but they can probe the tiniest dimensions of matter with beams of electrons, either by using them directly in particle colliders or by converting their energy into bright X-rays in X-ray lasers. At the heart of such scientific discovery machines are particle accelerators that first generate electrons at a source and then boost their energy in a series of accelerator cavities.

Now, an international team of researchers, including scientists from the Department of Energy's SLAC National Accelerator Laboratory, has demonstrated a potentially much brighter electron source based on plasma that could be used in more compact, more powerful particle accelerators.

The method, in which the electrons for the beam are released from neutral atoms inside the plasma, is referred to as the Trojan horse technique because it's reminiscent of the way the ancient Greeks are said to have invaded the city of Troy by hiding their soldiers (here, the electrons) inside a wooden horse (the neutral atoms), which was then pulled into the city (the plasma).

"Our experiment shows for the first time that the Trojan horse method actually works," says Bernhard Hidding from the University of Strathclyde in Glasgow, Scotland, the principal investigator of a study published today in Nature Physics. "It's one of the most promising methods for future electron sources and could push the boundaries of today's technology."

Replacing metal with plasma

In current state-of-the-art accelerators, electrons are generated by shining laser light onto a metallic photocathode, which kicks electrons out of the metal. These electrons are then accelerated inside metal cavities, where they draw more and more energy from a radiofrequency field, resulting in a high-energy electron beam. In X-ray lasers, such as SLAC's Linac Coherent Light Source (LCLS), the beam drives the production of extremely bright X-ray light.

But metal cavities can only support a limited energy gain over a given distance, or acceleration gradient, before breaking down, and therefore accelerators for high-energy beams become very large and expensive. In recent years, scientists at SLAC and elsewhere have looked into ways to make accelerators more compact. They demonstrated, for example, that they can replace metal cavities with plasma that allows much higher acceleration gradients, potentially shrinking the length of future accelerators 100 to 1,000 times.

The new paper expands the plasma concept to the electron source of an accelerator.

"We've previously shown that plasma acceleration can be extremely powerful and efficient, but we haven't been able yet to produce beams with high enough quality for future applications," says co-author Mark Hogan from SLAC. "Improving beam quality is a top priority for the next years, and developing new types of electron sources is an important part of that."

According to previous calculations by Hidding and colleagues, the Trojan horse technique could make electron beams 100 to 10,000 times brighter than today's most powerful beams. Brighter electron beams would also make future X-ray lasers brighter and further enhance their scientific capabilities.

"If we're able to marry the two major thrusts - high acceleration gradients in plasma and beam creation in plasma - we could be able to build X-ray lasers that unfold the same power over a distance of a few meters rather than kilometers," says co-author James Rosenzweig, the principal investigator for the Trojan horse project at the University of California, Los Angeles.

Producing superior electron beams

The researchers carried out their experiment at SLAC's Facility for Advanced Accelerator Experimental Tests (FACET). The facility, which is currently undergoing a major upgrade, generates pulses of highly energetic electrons for research on next-generation accelerator technologies, including plasma acceleration.

First, the team flashed laser light into a mixture of hydrogen and helium gas. The light had just enough energy to strip electrons off hydrogen, turning neutral hydrogen into plasma. It wasn't energetic enough to do the same with helium, though, whose electrons are more tightly bound than those for hydrogen, so it stayed neutral inside the plasma.

Then, the scientists sent one of FACET's electron bunches through the plasma, where it produced a plasma wake, much like a motorboat creates a wake when it glides through the water. Trailing electrons can "surf" the wake and gain tremendous amounts of energy.

In this study, the trailing electrons came from within the plasma. Just when the electron bunch and its wake passed by, the researchers zapped the helium in the plasma with a second, tightly focused laser flash. This time the light pulse had enough energy to kick electrons out of the helium atoms, and the electrons were then accelerated in the wake.

The synchronization between the electron bunch, rushing through the plasma at nearly the speed of light, and the laser flash, lasting merely a few millionths of a billionth of a second, was particularly important and challenging, says UCLA's Aihua Deng, one of the study's lead authors: "If the flash comes too early, the electrons it produces will disturb the formation of the plasma wake. If it comes too late, the plasma wake has moved on and the electrons won't get accelerated."

The researchers estimate that the brightness of the electron beam obtained with the Trojan horse method can already compete with the brightness of existing state-of-the-art electron sources.

"What makes our technique transformative is the way the electrons are produced," says Oliver Karger, the other lead author, who was at the University of Hamburg, Germany, at the time of the study. When the electrons are stripped off the helium, they get rapidly accelerated in the forward direction, which keeps the beam narrowly bundled and is a prerequisite for brighter beams.

More R&D work ahead

But before applications like compact X-ray lasers could become a reality, much more research needs to be done.

Next, the researchers want to improve the quality and stability of their beam and work on better diagnostics that will allow them to measure the actual beam brightness, instead of estimating it.

These developments will be done once the FACET upgrade, FACET-II, is completed. "The experiment relies on the ability to use a strong electron beam to produce the plasma wake," says Vitaly Yakimenko, director of SLAC's FACET Division. "FACET-II will be the only place in the world that will produce such beams with high enough intensity and energy."

Credit: 
DOE/SLAC National Accelerator Laboratory

The largest impact crater in the US, buried for 35 million years

image: The location of the crater in Chesapeake Bay. It is now completely covered by younger sediments, but was discovered in the early 1990s by marine geophysical surveys and subsequent drilling. It is the largest known impact crater in the U.S. and the 15th-largest on Earth.

Image: 
Powars et al. 2015, Christoph Kersten / GEOMAR

About 35 million years ago, an asteroid hit the ocean off the East Coast of North America. Its impact formed a 25-mile diameter crater that now lies buried beneath the Chesapeake Bay, an estuary in Virginia and Maryland. From this impact, the nearby area experienced fires, earthquakes, falling molten glass droplets, an air blast and a devastating tsunami.

While the resulting "Chesapeake Bay impact crater" is now completely buried, it was discovered in the early 1990s by scientific drilling. It now ranks as the largest known impact crater in the U.S., and the 15th largest on Earth.

When the asteroid hit, it also produced an impact ejecta layer, which includes tektites (natural glass formed from debris during meteorite impacts) and shocked zircon crystals which were thrown out of the impact area. Scientists refer to this layer as the "North American tektite strewn field," which covers a region of roughly 4 million square miles, about 10 times the size of Texas. Some ejecta landed on land while the rest immediately cooled on contact with seawater and then sank to the ocean floor.

A team of researchers, including Arizona State University School of Earth and Space Exploration scientist and lead author Marc Biren, along with co-authors Jo-Anne Wartho, Matthijs Van Soest and Kip Hodges, has obtained drilling samples from Ocean Drilling Program Site 1073 and dated them with the "uranium-thorium-helium technique" for the first time.

Their research was recently published in the international journal Meteoritics & Planetary Science.

"Determining accurate and precise ages of impact events is vital in our understanding of the Earth's history," Biren said. "In recent years, for example, the scientific community has realized the importance of impact events on Earth's geological and biological history, including the 65 million years old dinosaur mass extinction event that is linked to the large Chicxulub impact crater."

The team studied zircon crystals in particular because they preserve evidence of shock metamorphism, which is caused by shock pressures and high temperatures associated with impact events. The dated crystals were tiny, about the thickness of a human hair.

"Key to our investigation were zircon -- or to be more precise: zirconium silicate -- crystals that we found in the oceanic sediments of a borehole, which is located almost 400 kilometers (250 miles) northeast of the impact site, in the Atlantic Ocean," says co-author Wartho, who began the study when she was a lab manager at the Mass Spectrometry Lab at ASU.

For this study, Biren worked with co-authors Wartho (now working at GEOMAR Helmholtz Centre for Ocean Research Kiel), Van Soest and Hodges to prepare samples for analysis and to date zircon crystals with the uranium-thorium-helium dating method. Biren then identified and processed shocked zircon fragments for imaging and chemical analysis with an electron microprobe.

"This research adds a tool for investigators dating terrestrial impact structures," Biren said. "Our results demonstrate the uranium-thorium-helium dating method's viability for use in similar cases, where shocked materials were ejected away from the crater and then allowed to cool quickly, especially in cases where the sample size is small."

Credit: 
Arizona State University

Global tracking devices negatively affect the survival rate of sage-grouse

A new study in The Condor: Ornithological Applications finds that currently available global positioning system (GPS) tracking devices, previously thought not to alter animal survival rates, can decrease greater sage-grouse survival.

The researchers monitored sage-grouse survival at 14 sites throughout California and Nevada. Between 2012 and 2017, VHF transmitters were attached to 821 female and 52 male sage-grouse. GPS devices were attached to 234 female and 125 male sage-grouse.

The researchers combined the measured survival of the tracked sage-grouse over the duration of the study with models estimating independent random effects and correlated fatalities to determine differences in survival between sage-grouse fitted with GPS devices and those fitted with VHF transmitters.

The researchers recorded 316 mortalities in VHF-marked sage-grouse and 261 mortalities in GPS-marked sage-grouse. Median annual survival estimates were higher for VHF-marked birds than GPS-marked birds for both sexes and all ages. Median seasonal survival estimates were 1.08-1.19 times greater for females marked with VHF devices than for those marked with GPS devices. Seasonal survival estimates for males marked with VHF devices ranged from 0.98-1.32 times those of birds marked with GPS devices. The median annual estimate calculated across all age classes for GPS-marked males and females was 0.58 and 0.61 times that of VHF-marked males and females, respectively.

The researchers found that females marked with VHF devices were expected to live 9.6 months longer than those marked with GPS devices. VHF-marked males were expected to live 10.5 months longer than those marked with GPS devices.

Differences in survival could be attributed to features of the GPS devices, including their greater weight; their rump-mounted attachment, which, compared with the necklace positioning of VHF transmitters, might impair mobility; and a semi-reflective solar panel whose shiny glare may attract predators.

"We are researching the potential causes of GPS devices on decreased survival in the hope that future design and attachment modifications can minimize negative impacts, and lead to benefits to both the birds and to our understanding of their ecology," said one of the study's authors, Mark A. Ricca, Ph.D.

Credit: 
Oxford University Press USA

Arctic could be iceless in September if temps increase 2 degrees

image: Polar bears sleep on the beach in the Arctic National Wildlife Refuge in early September waiting for ice to form on the Arctic Ocean.

Image: 
Michael Miller

Arctic sea ice could disappear completely through September each summer if average global temperatures increase by as little as 2 degrees, according to a new study by the University of Cincinnati.

The study by an international team of researchers was published in Nature Communications.

"The target is the sensitivity of sea ice to temperature," said Won Chang, a study co-author and UC assistant professor of mathematics.

"What is the minimum global temperature change that eliminates all arctic sea ice in September? What's the tipping point?"

The study predicted that the Arctic Ocean could be completely ice-free in September with as little as 2 degrees Celsius of temperature change. Limiting warming to 2 degrees is the stated goal of the 2015 Paris Agreement, the international effort to curb carbon emissions to address warming. The Trump Administration withdrew the United States as a participant in 2017.

"Most likely, September Arctic sea ice will effectively disappear between approximately 2 and 2.5 degrees of global warming," the study said. "Yet limiting the warming to 2 degrees (as proposed under the Paris agreement) may not be sufficient to prevent an ice-free Arctic Ocean."

Historically, September is the month that sees the Arctic Ocean's least ice cover during the year after the short polar summer.

"They use September as a measure because that's the transition period between summer and winter in the Arctic," Chang said. "Ice recedes from June to September and then in September it begins to grow again in a seasonal cycle. And we're saying we could have no ice in September."

The less summer sea ice the Arctic has, the longer it takes for the Arctic Ocean to ice back over for the polar winter. That could spell bad news for Arctic wildlife such as seals, which rely on sea ice to raise their pups, and polar bears, which rely on it to hunt those seals.

The researchers applied the new statistical method to climate model projections of the 21st century. Using the climate models, the authors found at least a 6% probability that summer sea ice in the Arctic Ocean will disappear with warming of 1.5 degrees above pre-industrial levels. At 2 degrees, the likelihood increases to 28%. 

"Our work provides a new statistical and mathematical framework to calculate climate change and impact probabilities," said Jason Evans, a professor who works at the University of New South Wales and its Climate Change Research Centre.

"While we only tested the new approach on climate models, we are eager to see if the technique can be applied to other fields, such as stock market predictions, plane accident investigations, or in medical research," says Roman Olson, the lead author and researcher at the Institute for Basic Science in South Korea. 

Chang said he has not gotten much feedback on this study yet. But sometimes climate change skeptics will approach him at his public presentations.

"Climate scientists are very honest," he said. "We try to be as transparent as possible about the amount of uncertainty we have and lay out all of our assumptions and emphasize that when we say there is a possibility, we quantify it in the form of a probability."

He thinks public perceptions about climate change might depend on where you live.

"Most South Koreans don't question climate change, not because they're more scientific but because they can see the effects firsthand," Chang said.

"My hometown is a southern city called Daegu. It's about the size of Cincinnati. And it was famous for growing a delicious apple. But now they can't grow the apples there. The orchards are gone. It's just too hot. Now they grow them farther north."

Credit: 
University of Cincinnati

Making microbes that transform greenhouse gases

image: USF researcher Alex Chou manipulates DNA to engineer E. coli for C1 conversion.

Image: 
University of South Florida

Researchers at the University of South Florida are harnessing the power of human physiology to transform greenhouse gases into usable chemical compounds - a method that could help lessen industrial dependence on petroleum and reduce our carbon footprint.

The new biologically-based technique, published in Nature Chemical Biology, was developed by USF Professor Ramon Gonzalez, PhD, and his research team. It utilizes the human enzyme 2-hydroxyacyl-coenzyme A lyase (HACL) to convert specific one-carbon (C1) materials into more complex compounds commonly used as the building blocks for an endless number of consumer and industrial products.

"In humans, this enzyme degrades branched chain fatty acids," Gonzalez said. "It basically breaks down long carbon chains into smaller pieces. We needed it to do the opposite. So, we engineered the process to work in reverse - taking single carbon molecules and converting them into larger compounds.

By manipulating the DNA encoding the enzyme, researchers are able to insert the modified enzyme into E. coli microorganisms, which act as hosts. When those microbes are introduced to C1 feedstock, such as methanol, formaldehyde, formate, carbon dioxide and methane, a metabolic bioconversion process takes place, transforming the molecules into more complex compounds.

This research represents a significant breakthrough in biologically-based carbon conversion and has the potential to transform current petrochemical processes as well as reduce the amount of greenhouse gas released into the atmosphere during crude oil production.

"When crude oil is pumped out of the ground, it comes with a lot of associated gas," Gonzalez said. "Much of the time, that gas is burned off through flaring and released into the atmosphere. We see that gas as a wasted resource."

Through their work, Gonzalez believes he and his team have engineered a method to utilize that wasted resource in a way that is economically feasible and enticing for oil manufacturers.

Right now, the vast majority of oil production facilities utilize flaring to burn off gas like methane. While that process is wasteful, according to Gonzalez, it is also inefficient and leads to the release of excess, unburned methane into the atmosphere as well as additional carbon dioxide produced through the burning process.

By implementing the USF-developed technique, oil producers could not only better manage their impact on the environment but also begin producing valuable chemical compounds like ethylene glycol and glycolic acid - molecules that are used in the production of plastics, cosmetics, polymers, cleaning solutions and much more.

Traditionally, the building blocks for these products are made using petroleum. So, while employing the bioconversion method would help reduce greenhouse gas emissions, it also has the potential to reduce the overall dependence on petroleum - multiple benefits that Gonzalez hopes will attract manufacturers to explore adopting their process.

"While this study details the overarching science that makes all of this possible, we are currently working with partners in the private sector to try and implement our technique," Gonzalez said. "It's exciting to be able to take this project from its initial inception all the way to industrial implementation and hopefully have a meaningful impact on not just the industry but to the environment as well."

Credit: 
University of South Florida

Scientists discover potential path to improving samarium-cobalt magnets

image: Charge density models calculated with density functional theory. New research findings will help scientists sort out the parameters of magnetism in rare-earth materials, and help speed discovery of potentially useful magnets in the future.

Image: 
Ames Laboratory, US Department of Energy

Scientists have discovered a potential tool to enhance magnetization and magnetic anisotropy, making it possible to improve the performance of samarium-cobalt magnets.

The scientists, at the U.S. Department of Energy's Critical Materials Institute at Ames Laboratory, in collaboration with the Nebraska Center for Materials and Nanoscience and the Department of Physics and Astronomy at the University of Nebraska, identified orbital-moment quenching as the possible tool, and rationalized the quenching in terms of its dependence on the electrical charge distribution in samarium atoms.

Sm-Co magnets were the first rare-earth permanent magnets, and are still the top performer in applications where resistance to demagnetization - known as coercivity - and performance at high temperatures are important.

The scientists at first sought to test the limits of substituting iron for some of the cobalt, attempting to make a Sm-Co magnet comparable in strength to neodymium iron boron (Nd-Fe-B) magnets, which have a higher magnetic moment.

"The Critical Materials Institute (CMI) has as one of its moonshots the discovery of materials that are comparable in strength to neodymium magnets, but with the high-temperature durability of samarium magnets," said Durga Paudyal, Ames Laboratory scientist and project leader for Predicting Magnetic Anisotropy at CMI. "We were looking to increase the magnetic moment of the standard Sm-Co magnet."

The research collaboration led to the discovery that substitutions of iron could range as high as 20 percent while keeping the coercivity of the magnet intact. Computational theory and modeling results showed that the electronic structure of the samarium in the material may violate Hund's rule, which predicts how electrons occupy available orbitals in the atomic structure.

The research findings will help scientists sort out the parameters of magnetism in rare-earth materials, and help speed discovery of potentially useful magnets in the future.

Credit: 
DOE/Ames National Laboratory

Study finds link between long-term exposure to air pollution and emphysema

Long-term exposure to air pollution was linked to increases in emphysema between 2000 and 2018, according to a new study funded by the National Institute of Environmental Health Sciences (NIEHS) and the National Heart, Lung, and Blood Institute (NHLBI), both part of the National Institutes of Health. Emphysema, usually associated with smokers, is a chronic disease in which lung tissue is destroyed and unable to effectively transfer oxygen in the body. The study is published in The Journal of the American Medical Association.

"These findings may offer one explanation for why emphysema is found in some people who never smoked," said James Kiley, Ph.D., NHLBI's director of the Division of Lung Diseases. "The study's results, duration, and timing offer insight into the long-term effects of air pollution on the U.S. population."

The relationship between various air pollutants and emphysema was measured through computed tomography (CT) lung imaging and lung function testing. Consistent results were found across these varied metropolitan regions: Winston-Salem, North Carolina; St. Paul, Minnesota; New York City; Baltimore; Chicago; and Los Angeles. Participants came from the Multi-Ethnic Study of Atherosclerosis (MESA), a medical research study that involved more than 7,000 men and women from the six localities.

"The combined health effect of multiple air pollutants ? ozone, fine particles known as PM2.5, nitrogen oxides, and black carbon ? was greater than when the pollutants were assessed individually," said Bonnie Joubert, Ph.D., a scientific program director at NIEHS. "With the study's long-running duration, repeated CT scans allowed analysis of changes in emphysema over time."

Researchers linked measurements of all major air pollutants with longitudinal increases in percent emphysema revealed by more than 15,000 CT scans acquired from 2000 to 2018. Over the same period, MESA carefully tracked air pollution. MESA is unique in its meticulous characterization of air pollution exposures along with repeated CT scans of participants' lungs.

"Air pollution is a significant public health concern around the world," said Gwen Collman, Ph.D., director of NIEHS' Division of Extramural Research and Training. "It's been a priority of NIEHS research for many years, so it's great when we can accelerate our efforts by joining with other NIH institutes in supporting research on lung disease."

Emphysema is a debilitating disease, and people with emphysema have difficulty breathing along with a persistent cough and phlegm. It makes physical and social activities difficult, creates work hardships, and may result in detrimental emotional conditions. Its development can be a slow, lifelong process. Emphysema is not curable, but treatments help manage the disease.

Understanding and controlling emphysema may lead to better treatment.

"It's important that we continue to explore factors that contribute to emphysema, particularly in a large, multi-ethnic group of adults such as those represented by MESA," Kiley said.

"We need to assess the effectiveness of strategies to control air pollutants in our efforts to improve heart and lung health," said David Goff, M.D., Ph.D., director of NHLBI's Division of Cardiovascular Sciences. "At the same time, people need to remember the importance of a healthy diet, physical activity, and smoking cessation for overall health."

Credit: 
NIH/National Institute of Environmental Health Sciences