Earth

NASA analyzed Tropical Storm Fernand's strength before landfall

image: On Sept. 3 at 5:23 a.m. EDT (0923 UTC), the AIRS instrument aboard NASA's Aqua satellite analyzed cloud top temperatures of Tropical Storm Fernand in infrared light. AIRS found that the coldest cloud top temperatures (purple), in the strongest thunderstorms, were as cold as or colder than minus 63 degrees Fahrenheit (minus 53 degrees Celsius) around the center and in a thick band of thunderstorms over northeastern Mexico.

Image: 
NASA JPL/Heidar Thrastarson

NASA's Aqua satellite provided forecasters at the National Hurricane Center with infrared data and cloud top temperature information for Tropical Storm Fernand as it was making landfall in northeastern Mexico. Those temperatures indicated Fernand's rainmaking capabilities. The infrared data also showed wind shear was affecting the storm.

NASA researches tropical cyclones, and infrared data is one of the tools it uses. Cloud top temperatures tell forecasters where the strongest storms are located within a tropical cyclone: the stronger the storms, the higher they extend into the troposphere and the colder their cloud top temperatures.

NASA's Aqua satellite analyzed the storm on Sept. 3 at 5:23 a.m. EDT (0923 UTC) using the Atmospheric Infrared Sounder, or AIRS, instrument. AIRS found that the coldest cloud top temperatures were as cold as or colder than minus 63 degrees Fahrenheit (minus 53 degrees Celsius) around the center and in a thick band of thunderstorms over northeastern Mexico. NASA research has shown that cloud top temperatures that cold indicate strong storms capable of producing heavy rain.

That heavy rainfall potential is apparent in the warnings posted today. In northeastern Mexico, across Tamaulipas and central/southern Nuevo Leon, 6 to 12 inches of rain are possible, with isolated totals of 18 inches, highest along the immediate Gulf Coast and in the Sierra Madre Oriental. This rainfall may cause life-threatening flash floods and mudslides. Areas from northern Nuevo Leon into southern Coahuila can expect 3 to 6 inches. South Texas and the lower Texas coast can expect 2 to 4 inches, with isolated totals up to 6 inches. In addition, a tornado or two are possible across far South Texas through this evening.

On Sept. 3, NOAA's National Hurricane Center's (NHC) discussion indicated that Fernand has been experiencing moderate easterly to southeasterly vertical wind shear and ingesting dry air in the southeastern semicircle. In general, wind shear is a measure of how the speed and direction of winds change with altitude.

That wind shear was indicated in the AIRS infrared imagery because the southeastern quadrant of Fernand appeared devoid of clouds. That's an indication that outside winds from the east-southeast were pushing clouds and showers to the west-northwest of the center, where the AIRS imagery showed the bulk of clouds.

On Wednesday, September 4, 2019, a Tropical Storm Warning was in effect from Puerto Altamira to the mouth of the Rio Grande.

NHC noted that at 8 a.m. EDT (1200 UTC), the center of Tropical Storm Fernand was located near latitude 23.5 degrees north and longitude 97.2 degrees west, about 45 miles (70 km) southeast of La Pesca, Mexico. Fernand was moving toward the west near 6 mph (9 kph). Maximum sustained winds were near 50 mph (85 kph) with higher gusts, and little change in strength was expected before the center moved onshore. The estimated minimum central pressure was 1000 millibars.

NHC said, "A motion toward the west or west-northwest is expected today, and the center of Fernand is forecast to cross the northeastern coast of Mexico later today or this evening. The cyclone is forecast to move inland over northeastern Mexico by this evening, and then dissipate quickly over the rugged terrain of the Sierra Madre Oriental mountains."

For updated forecasts, visit: http://www.nhc.noaa.gov

For updated warnings from the Mexican Meteorological Service, visit: https://smn.conagua.gob.mx/es/

Credit: 
NASA/Goddard Space Flight Center

Young adults exposed to incarceration as children prone to depression

Young adults with childhood history of both parental incarceration and juvenile justice involvement were nearly three times more likely to have depression or post-traumatic stress disorder (PTSD) compared to peers without any experience with the criminal justice system, according to a study published in JAMA Network Open. They also were nearly twice as likely to have anxiety compared to young adults without childhood exposure to incarceration.

"This is a particularly vulnerable and understudied population. Incarceration impacts families across generations, and youth who had a parent in jail or prison more often find themselves in the juvenile justice system," says lead author Nia Heard-Garris, MD, MSc, a pediatrician at Ann & Robert H. Lurie Children's Hospital of Chicago and Instructor of Pediatrics at Northwestern University Feinberg School of Medicine. "Young adults with histories of both juvenile incarceration and parental incarceration as children had a strong association with poor mental health outcomes in young adulthood."

Five million U.S. children have had a parent incarcerated, and those children are estimated to be involved in the juvenile justice system at three times the rate of their peers without a history of parental incarceration.

To examine the association between childhood history of incarceration (parental incarceration plus juvenile justice involvement) and mental health outcomes, Dr. Heard-Garris and co-lead author Kaitlyn Sacotte, MD, a former medical student at Northwestern University Feinberg School of Medicine and current pediatric resident physician at Oregon Health & Science University, and colleagues used data from the National Longitudinal Study of Adolescent to Adult Health.

Out of 13,083 participants, 1,247 (9 percent) had childhood history of parental incarceration, 492 (4.5 percent) had juvenile justice involvement, and 141 (1 percent) had a childhood history of both parental incarceration and juvenile justice involvement. Black individuals accounted for over 33 percent of participants who reported both parental incarceration and juvenile justice involvement, and Latinx participants accounted for over 17 percent.

"Our analyses highlight that a history of both parental incarceration and juvenile justice involvement occurs for 1 out of every 100 U.S. children overall and is disproportionally more common among youth of color," says Dr. Heard-Garris.

Although Black and Latinx individuals were more highly represented, researchers found that the group with dual incarceration exposure had higher odds of poor mental health outcomes independent of other factors, such as race or ethnicity, age, family structure, parental education, receipt of public assistance, and urban, suburban or rural residence.

The researchers additionally found that a history of parental incarceration or juvenile justice involvement alone was also associated with worse mental health outcomes compared to peers without incarceration exposure.

"Currently parental incarceration is considered an adverse childhood experience, but juvenile justice involvement is not," says Dr. Heard-Garris. "Given the increased risk for poor mental health outcomes we found in our study, perhaps we should also consider juvenile justice involvement an adverse childhood experience and start screening youth for any incarceration exposure during typical healthcare visits. This would allow us to further support vulnerable patients by connecting them with appropriate resources."

Credit: 
Ann & Robert H. Lurie Children's Hospital of Chicago

Is childhood criminal justice exposure associated with risk of poor adult mental health?

Bottom Line: A childhood history of both personal involvement in the juvenile justice system and parental incarceration was associated with a greater likelihood of depression, anxiety and posttraumatic stress disorder in young adulthood compared to peers without those experiences in this observational study. Having either one of those experiences with the criminal justice system as a child was also associated with risk of adverse mental health outcomes. This analysis of a nationally representative survey included 12,379 participants (141 reported experiencing both parental incarceration and juvenile justice involvement) who were in grades 7 to 12 in 1994-1995 and who were 24 to 32 years old at follow-up in 2008. The study didn't account for the nature of offenses contributing to parental incarceration or involvement in the juvenile justice system or the duration of childhood exposure to the criminal justice system. The focus also was on incarceration of biological parents, which doesn't account for children who experienced the incarceration of nonbiological caregivers. Study authors suggest that because childhood exposure to the criminal justice system appears to put people at risk for poor mental health outcomes later, policies to mitigate that impact could help improve their mental well-being as adults.

Authors: Nia Heard-Garris, M.D., M.Sc., of the Ann & Robert H. Lurie Children's Hospital of Chicago, and coauthors

(doi:10.1001/jamanetworkopen.2019.10465)

Editor's Note: Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.

Credit: 
JAMA Network

Minority students still underrepresented in medical schools

(PHILADELPHIA) - Black, Hispanic, and American Indian students remain underrepresented in medical schools, despite increasing efforts to create a diverse physician workforce, according to a new study by researchers in the Perelman School of Medicine at the University of Pennsylvania. Though absolute numbers of historically underrepresented medical students have increased over time, those increases occurred at a much slower rate than the growth of their age-matched counterparts in the U.S. population. The findings were published today in JAMA Network Open.

"Recent studies have shown a steady increase in the enrollment of nonwhite medical students over the past decade. While those numbers are promising, they don't tell the full story," said the study's co-senior author Jaya Aysola, MD, MPH, an assistant professor of Medicine, assistant dean of the Office of Inclusion and Diversity, and executive director of the Penn Medicine Center for Health Equity Advancement. "We still have a long way to go before our physician workforce mirrors the population of patients who they serve."

The findings come after a decade-long focused effort to diversify the medical field in response to mounting evidence that demonstrates the benefits -- including the advancement of patient care, science, and health equity -- of a physician workforce reflective of its patient population. With that goal in mind, in 2009 the Liaison Committee on Medical Education (LCME) instituted formal accreditation guidelines, which required medical schools to develop programs or partnerships designed "to make admissions to medical education more accessible to potential applicants of diverse backgrounds." These efforts have largely centered on individuals defined by the Association of American Medical Colleges (AAMC) as those who are "underrepresented in the medical profession relative to their numbers in the general population."

According to a 2018 study, the guidelines seemed to be working. After 2009, the research showed, the overall percentages of female, black, and Hispanic matriculants in U.S. medical schools increased. However, this and other similar studies failed to account for the changing demographics of the country's population, explained lead author Lanair Amaad Lett, an MD-PhD student and associate fellow in the Leonard Davis Institute of Health Economics.

"Past research has shown that the medical workforce has indeed become more diverse, but it doesn't account for how much the country is diversifying as a whole. If the Hispanic population grows by 25 percent and the Hispanic medical student population grows at the same rate, you can't directly attribute that growth to new guidelines," Lett said.

To assess the racial/ethnic representation of medical school applicants and matriculants (enrollees), the Penn researchers used data from the AAMC and the U.S. census to determine the "representation quotient," or the ratio of a racial/ethnic group's share of the medical student body (both applicants and matriculants) to its share of the age-matched U.S. population, for the years 2002 to 2017. Specifically, they looked at individuals ages 20 to 34 who identified as white, black, Hispanic, Asian, American Indian or Alaska Native (AIAN), and Native Hawaiian/Pacific Islander.
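To make the metric concrete, here is a minimal sketch of a representation-quotient calculation; the function and all the numbers in it are hypothetical placeholders, not AAMC or census figures from the study.

def representation_quotient(group_matriculants, total_matriculants,
                            group_population, total_population):
    # RQ = (group share of matriculants) / (group share of the age-matched population).
    # RQ of 1 means proportional representation; below 1 means underrepresentation.
    share_of_students = group_matriculants / total_matriculants
    share_of_population = group_population / total_population
    return share_of_students / share_of_population

# Hypothetical group: 6 percent of matriculants but 18 percent of the age-matched population
rq = representation_quotient(1200, 20000, 9_000_000, 50_000_000)
print(f"representation quotient: {rq:.2f}")                 # 0.33
print(f"underrepresented by about {(1 - rq) * 100:.0f}%")   # about 67%

A quotient of 1 would mean a group's share of medical students matches its share of the age-matched population; values below 1 quantify the degree of underrepresentation reported in the study.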

From 2002 to 2017, the number of total medical school applicants increased 53 percent -- from 33,625 to 51,658 -- and the number of enrolled students increased 29 percent -- from 16,488 to 21,326. During this period, the proportions of U.S. individuals aged 20 to 34 who identify as black, Hispanic, and Native Hawaiian/Pacific Islander grew, the proportion of white individuals decreased, and the proportion of AIAN individuals was stable.

When accounting for these shifting demographics in the United States overall, the authors found no statistically significant trends towards increased minority representation in medical school applicants or matriculants. In fact, by year 2017, Hispanic medical school enrollees remained underrepresented by nearly 70 percent as compared to their age-matched population; black male matriculants by nearly 60 percent; and black female matriculants by nearly 40 percent.

"The efforts to increase diversity in medical education have clearly not been sufficient," Lett said. "In light of the evidence that physicians from underrepresented backgrounds are more likely to serve populations with significant health disparities, and that a diverse physician workforce improves health care for all, the need for representation is an evidence-based imperative."

In a separate analysis, the team, led by MD student H. Moses Murdock, applied this method to assess racial/ethnic representation at the state level as well. They found that states with less overall diversity -- Vermont, for instance -- did better in terms of representation than more diverse states such as California and New York. Ensuring state-level representation is paramount, the authors contend, given that over 50 percent of physician trainees end up practicing in the state where they trained.

Co-senior author Ronnie Sebro, MD, PhD, an assistant professor of Radiology as well as Biostatistics, Epidemiology and Informatics, said he hopes these findings will lead to a deeper examination of how to increase diversity in medical schools at national, state, and local levels, and to look to history to come up with evidence-based solutions. The greatest increase in racial/ethnic representation among medical school matriculants occurred in the late 1960s and early 1970s, for instance, but that increase in representation was not sustained.

"You really have to start thinking about how to change the pathway or the pipeline. Our results suggest that physician racial/ethnic representation will become even more disparate, if changes are not made immediately," Sebro said.

However, the authors note that a lack of diverse applicants is far from the only contributing factor to achieving a workforce that reflects the U.S. population. Aysola added that there also needs to be a concerted effort to ensure structural inequities and biases are addressed at the institutional level.

"Addressing the structural inequities laden in our system of selection of medical students and physician trainees begins with ensuring we are using accurate metrics to set goals and track our progress," she said. "The representation quotient, one such metric, can be applied at the state and institutional level to ensure efforts align with the intended goal of creating a future workforce reflective of their respective patient populations."

Credit: 
University of Pennsylvania School of Medicine

Ageing research to accelerate with experimental validation in AI-powered drug discovery

image: AI-Empowered Design of New Drug in 21 Days, and Synthesis + Preclinical Validation in Just 25 Additional Days

Image: 
Biogerontology Research Foundation

Wednesday, September 4th, 2019, London, UK: The Biogerontology Research Foundation salutes its Founder and Chief Scientific Advisor Alex Zhavoronkov on leading a team of researchers who succeeded in using Artificial Intelligence to design, synthesize and validate a novel drug candidate in just 46 days, compared with the typical 2-3 years required by the standard hit-to-lead (H2L) approach used by most pharmaceutical companies.

By using a combination of Generative Adversarial Networks (GANs) and Reinforcement Learning (RL), the team of Insilico Medicine researchers behind this study (documented in a paper published in Nature Biotechnology this month) has demonstrated the real power of AI to expedite timelines in drug discovery and development, and to transform the process of bringing new drugs to market from a random search rife with dead ends and wrong turns into an intelligent, focused and directed process that takes the specific molecular properties of a given disease target into account from the very first step.

The Biogerontology Research Foundation has collaborated with Insilico Medicine on a number of projects and studies, and has long advocated for the potential of AI to make the discovery and validation of new drugs faster and more efficient, especially as it pertains to aging and longevity research and the development of drugs capable of extending human healthspan and compressing the incidence of age-related disease into the last few years of life.

While this is the newest in a long line of steps and accomplishments aiming to turn the theoretical potentials of AI for longevity research into practice, it is also the largest step made thus far, and goes a very long way in terms of proving that potential via hard science.

"This newest achievement made by Insilico Medicine, a leading AI for drug discovery and longevity company and an official partner of Ageing Research at King's, demonstrates the truly disruptive potential that AI holds in terms of accelerating the pace of progress in drug discovery. Furthermore, this is just the latest step in a much grander agenda of applying AI for ageing and longevity R&D, and to the accelerated translation of that research into real-world therapies for human patients. It is also quite notable that the team released the code behind their algorithm in an open-source format, allowing other researchers to apply their techniques and build upon their achievements for the advancement of the entire field of AI for drug design, ageing research and longevity" said Richard Siow, Ph.D., Director of Ageing Research at King's College London and former Vice-Dean (International), Faculty of Life Sciences & Medicine, King's College London.

It is the hope of the Biogerontology Research Foundation that this study motivates additional researchers to harness the potential for AI in longevity research, and provides incentives for larger drug developers to begin on-boarding AI into their drug discovery and development programs, in order to expedite the time it takes to bring life-saving drugs into the hands of real patients.

The Biogerontology Research Foundation also salutes the team's decision to release the code behind the GAN-RL method to the public in a freely-available open-source format, so that other researchers have the power to take this approach and apply it to their own work.

Credit: 
Biogerontology Research Foundation

It is best not to fly to conferences

Several times a year, researchers from all over the world travel long distances to share their latest findings and establish contacts at conferences. Dr. Sebastian Jäckle from the Department of Political Science at the University of Freiburg advocates a more conscious approach to such research trips. The political scientist examined the travel-related CO2 emissions of the last six conferences of the European Consortium for Political Research (ECPR). In addition to calculating the participants' travel-related emissions, Jäckle also investigated how those emissions could be reduced. The study was published in the journal European Political Science.

According to Jäckle's calculations, the average CO2 balance of a conference visitor ranges from 0.5 to 1.5 tons of CO2 equivalents per three-day ECPR meeting. In comparison, every German emits a total of about 11 tons of CO2 equivalents per year; according to the current report of the Intergovernmental Panel on Climate Change, every person worldwide may emit only about 2.5 tons of CO2 equivalents per year by 2030 if the 1.5-degree climate target is to remain achievable.

In order to determine the total emissions of the conferences, Jäckle took into account the distances travelled by the participants and the CO2 emissions per kilometer for air, bus and rail transport. Travelling by plane is by far the worst option, whereas there is not much difference in CO2 emissions between bus and train. Jäckle's data also show that a significant share of the emissions is attributable to a small number of participants who travel very long distances: seven percent of the participants at the 2018 conference in Hamburg, Germany caused more than half of the total CO2 emissions.
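As a rough illustration of that accounting, the sketch below multiplies each participant's travel distance by a per-kilometer emission factor for the chosen transport mode. The factors and distances are made-up placeholders, not values from Jäckle's study.

# Per-passenger-kilometer emission factors (kg CO2-equivalents); placeholders only.
EMISSION_FACTOR_KG_PER_KM = {"plane": 0.20, "train": 0.03, "bus": 0.03}

# (mode, one-way distance in km) for a handful of hypothetical participants
participants = [("plane", 8000), ("plane", 1500), ("train", 600), ("bus", 300)]

total_kg = sum(2 * distance_km * EMISSION_FACTOR_KG_PER_KM[mode]  # x2 for the return trip
               for mode, distance_km in participants)

print(f"total travel emissions: {total_kg / 1000:.2f} t CO2-eq")  # 3.85 t for this toy group

Even in this toy group, the two flights dominate the total, which mirrors the study's finding that a small number of long-distance travellers account for most of a conference's emissions.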

Nevertheless, Jäckle's work shows that there is large potential for savings. By choosing central conference venues with good rail connections and by connecting participants through video transmission, especially those from distant regions, the CO2 footprint of conferences could be significantly reduced. "If researchers would then accept somewhat longer travel times by bus or train compared to air travel, up to 85 percent of a conference's emissions could be saved," says Jäckle. "Such savings are only possible, however, if both the professional associations hosting the conferences and individual researchers are aware of the problem and actively strive to make the conference as climate-neutral as possible."

The political scientist has set a good example himself: in September 2019, he traveled by bicycle from Freiburg to the ECPR conference in Wroclaw, Poland, about the most climate-neutral way to make the trip.

Credit: 
University of Freiburg

Seeking moments of disorder

image: Artist's concept depicting magnetic moments having fluctuating alignments 120 degrees different from those of their neighbors.

Image: 
Illustration by Lilli McKinney

The future of technology relies, to a great extent, on new materials, but the work of developing those materials begins years before any specific application for them is known. Stephen Wilson, a professor of materials in UC Santa Barbara's College of Engineering, works in that "long before" realm, seeking to create new materials that exhibit desirable new states.

In the paper "Field-tunable quantum disordered ground state in the triangular-lattice antiferromagnet NaYbO2," published in the journal Nature Physics, Wilson and colleagues Leon Balents, of the campus's Kavli Institute for Theoretical Physics, and Mark Sherwin, a professor in the Department of Physics, describe their discovery of a long-sought "quantum spin liquid state" in the material NaYbO2 (sodium ytterbium oxide). The study was led by materials student Mitchell Bordelon and also involved physics students Chunxiao Liu, Marzieh Kavand and Yuanqi Lyu, and undergraduate chemistry student Lorenzo Posthuma, as well as collaborators at Boston College and at the U.S. National Institute of Standards and Technology.

At the atomic level, electrons in one material's lattice structure behave differently, both individually and collectively, from those in another material. Specifically, the "spin," or the electron's intrinsic magnetic moment (akin to an innate bar magnet), and its tendency to communicate and coordinate with the magnetic moments of nearby electrons differ from material to material. Various types of spin systems and collective patterns of ordering of these moments are known to occur, and materials scientists are ever seeking new ones, including those that have been hypothesized but not yet shown to exist.

"There are certain, more classical moments that let you know to a very high degree of certainty that the spin is pointing in a particular direction," Wilson explained. "In those, the quantum effects are small. But there are certain moments where the quantum effects are large, and you can't precisely orient the spin, so there is uncertainty, which we call 'quantum fluctuation.'"

Quantum magnetic states are those in which the magnetism of a material is primarily driven by such quantum fluctuations, generally derived from the uncertainty principle, intrinsic to magnetic moments. "So, you envision a magnetic moment, but the uncertainty principle says that I can't perfectly orient that in any one direction," Wilson noted.

Explaining the quantum spin liquid state, which was proposed long ago and is the subject of this paper, Wilson said, "In conventional materials, the magnetic moments talk to one another and want to orient relative to one another to form some pattern of order." In classical materials, this order is disrupted by thermal fluctuations, what Wilson describes as "just heat from the environment."

"If the material is warm enough, it is nonmagnetic, meaning the moments are all sort of jumbled relative to one another," he explained. "Once the material is cooled, the moments start to communicate, such that their connection to one another outcompetes the thermal fluctuations and they form an ordered state. That's classical magnetism."

But things are different in the quantum world, and magnetic moments that fluctuate can actually be the inherent "ground state" of a material.

"So, you can ask if there is a magnetic state in which the moments are precluded from freezing or forming some pattern of long-range order relative to one another, not by thermal fluctuations, but instead, by quantum fluctuations," Wilson said. "Quantum fluctuations become more relevant as a material cools, while thermal fluctuations increase as it heats up, so you want to find a magnet that doesn't order until you can get it cool enough such that the quantum fluctuations preclude it from ever ordering."

That quantum disorder is desirable because it is associated with entanglement, the quantum mechanical quality that makes it possible to encode quantum information. To determine whether NaYbO2 might exhibit that characteristic, the researchers had to determine the intrinsic, or ground state of the material's magnetic moments when all thermal fluctuations are removed. In this particular system, Wilson was able to determine experimentally that the magnetic moments are intrinsically in a fluctuating, disordered state, thus confirming that a quantum disordered state exists.

To find the hypothesized state, said Wilson, "First you have to put highly quantum magnetic moments into a material, but your material needs to be constructed such that the moments don't want to order. You do that by using the principle of 'magnetic frustration.'"

A simple way to think of that, according to Wilson, is to imagine a single triangle in the lattice structure of the material. "Let's say I build my material so that the magnetic moments are all located on a triangular lattice," he said, "and they all talk to one another in a way that has them wanting to orient antiferromagnetically, or antiparallel, to one another."

In that arrangement, any adjacent moment on the triangle wants to orient antiparallel to its neighbor. But because there are an odd number of points, you have one up at one point and one down (antiparallel to the first) at the second point, meaning that the third moment has a differently oriented moment on each side, so it doesn't know what to do. All of the moments are competing with one another.

"That's magnetic frustration, and, as it turns out, it reduces the temperature at which the moments are finally able to find some arrangement they all agree on," Wilson said. "So, for instance, classically, nature decides that at some temperature the mismatched moments agree that they will all point to 120 degrees relative to each other. So they're not all 100 percent happy but it's some compromise that establishes an ordered state."

From there, he added, "The idea is to take a frustrated lattice where you have already suppressed the ordered state, and add quantum fluctuations to it, which take over as you cool the material. Magnetic frustration lowers the ordering temperature enough so that quantum fluctuations eventually take over and the system can stabilize into a fundamentally disordered quantum spin state."

Wilson continued: "That's the paradigm of what people are looking for; however, some materials may seem to display this state when actually, they don't. For instance, all real materials have disorder, such as chemical or structural disorder, and this can also prevent the magnetic moments from talking to each other effectively and becoming ordered. In such a case, Wilson says, "They might form a disordered state, but it's more of a frozen, or static, disordered state than it is a dynamic quantum state.

"So, if I have a magnetic system that doesn't order at the lowest temperatures I can measure, it can be tricky trying to understand whether what I'm measuring is an intrinsic quantum spin liquid fluctuating type of state or a frozen, extrinsic, chemically driven disordered state. That is always debated."

Among the most interesting findings about this new material, Wilson said, is that even at the lowest measurable temperature -- 0.005 degrees Celsius above absolute zero -- it still doesn't order.

"However, in this material we can also apply a magnetic field, which breaks this competition engendered by magnetic frustration, and then we can drive it to order, inducing a special kind of antiferromagnetic state," he added. "The reason that's important is because this special state is very delicate and a very good fingerprint for how much chemical disorder there is in the system and its influence on the magnetic ground state. The fact that we can drive this field-driven state tells us that the disordered state we see at low temperature with zero magnetic field is indeed an intrinsically quantum disordered state, consistent with being a quantum spin liquid state."

Credit: 
University of California - Santa Barbara

Underwater soundscapes reveal differences in marine environments

image: Divers installing an autonomous underwater hydrophone.

Image: 
Samara Haver

CORVALLIS, Ore. - Storms, boat traffic, animal noises and more contribute to the underwater sound environment in the ocean, even in areas considered protected, a new study from Oregon State University shows.

Using underwater acoustic monitors, researchers listened in on Stellwagen Bank National Marine Sanctuary off the coast of Boston; Glacier Bay National Park and Preserve in Alaska; National Park of American Samoa; and Buck Island Reef National Monument in the Virgin Islands.

They found that the ambient sounds varied widely across the sites and were driven by differences in animal vocalization rates, human activity and weather.

The findings demonstrate that sound monitoring is an effective tool for assessing conditions and monitoring changes, said Samara Haver, a doctoral candidate in the College of Agricultural Sciences at OSU and the study's lead author.

"This is a relatively economical way for us to get a ton of information about the environment," said Haver, who studies marine acoustics and works out of the Cooperative Institute for Marine Resources Studies, a partnership between OSU and the National Oceanic and Atmospheric Administration at the Hatfield Marine Science Center in Newport.

"Documenting current and potentially changing conditions in the ocean soundscape can provide important information for managing the ocean environment."

The findings were published recently in the journal Frontiers in Marine Science. Co-authors include Robert Dziak, an acoustics scientist with NOAA who holds a courtesy appointment in OSU's College of Earth, Ocean, and Atmospheric Sciences; and other researchers from OSU, NOAA, Cornell University and the National Park Service.

Passive acoustic monitoring is seen as a cost-effective and low-impact method for monitoring the marine environment. The researchers' goal was to test how effective acoustic monitoring would be for long-term assessment of underwater conditions.

"Ocean noise levels have been identified as a potential measure for effectiveness of conservation efforts, but until now comparing sound across different locations has been challenging," Haver said. "Using equipment that was calibrated across all of the sites, we were able to compare the sound environments of these diverse areas in the ocean."

The researchers collected low frequency, passive acoustic recordings from each of the locations between 2014 and 2018. They compared ambient sounds as well as sounds of humpback whales, a species commonly found in all four locations.

The inclusion of the humpback whale sounds - mostly songs associated with mating in the southern waters, and feeding or social calls in the northern waters - gives researchers a way to compare the sounds of biological resources across all the soundscapes, Haver said.

The researchers found that ambient sound levels varied across all four study sites and sound levels were driven by differences in animal vocalization rates, human activity and weather. The highest sound levels were found in Stellwagen Bank during the winter/spring, driven by higher animal sound rates, vessel activity and high wind speeds. The lowest sound levels were found in Glacier Bay in the summer.

"Generally, the Atlantic areas were louder, especially around Stellwagen, than the Pacific sites," Haver said. "That makes sense, as there is generally more man-made sound activity in the Atlantic. There also was a lot of vessel noise in the Caribbean."

The researchers also were able to hear how sound in the ocean changes before, during and after hurricanes and other severe storms; the monitoring equipment captured Hurricanes Maria and Irma in the Virgin Islands and Tropical Cyclone Winston in American Samoa.

Ultimately, the study provides a baseline for these four regions and can be used for comparison over time. Documenting current and potentially changing conditions in the ocean soundscape can provide important information for managing the ocean environment, particularly in and around areas that have been designated as protected, Haver said.

Credit: 
Oregon State University

Methane-producing microorganism makes a meal of iron

image: A new understanding of how a microorganism uses iron to more efficiently conserve energy when producing methane and carbon dioxide will allow researchers to make important predictions of future climate change and maybe even manipulate the production of these greenhouse gasses.

Image: 
CC0 license

A new understanding of how an important methane-producing microorganism creates methane and carbon dioxide could eventually allow researchers to manipulate how much of these important greenhouse gases escape into the atmosphere. A new study by Penn State researchers proposes an updated biochemical pathway that explains how the microorganism uses iron to more efficiently capture energy when producing methane. The study appears online in the journal Science Advances.

"The microorganism Methanosarcina acetivorans is a methanogen that plays an important part in the carbon cycle, by which dead plant material is recycled back into carbon dioxide that then generates new plant material by photosynthesis," said James Ferry, Stanley Person Professor of Biochemistry and Molecular Biology at Penn State, who led the research team. "Methanogens produce about 1 billion metric tons of methane annually, which plays a critical role in climate change. Understanding the process by which this microorganism produces methane is important for predicting future climate change and for potentially manipulating how much of this greenhouse gas the organism releases."

Methanosarcina acetivorans, which is found in environments like the ocean floor and rice paddies where it helps to decompose dead plant material, converts acetic acid into methane and carbon dioxide. Prior to this study, however, researchers were not certain how the microorganism obtained enough energy to survive in the oxygen-free--anaerobic--environments where it lives. The researchers determined that an oxidized form of iron called "iron three," essentially rust, allows the microorganism to work more efficiently, consuming more acetic acid and producing more methane and more ATP--a chemical that provides energy for biological reactions essential for growth.

"Most organisms like humans use a process called respiration to create ATP, but this requires oxygen," said Ferry. "When no oxygen is present, many organisms instead use a less efficient process called fermentation to create ATP, like the processes used by yeast in the production of wine and beer. But the presence of iron allows M. acetivorans to use respiration even in the absence of oxygen."

The findings allowed the researchers to update the biological pathway by which M. acetivorans converts acetic acid to methane, which now includes respiration. Pathways like this one involve many intermediate steps, during which energy is often lost in the form of heat. The researchers also determined that in the presence of iron, energy loss in this microorganism is reduced due to a recently discovered process called electron bifurcation.

"Electron bifurcation takes one of those steps that has the potential for tremendous heat loss and harvests that energy in the form of ATP rather than heat," said Ferry. "This makes the process more efficient."

This updated pathway could allow researchers to predict the amount of methane that the microorganism will release into the atmosphere.

"Rice paddies--a major source of the methane in the atmosphere--contain decaying rice plants submerged in water that are ultimately processed by M. acetivorans. If we measure the amount of iron three present in the paddies, we can predict how much methane will be released by the microorganisms, which can improve our climate change models."

In the absence of iron, the microorganism produces roughly equal amounts of methane and carbon dioxide from acetic acid. But with increasing amounts of iron, it produces more carbon dioxide relative to methane, so providing the organism with additional iron could alter the relative amounts of these greenhouse gasses that are produced.

"Methane is 30 times more potent as a greenhouse gas than carbon dioxide, which makes it more problematic in terms of our warming planet," said Ferry. "Now that we better understand this biochemical pathway, we see that we can use iron to alter the ratios of the gasses being produced. In the future, we might even be able to go further and inhibit the production of methane by this microorganism.

"In addition to the practical applications, this is a major addition to understanding the biology of the largely unseen but hugely important anaerobic world."

Credit: 
Penn State

Pharmacists in the ER speed delivery of coagulation drug to bleeding patients

MAYWOOD, IL - Millions of patients take blood thinners such as Coumadin to prevent blood clots that can cause strokes.

But when such patients come to the emergency department (ED) with life-threatening bleeding, they may require a drug that counteracts the effect of blood thinners, thereby improving coagulation.

Now a first-of-its-kind study by Loyola Medicine researchers has found that when a pharmacist is present in the ED, patients receive the coagulation drug much more quickly, resulting in less time in the intensive care unit and shorter hospital stays. (The coagulation drug is called four-factor prothrombin complex concentrate or 4F-PCC.)

The retrospective study by first author Dalila Masic, PharmD, senior author Megan A. Rech, PharmD, MS, BCPS, BCCCP, and colleagues was published online July 15, 2019, in the Journal of Emergency Medicine.

The study included 116 patients who were on a blood thinner and came to the ED with life-threatening bleeding. The most common blood thinner was warfarin (brand name Coumadin), and the most common indication for the blood thinner was treatment of a heart rhythm disorder called atrial fibrillation. The most common type of bleeding was intracranial hemorrhage (bleeding inside the skull that causes a stroke).

Of the 116 patients, 50 had a clinical pharmacist at their bedsides and 66 had a physician team alone. (A clinical pharmacist is typically present in Loyola's ED during weekdays but not during nights and weekends.)

Among patients who had a pharmacist at the bedside managing medications, the coagulation drug was administered in a median time of 66.5 minutes, compared with 206.5 minutes in patients without a bedside pharmacist. Patients with a bedside pharmacist spent less time in the intensive care unit (2 days vs. 5 days) and in the hospital overall (5.5 days vs. 8 days).

The study findings suggest that bedside pharmacists helped emergency physicians in clinical decision-making and appropriate ordering of 4F-PCC. Pharmacists communicated with the central pharmacy to ensure the life-saving medication was delivered to the patient in a timelier manner.

"A clinical pharmacist provides valuable therapeutic recommendations and optimizes time to receipt of life-saving pharmacotherapy," researchers concluded.

Credit: 
Loyola Medicine

Major Hurricane Juliette's emerging eye spotted in NASA satellite imagery

image: On Sept. 2, 2019 at 4:25 p.m. EDT (2025 UTC) the MODIS instrument aboard NASA's Aqua satellite provided a visible image of Hurricane Juliette in the Eastern Pacific Ocean as its eye began to show.

Image: 
NASA/NRL

NASA's Aqua satellite passed over the Eastern Pacific Ocean and provided an image of Hurricane Juliette as its eye began to emerge. Juliette has grown into a major hurricane, about 450 miles southwest of the southern tip of Baja California, Mexico.

Juliette developed into a tropical storm on Sunday, Sept. 1, around 5 a.m. EDT (0900 UTC). By 5 p.m. EDT on Sept. 2, the storm had strengthened into a hurricane.

On Sept. 2, 2019 at 4:25 p.m. EDT (2025 UTC) the Moderate Resolution Imaging Spectroradiometer or MODIS instrument aboard NASA's Aqua satellite provided a visible image of Hurricane Juliette that revealed its emerging eye around a thick tight circle of powerful thunderstorms. Hurricane-force winds extend outward up to 35 miles (55 km) from the center and tropical-storm-force winds extend outward up to 125 miles (205 km).

NASA researches hurricanes to better understand their behavior, and provides data to forecasters at NOAA's NHC or National Hurricane Center to assist in their forecasting.

On Sept. 3 at 11 a.m. EDT (1500 UTC), NOAA's National Hurricane Center said the eye of Hurricane Juliette was located near latitude 18.4 degrees north and longitude 115.0 degrees west. That's about 455 miles (730 km) southwest of the southern tip of Baja California, Mexico. Juliette is moving toward the northwest near 8 mph (13 kph), and a northwest to west-northwest motion is expected through Friday. Maximum sustained winds have increased to near 125 mph (205 kph) with higher gusts. Juliette is a category 3 hurricane on the Saffir-Simpson Hurricane Wind Scale. The estimated minimum central pressure is 953 millibars based on satellite estimates and data from the Mexican Navy station on Clarion Island.
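As an illustration, the reported distance can be roughly checked with a great-circle (haversine) calculation. The sketch below is not from NHC; the coordinates used for the southern tip of Baja California are an approximation chosen for illustration.

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2, earth_radius_km=6371.0):
    # Great-circle distance between two points given in decimal degrees.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2.0 * earth_radius_km * asin(sqrt(a))

# Juliette's center (18.4 N, 115.0 W) vs. an approximate southern tip of Baja California (22.9 N, 109.9 W)
d_km = haversine_km(18.4, -115.0, 22.9, -109.9)
print(f"{d_km:.0f} km, about {d_km / 1.609:.0f} miles")   # roughly 730 km / 455 miles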

NHC said. "Some strengthening is possible today, with weakening forecast to begin by late Wednesday and continuing through Friday."

Credit: 
NASA/Goddard Space Flight Center

Medical imaging rates continue to rise despite push to reduce their use

image: Diana Miglioretti, PhD, Senior Author

Image: 
UC Davis Health

Despite a broad campaign among physician groups to reduce the amount of imaging in medicine, the rates of use of CT, MRI and other scans have continued to increase in both the U.S. and Ontario, Canada, according to a new study of more than 135 million imaging exams conducted by researchers at UC Davis, UC San Francisco and Kaiser Permanente. This recent reacceleration in the growth of imaging concerns researchers because medical imaging is widely believed to be overused.

The study, published Sept. 3, 2019 in the Journal of the American Medical Association, is the first of its size to determine imaging rates across different populations. It found that although the growth in imaging slowed in the early 2000s, it ticked back up in recent years for computerized tomography (CT) and magnetic resonance imaging (MRI) in most patient age groups. A notable exception was a decline in CT use in children in recent years.

CT scans use ionizing radiation to create images of the inside of the body, and they deliver a radiation dose far higher than a conventional X-ray, while MRIs use magnetic fields and radio waves to create images and do not expose patients to ionizing radiation. The authors noted that the study does not discuss whether the documented imaging use was appropriate or associated with better patient outcomes.

“Medical imaging is an important part of health care and contributes to accurate disease diagnosis and treatment, but it also can lead to patient harms such as incidental findings, overdiagnosis, anxiety and radiation exposure that is associated with an increased risk of cancer,” said lead author Rebecca Smith-Bindman, MD, a UCSF professor of radiology, epidemiology and biostatistics, and obstetrics and reproductive medicine and a member of the Philip R. Lee Institute of Health Policy Studies.

Although it is widely believed that imaging rates are declining due to payment and educational efforts that have targeted unnecessary imaging, the authors found a reacceleration in imaging use, with ongoing growth in the use of CT and MRI in adults.

“Like all aspects of medicine, it’s important to make sure imaging is justified, and that the potential benefits are balanced against the potential harms,” said Smith-Bindman. “These potential harms of false positive diagnoses and overdiagnoses can impact everyone who undergoes a test and thus need to be considered when imaging is used.”

Diana Miglioretti, PhD, biostatistics professor at UC Davis Department of Public Health Sciences and senior author on the study, said there were some hopeful signs among the findings.

“The good news is that rates of CT imaging are starting to come down in children,” Miglioretti said. “However, they’re still far lower in Ontario than in the U.S., suggesting there is additional room for improvement. It’s also important to reduce unnecessary imaging in adults given they are also at risk of radiation-induced cancers.”

The researchers analyzed patterns of medical imaging between 2000 and 2016 among a diverse group of 16-21 million adult and pediatric patients enrolled in seven U.S. health care systems and in the universal, publicly funded health care system in Ontario, Canada. For the U.S. data, they included people receiving care in both fully integrated health care systems such as Kaiser Permanente, and systems with mixed insurance including HMOs and PPOs with fee-for-service plans.

“Our capture of medical imaging utilization across seven U.S. health care systems and Ontario, Canada, over a 16-year period provides one of the most comprehensive assessments to date of imaging in children to older adults in North America,” said Marilyn Kwan, PhD, co-author and senior research scientist in the Kaiser Permanente Northern California Division of Research.

Among the findings:

Annual growth in CT, MRI and ultrasound was highest in earlier years (2000-2006), but utilization has continued to rise year over year; between 2012 and 2016 there was 1%-5% annual growth for most age groups and most tests in both the U.S. and Ontario.

The one exception was CT use in children, which declined in the U.S. from 2009 to 2013 and has remained stable since then, and which has declined in Ontario since 2006.

Rates of imaging with CT and MRI are higher in the U.S. than in Ontario, but that gap is closing. For example, among older adults in 2016, there were 51 MRIs per 1,000 patients in the U.S. and 32 per 1,000 patients in Ontario.

Rates of imaging accelerated after initially dropping in many cases. For example, the rate of growth in CT scans among the elderly was 9.5 percent in 2000-2005, dropped to 0.9 percent in 2006-2011, but then increased to 3 percent annual growth over the last five years.

Imaging rates for both adults and children were higher in the mixed model versus fully integrated healthcare systems, but the differences were modest.
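To illustrate how annual growth figures like those cited above are typically computed, here is a minimal compound-growth sketch; the imaging rates in it are invented placeholders, not data from the study.

def annual_growth_percent(rate_start, rate_end, n_years):
    # Compound annual growth rate, expressed as a percentage.
    return ((rate_end / rate_start) ** (1.0 / n_years) - 1.0) * 100.0

# Hypothetical: CT exams per 1,000 older adults rising from 300 to 472 over five years
print(f"{annual_growth_percent(300, 472, 5):.1f}% per year")   # about 9.5%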

The authors note that potential overuse of diagnostic testing has been addressed with the "Choosing Wisely" campaign launched in 2012 by the American Board of Internal Medicine Foundation and endorsed by 85 professional medical societies. The effort urges physicians to talk with their patients about whether an imaging study is necessary, free from harm and supported by evidence. Other initiatives, including by the federal Centers for Medicare & Medicaid Services, have created incentives to discourage overuse of imaging by reducing reimbursement rates for certain scans.

The study authors say their findings suggest that neither the financial incentives nor the campaign to reduce use of medical imaging have been entirely effective.

“Although most physicians are aware that imaging tests are frequently overused, there are not enough evidence-based guidelines that rely on a careful consideration of the evidence, including information on benefits and harms that can inform their testing decisions,” Smith-Bindman said. “In the absence of balanced evidence, the default decision is to image.”

Credit: 
University of California - Davis Health

Medical imaging rates continue to rise despite push to reduce their use

Despite a broad campaign among physician groups to reduce the amount of imaging in medicine, the rates of use of CT, MRI and other scans have continued to increase in both the U.S. and Ontario, Canada, according to a new study of more than 135 million imaging exams conducted by researchers at UC Davis, UC San Francisco and Kaiser Permanente. This concerns researchers because medical imaging is widely believed to be overused.

The study, published September 3, 2019 in the Journal of the American Medical Association, is the first of its size to determine imaging rates across different populations. It found that although the growth in imaging slowed in the early 2000s, it ticked back up in recent years for computerized tomography (CT) and magnetic resonance imaging (MRI) in most patient age groups. A notable exception was a decline in CT use in children in recent years.

CT scans use ionizing radiation to create images of the inside of the body, and they deliver a radiation dose far higher than a conventional X-ray, while MRIs use magnetic fields and radio waves to create images and do not expose patients to ionizing radiation. The authors noted that the study does not discuss whether the documented imaging use was appropriate or associated with better patient outcomes.

"Medical imaging is an important part of health care and contributes to accurate disease diagnosis and treatment, but it also can lead to patient harms such as incidental findings, overdiagnosis, anxiety and radiation exposure that is associated with an increased risk of cancer," said lead author Rebecca Smith-Bindman, MD, a UCSF professor of radiology, epidemiology and biostatistics, and obstetrics and reproductive medicine.

Although it is widely believed that imaging rates are declining due to payment and educational efforts that have targeted unnecessary imaging, the authors found a reacceleration in imaging use, with ongoing growth in the use of CT and MRI in adults.

"Like all aspects of medicine, it's important to make sure imaging is justified, and that the potential benefits are balanced against the potential harms," said Smith-Bindman, a member of the Philip R. Lee Institute of Health Policy Studies. "These potential harms of false positive diagnoses and overdiagnoses can impact everyone who undergoes a test and thus need to be considered when imaging is used."

Diana Miglioretti, PhD, biostatistics professor at UC Davis Department of Public Health Sciences and senior author on the study, said there were some hopeful signs among the findings.

"The good news is that rates of CT imaging are starting to come down in children," said Miglioretti, who is also a senior investigator with Kaiser Permanente Washington Health Research Institute. "However, they're still far lower in Ontario than in the U.S., suggesting there is additional room for improvement. It's also important to reduce unnecessary imaging in adults given they are also at risk of radiation-induced cancers."

The researchers analyzed patterns of medical imaging between 2000 and 2016 among a diverse group of 16-21 million adult and pediatric patients enrolled in seven U.S. health care systems and in the universal, publicly funded health care system in Ontario, Canada. For the U.S. data, they included people receiving care in both fully integrated health care systems such as Kaiser Permanente, and systems with mixed insurance including HMOs and PPOs with fee-for-service plans.

"Our capture of medical imaging utilization across seven U.S. health care systems and Ontario, Canada, over a 16-year period provides one of the most comprehensive assessments to date of imaging in children to older adults in North America," said Marilyn Kwan, PhD, co-author and senior research scientist in the Kaiser Permanente Northern California Division of Research.

Among the findings:

Annual growth in CT, MRI and ultrasound was highest in earlier years (2000-2006), but utilization has continued to rise year over year; between 2012 and 2016 there was 1%-5% annual growth for most age groups and most tests in both the U.S. and Ontario.

The one exception was CT use in children, which declined in the U.S. from 2009 to 2013 and has remained stable since then, and which has declined in Ontario since 2006.

Rates of imaging with CT and MRI are higher in the U.S. than in Ontario, but that gap is closing. For example, among older adults in 2016, there were 51 MRIs per 1000 patients in the U.S. and 32 per 1000 patients in Ontario.

Rates of imaging accelerated after initially dropping in many cases. For example, the rate of growth in CT scans among the elderly was 9.5 percent in 2000-2005, dropped to 0.9 percent in 2006-2011, but then increased to 3 percent annual growth over the last five years.

Imaging rates for both adults and children were higher in the mixed model versus fully integrated healthcare systems, but the differences were modest.

The authors note that potential overuse of diagnostic testing has been addressed with the "Choosing Wisely" campaign launched in 2012 by the American Board of Internal Medicine Foundation and endorsed by 85 professional medical societies. The effort urges physicians to talk with their patients about whether an imaging study is necessary, free from harm and supported by evidence. Other initiatives, including by the federal Centers for Medicare & Medicaid Services, have created incentives to discourage overuse of imaging by reducing reimbursement rates for certain scans.

The study authors say their findings suggest that neither the financial incentives nor the campaign to reduce use of medical imaging have been entirely effective.

"Although most physicians are aware that imaging tests are frequently overused, there are not enough evidenced-based guidelines that rely on a careful consideration of the evidence, including information on benefits and harms that can inform their testing decisions," Smith-Bindman said. "In the absence of balanced evidence, the default decision is to image."

Credit: 
University of California - San Francisco

Oldest lake in Europe reveals more than one million years of climate history

image: The researchers drilled to a maximum depth of 568 metres at a water depth of 245 metres. This makes the endeavour one of the most successful lake drillings carried out in the framework of the International Continental Scientific Drilling Program (ICDP), allowing the team to collect high-resolution regional climate data for a period of over 1.3 million years.

Image: 
Niklas Leicher

A deep drilling project at Lake Ohrid, situated at the border between Albania and North Macedonia and involving 47 researchers from 13 nations, has brought new insights into climate history to light. The team, headed by the geologist Professor Dr Bernd Wagner from the University of Cologne, has now published its findings under the title 'Mediterranean winter rainfall in phase with African monsoon during past 1.36 million years' in 'Nature'.

Lake Ohrid is considered the oldest existing lake in Europe. The project began 15 years ago with the first preliminary investigations to determine the age of the lake and to better understand the climate history of the Mediterranean region. The deep drilling took place in 2013. With a maximum drilling depth of 568 metres at a water depth of 245 metres, it became one of the most successful lake drillings carried out in the framework of the International Continental Scientific Drilling Program (ICDP). The analysis of the extracted material - the drill cores with their sediment layers - took five years.

'We have shown that the lake formed exactly 1.36 million years ago and has existed continuously ever since', the leading geologists Professor Dr Bernd Wagner and Dr Hendrik Vogel (University of Bern) said. 'We were thrilled when we realized that we had retrieved one of the longest and most complete lake sediment cores from the oldest lake in Europe. Getting the chance to obtain high-resolution regional climate data for a period of over 1.3 million years is the dream of every climate researcher.'

The sediments deposited in the lake allow the scientists to reconstruct the climate history of the region over this period, for example its precipitation history. The drill cores provide, for the first time, data sets spanning such a long time period, which can now be compared with data from models. 'This way, our research helps us to better understand the causes of rain phases and to more accurately assess the effects of climate change for future predictions,' says Wagner.

The sediment data show a significant increase in winter precipitation in the northern Mediterranean region during warm interglacial phases. The Mediterranean climate is characterized by strong seasonal contrasts between dry summers and wet winters. Changes in winter rainfall are difficult to reconstruct on time scales of the last million years, Wagner explains. This is partly because there are so far few regional hydro-climate records covering several glacial-interglacial cycles with different Earth orbital geometries, global ice volumes and atmospheric greenhouse gas concentrations.

Model data from the research project show that low pressure over the western Mediterranean intensified, especially in the autumn months, triggered by an increase in Mediterranean surface temperatures. 'Similar effects could also be caused by current global warming', says Wagner.

Credit: 
University of Cologne

Study reveals 'radical' wrinkle in forming complex carbon molecules in space

image: This composite image shows an illustration of a carbon-rich red giant star (middle) warming an exoplanet (bottom left) and an overlay of a newly found pathway that could enable complex carbons to form near these stars.

Image: 
ESO/L. Calçada; Berkeley Lab, Florida International University, and University of Hawaii at Manoa

A team of scientists has discovered a new possible pathway toward forming carbon structures in space using a specialized chemical exploration technique at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab).

The team's research has now identified several avenues by which ringed molecules known as polycyclic aromatic hydrocarbons, or PAHs, can form in space. The latest study is a part of an ongoing effort to retrace the chemical steps leading to the formation of complex carbon-containing molecules in deep space.

PAHs - which also occur on Earth in emissions and soot from the combustion of fossil fuels - could provide clues to the formation of life's chemistry in space as precursors to interstellar nanoparticles. They are estimated to account for about 20 percent of all carbon in our galaxy, and they have the chemical building blocks needed to form 2D and 3D carbon structures.

In the latest study, published in Nature Communications, researchers produced a chain of ringed, carbon-containing molecules by combining two highly reactive chemical species that are called free radicals because they contain unpaired electrons. The study ultimately showed how these chemical processes could lead to the development of carbon-containing graphene-type PAHs and 2D nanostructures. Graphene is a one-atom-thick layer of carbon atoms.

Importantly, the study showed a way to connect a five-sided (pentagon-shaped) molecular ring with a six-sided (hexagonal) molecular ring and to also convert five-sided molecular rings to six-sided rings, which is a stepping stone to a broader range of large PAH molecules.

"This is something that people have tried to measure experimentally at high temperatures but have not done before," said Musahid Ahmed, a scientist in Berkeley Lab's Chemical Sciences Division. He led the chemical-mixing experiments at Berkeley Lab's Advanced Light Source (ALS) with Professor Ralf I. Kaiser at the University of Hawaii at Manoa. "We believe this is yet another pathway that can give rise to PAHs."

Professor Alexander M. Mebel at Florida International University assisted in the computational work for the study. Previous studies by the same research team have also identified a couple of other pathways for PAHs to develop in space. The studies suggest there could be multiple chemical routes for life's chemistry to take shape in space.

"It could be all the above, so that it isn't just one," Ahmed said. "I think that's what makes this interesting."

The experiments at Berkeley Lab's ALS - which produces X-rays and other types of light supporting many different types of simultaneous experiments - used a portable chemical reactor that combines chemicals and then jets them out, allowing researchers to study what products formed in the heated reactor.

Researchers used a light beam tuned to a wavelength known as "vacuum ultraviolet" or VUV produced by the ALS, coupled with a detector (called a reflectron time-of-flight mass spectrometer), to identify the chemical compounds jetting out of the reactor at supersonic speeds.
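As background on how such a detector distinguishes compounds, the sketch below illustrates the basic time-of-flight relationship: ions accelerated through the same voltage separate by mass, with heavier ions arriving later. The drift length and voltage are assumed, illustrative values, not the parameters of the ALS instrument.

```python
# Sketch of the time-of-flight principle: an ion accelerated through a potential U
# satisfies z*e*U = (1/2)*m*v^2, so its field-free flight time scales as sqrt(m/z).
# Drift length and voltage below are illustrative assumptions, not instrument values.
import math

E_CHARGE = 1.602176634e-19   # elementary charge, in coulombs
AMU = 1.66053906660e-27      # atomic mass unit, in kilograms

def flight_time(mass_amu: float, charge: int, drift_length_m: float, accel_voltage: float) -> float:
    """Ideal field-free flight time, in seconds, for an ion of the given mass and charge."""
    velocity = math.sqrt(2 * charge * E_CHARGE * accel_voltage / (mass_amu * AMU))
    return drift_length_m / velocity

# Example: the 1-indenyl radical (C9H7, 115 u) versus naphthalene (C10H8, 128 u),
# both singly ionized, over an assumed 1-metre drift tube at 1,000 volts.
for name, mass in [("C9H7, 115 u", 115.0), ("C10H8, 128 u", 128.0)]:
    print(f"{name}: {flight_time(mass, 1, 1.0, 1000.0) * 1e6:.1f} microseconds")
```

Because arrival time maps directly onto mass-to-charge ratio, the spectrometer can tell newly formed products apart from the lighter reactants and intermediates.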

The latest study combined the chemical radicals CH3 (the aliphatic methyl radical) and C9H7 (the aromatic 1-indenyl radical) at a temperature of about 2,105 degrees Fahrenheit (roughly 1,425 Kelvin) to ultimately produce molecules of a PAH known as naphthalene (C10H8), which is composed of two fused benzene rings.
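A quick atom-balance check, sketched below in Python, shows why this cannot be a simple addition of the two radicals: CH3 plus C9H7 contains C10H10 worth of atoms, so reaching naphthalene (C10H8) requires shedding two hydrogen atoms along the way. The detailed intermediates are not reproduced here.

```python
# Atom-balance sketch for the radical-radical reaction described above.
# Combining CH3 and C9H7 gives C10H10 worth of atoms; naphthalene is C10H8,
# so two hydrogen atoms must be lost overall (the detailed mechanism is omitted).
from collections import Counter

methyl = Counter({"C": 1, "H": 3})       # CH3 radical
indenyl = Counter({"C": 9, "H": 7})      # 1-indenyl radical, C9H7
combined = methyl + indenyl              # C10H10

naphthalene = Counter({"C": 10, "H": 8})
lost = combined - naphthalene            # atoms that must leave overall

print(dict(combined))  # {'C': 10, 'H': 10}
print(dict(lost))      # {'H': 2}
```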

The conditions required to produce naphthalene in space are present in the vicinity of carbon-rich stars, the study noted.

The products formed from the two radicals, the study notes, had been theorized but hadn't been demonstrated before in a high-temperature environment because of experimental challenges.

"The radicals are short-lived - they react with themselves and react with anything else around them," Ahmed said. "The challenge is, 'How do you generate two radicals at the same time and in the same place, in an extremely hot environment?' We heated them up in the reactor, they collided and formed the compounds, and then we expelled them out of the reactor."

Kaiser said, "For several decades, radical-radical reactions have been speculated to form aromatic structures in combustion flames and in deep space, but there has not been much evidence to support this hypothesis." He added, "The present experiment clearly provides scientific evidence that reactions between radicals at elevated temperatures do form aromatic molecules such as naphthalene."

While the method used in this study sought to detail how specific types of chemical compounds form in space, the researchers noted that it can also inform broader studies of chemical reactions involving radicals at high temperatures, such as in the fields of materials chemistry and materials synthesis.

Credit: 
DOE/Lawrence Berkeley National Laboratory