Exoplanet apparently disappears in latest Hubble observations

video: This video simulates what astronomers, studying Hubble Space Telescope observations, consider evidence for the first-ever detection of the aftermath of a titanic planetary collision in another star system. The color-tinted Hubble image on the left is of a vast ring of icy debris encircling the star Fomalhaut, located 25 light-years away. The animated diagram on the right is a simulation of the expanding and fading cloud, based on Hubble observations taken over a period of several years.

Watch on YouTube: https://youtu.be/aBWwRQ4YIcs

Image: 
NASA, ESA, and A. Gáspár and G. Rieke (University of Arizona)

Now you see it, now you don't.

What astronomers thought was a planet beyond our solar system has now seemingly vanished from sight. Planets disappearing outright is the stuff of science fiction, such as the explosion of Superman's home planet Krypton, so astronomers are searching for a plausible real-world explanation.

One interpretation is that the object, first photographed in 2004, was never a full-sized planet at all, but rather a vast, expanding cloud of dust produced in a collision between two large bodies orbiting the bright nearby star Fomalhaut. Potential follow-up observations might confirm this extraordinary conclusion.

"These collisions are exceedingly rare and so this is a big deal that we actually get to see one," said András Gáspár of the University of Arizona, Tucson. "We believe that we were at the right place at the right time to have witnessed such an unlikely event with NASA's Hubble Space Telescope."

"The Fomalhaut system is the ultimate test lab for all of our ideas about how exoplanets and star systems evolve," added George Rieke of the University of Arizona's Steward Observatory. "We do have evidence of such collisions in other systems, but none of this magnitude has been observed in our solar system. This is a blueprint of how planets destroy each other."

The object, called Fomalhaut b, was first announced in 2008, based on data taken in 2004 and 2006. It was clearly visible in several years of Hubble observations that revealed it as a moving dot. Until then, most evidence for exoplanets had come from indirect detection methods, such as subtle back-and-forth stellar wobbles and shadows from planets passing in front of their stars.

Unlike other directly imaged exoplanets, however, Fomalhaut b posed nagging puzzles early on. The object was unusually bright in visible light but had no detectable infrared heat signature. Astronomers conjectured that the added brightness came from a huge shell or ring of dust encircling the planet, possibly collision-related. The orbit of Fomalhaut b also appeared unusual, possibly highly eccentric.

"Our study, which analyzed all available archival Hubble data on Fomalhaut revealed several characteristics that together paint a picture that the planet-sized object may never have existed in the first place," said Gáspár.

The team says the final nail in the coffin came when, to their disbelief, their analysis of Hubble images taken in 2014 showed the object had vanished. Adding to the mystery, earlier images showed the object continuously fading over time. "Clearly, Fomalhaut b was doing things a bona fide planet should not be doing," said Gáspár.

The interpretation is that Fomalhaut b is a slowly expanding cloud of dust blasted into space by the smashup. Taking into account all available data, Gáspár and Rieke think the collision occurred not long before the first observations taken in 2004. By now the debris cloud, consisting of dust particles around 1 micron across (1/50th the diameter of a human hair), has dropped below Hubble's detection limit. The cloud is estimated to have expanded by now to a size larger than the orbit of Earth around our Sun.
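As a rough consistency check (our arithmetic, not the paper's), those figures imply the cloud's expansion speed, assuming a collision shortly before 2004 and a present-day cloud diameter of about 2 au, the diameter of Earth's orbit:

# Hedged back-of-the-envelope estimate of the cloud's expansion speed.
# Assumptions (ours, not the authors'): collision circa 2004, cloud
# diameter of ~2 au by 2020.
AU_KM = 1.496e8              # kilometers per astronomical unit
YEAR_S = 3.156e7             # seconds per year

diameter_au = 2.0            # assumed present-day cloud diameter
elapsed_years = 16           # ~2004 collision to ~2020

radius_km = diameter_au / 2 * AU_KM
speed_km_s = radius_km / (elapsed_years * YEAR_S)
print(f"Implied expansion speed: {speed_km_s:.2f} km/s")
# -> roughly 0.3 km/s, a few hundred meters per second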

Equally confounding is that the team reports that the object is more likely on an escape path, rather than on an elliptical orbit, as expected for planets. This is based on the researchers adding later observations to the trajectory plots from earlier data. "A recently created massive dust cloud, experiencing considerable radiative forces from the central star Fomalhaut, would be placed on such a trajectory," said Gáspár. "Our model is naturally able to explain all independent observable parameters of the system: its expansion rate, its fading, and its trajectory."

Because Fomalhaut b is presently inside a vast ring of icy debris encircling the star, colliding bodies would likely be a mixture of ice and dust, like the comets that exist in the Kuiper belt on the outer fringe of our solar system. Gáspár and Rieke estimate that each of these comet-like bodies measured about 125 miles (200 kilometers) across (roughly half the size of the asteroid Vesta).

According to the authors, their model explains all the observed characteristics of Fomalhaut b. Sophisticated dust dynamical modeling performed on a computer cluster at the University of Arizona shows that the model quantitatively fits all the observations. According to the authors' calculations, the Fomalhaut system, located about 25 light-years from Earth, may experience one of these events only every 200,000 years.
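That rarity can be put in perspective with a simple, hedged estimate (ours, not the authors'): if such a collision occurs roughly once every 200,000 years, and the debris stays detectable for a couple of decades, the odds of any given observing campaign catching one are about one in ten thousand.

# Hedged odds-of-witnessing estimate (our arithmetic, not the paper's).
event_interval_years = 200_000   # one collision per ~200,000 years
visible_window_years = 20        # assumed time the debris stays detectable

chance = visible_window_years / event_interval_years
print(f"Chance a random observing epoch catches one: {chance:.4%}")
# -> 0.0100%, about 1 in 10,000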

Gáspár and Rieke -- along with other members of an extended team -- will also be observing the Fomalhaut system with NASA's upcoming James Webb Space Telescope in its first year of science operations. The team will be directly imaging the inner warm regions of the system, spatially resolving for the first time the elusive asteroid-belt component of an extrasolar planetary system. The team will also search for bona fide planets orbiting Fomalhaut that might be gravitationally sculpting the outer disk. They will also analyze the chemical composition of the disk.

Credit: 
NASA/Goddard Space Flight Center

Hubble observes aftermath of massive collision

image: Data from the NASA/ESA Hubble Space Telescope have revealed an expanding cloud of dust produced in a collision between two large bodies orbiting the bright nearby star Fomalhaut. This is the first time such a catastrophic event around another star has been imaged.

Image: 
ESA/NASA, M. Kornmesser

What astronomers thought was a planet beyond our solar system has now seemingly vanished from sight. Astronomers now suggest that a full-grown planet never existed in the first place. The NASA/ESA Hubble Space Telescope had instead observed an expanding cloud of very fine dust particles caused by a titanic collision between two icy asteroid-sized bodies orbiting the bright star Fomalhaut, about 25 light-years from Earth.

"The Fomalhaut system is the ultimate test lab for all of our ideas about how exoplanets and star systems evolve," said George Rieke of the University of Arizona's Steward Observatory. "We do have evidence of such collisions in other systems, but none of this magnitude has ever been observed. This is a blueprint for how planets destroy each other."

The object, previously believed to be a planet and called Fomalhaut b, was first announced in 2008 based on data taken in 2004 and 2006. It was clearly visible in several years of Hubble observations that revealed it as a moving dot. Unlike other directly imaged exoplanets, Fomalhaut b posed nagging puzzles early on. The object was unusually bright in visible light but had no detectable infrared heat signature. Astronomers proposed that the added brightness came from a huge shell or ring of dust encircling the object that may have been collision-related. Also, early Hubble observations suggested the object might not be following an elliptical orbit, as planets usually do.

"These collisions are exceedingly rare and so this is a big deal that we actually get to see one," said András Gáspár of the University of Arizona. "We believe that we were at the right place at the right time to have witnessed such an unlikely event with the Hubble Space Telescope."

"Our study, which analysed all available archival Hubble data on Fomalhaut b, including the most recent images taken by Hubble, revealed several characteristics that together paint a picture that the planet-sized object may never have existed in the first place," [1] said Gáspár.

Hubble images from 2014 showed the object had vanished, to the disbelief of the astronomers. Adding to the mystery, earlier images showed the object to continuously fade over time. "Clearly, Fomalhaut b was doing things a bona fide planet should not be doing," said Gáspár.

The resulting interpretation is that Fomalhaut b is not a planet, but a slowly expanding cloud blasted into space as a result of a collision between two large bodies. Researchers believe the collision occurred not too long prior to the first observations taken in 2004. By now the debris cloud, consisting of dust particles around 1 micron (1/50th the diameter of a human hair), is below Hubble's detection limit. The dust cloud is estimated to have expanded by now to a size larger than the orbit of Earth around our Sun.

Equally confounding is that the object is not on an elliptical orbit, as expected for planets, but on an escape trajectory, or hyperbolic path. "A recently created massive dust cloud, experiencing considerable radiative forces from the central star Fomalhaut, would be placed on such a trajectory," Gáspár said. "Our model is naturally able to explain all independent observable parameters of the system: its expansion rate, its fading and its trajectory."

Because Fomalhaut b is presently inside a vast ring of icy debris encircling the star, the colliding bodies were likely a mixture of ice and dust, like the cometary bodies that exist in the Kuiper belt on the outer fringe of our solar system. Gáspár and Rieke estimate that each of these comet-like bodies measured about 200 kilometers across. They also suggest that the Fomalhaut system may experience one of these collision events only every 200,000 years.

Gáspár, Rieke, and other astronomers will also be observing the Fomalhaut system with the upcoming NASA/ESA/CSA James Webb Space Telescope, which is scheduled to launch in 2021.

Credit: 
ESA/Hubble Information Centre

Turning on the 'off switch' in cancer cells

image: A view of the tool-compound docked to PP2A.

Image: 
Derek Taylor Lab

A team of scientists led by the University of Michigan Rogel Cancer Center and Case Comprehensive Cancer Center has identified the binding site where drug compounds could activate a key braking mechanism against the runaway growth of many types of cancer.

The discovery marks a critical step toward developing a potential new class of anti-cancer drugs that enhance the activity of a prevalent family of tumor suppressor proteins, the authors say.

The findings, which appear in the leading life sciences journal Cell, are less a story of what than how.

Scientists have known for a while that certain molecules were capable of increasing the activity of the tumor suppressor protein PP2A, killing cancer cells and shrinking tumors in cell lines and animal models -- but without information about the physical site where the molecules interact with the protein, trying to optimize their properties to turn them into actual drugs would require endless trial and error.

"We used cryo-electron microscopy to obtain three-dimensional images of our tool-molecule, DT-061, bound to PP2A," says study co-senior author Derek Taylor, Ph.D., an associate professor of pharmacology and biochemistry at Case Western Reserve University and member of the Case Comprehensive Cancer Center. "This allowed us to see for the first time precisely how different parts of the protein were brought together and stabilized by the compound. We can now use that information to start developing compounds that could achieve the desired profile, specificity and potency to potentially translate to the clinic."

The researchers propose calling this class of molecules SMAPs -- for small molecule activators of PP2A.

Beyond cancer, PP2A is also dysregulated in a number of other conditions, including cardiovascular and neurodegenerative diseases. The researchers are optimistic the findings could open opportunities to develop new medicines against diseases like heart failure and Alzheimer's as well.

Team science

The research required a marriage of scientific disciplines and areas of expertise, notes co-senior author Goutham Narla, M.D., Ph.D., chief of the division of genetic medicine in the department of internal medicine at the U-M Medical School.

"It's an illustration of how collaboration and team science can solve some of the questions like this that scientists have been asking for many years," Narla says. "Solving the structure without the biological knowledge of how best to apply it against cancer, would only be half of the story. And if we were just activating PP2A, killing cancer cells and slowing the growth of cancer without the structural data -- that would be a really nice half-story as well. But working together, we now have a story about being able to drug this previously undruggable tumor suppressor."

The study was led by first authors Daniel Leonard, an M.D. and Ph.D. student and member of Narla's lab when the research was at Case Western Reserve and the Case Comprehensive Cancer Center, and research scientist Wei Huang, Ph.D., of the Taylor lab.

There has been a lot of activity and excitement in recent years around the development of kinase inhibitors -- small molecule compounds that go after the protein kinases whose dysfunction is involved in the explosive growth and proliferation of cancer cells. That is, turning off cancer's "on switch," Leonard explains.

The new research attacks cancer from the opposite side of the equation, turning on cancer's "off switch" by stabilizing protein phosphatases whose malfunction removes a key brake on cancer growth.

In the paper, the researchers speculate how a combination of both approaches simultaneously might offer an even more powerful one-two punch -- potentially helping to overcome cancer's ability to evolve to thwart a singular approach.

"The binding pocket we identified provides a launch pad for optimizing the next generation of SMAPs toward use in the clinic -- in cancer, and potentially other diseases," Huang adds.

Credit: 
Michigan Medicine - University of Michigan

Examining rates of thyroid cancer among World Trade Center rescue/recovery workers

What The Study Did: Rates and methods of detection of thyroid cancer diagnosed in male rescue/recovery workers at the World Trade Center site after the 9/11 terrorist attacks were compared with demographically similar individuals from Olmsted County, Minnesota, to see if increased rates of thyroid cancer among those workers were associated with the identification of asymptomatic cancers detected during heightened nonthyroid-related medical surveillance.

Authors: Rachel Zeig-Owens, Dr.P.H., M.P.H., of the Bureau of Health Services in Brooklyn, New York, is the corresponding author.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

(doi:10.1001/jamainternmed.2020.0950)

Editor's Note: The article includes conflict of interest and funding/support disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, conflicts of interest and financial disclosures, and funding and support.

Credit: 
JAMA Network

Examining association between infant screen viewing, social activities and development of autism-like symptoms

What The Study Did: Data from a study of environmental influences on child health and development were used to investigate the extent to which frequency of screen viewing and social activities such as parent-child play and reading through 18 months of age were associated with the risk of autism spectrum disorder (ASD) and ASD-like symptoms among 2,100 children at age 2.

Authors: David S. Bennett, Ph.D., of the Drexel University College of Medicine in Philadelphia, is the corresponding author.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

(doi:10.1001/jamapediatrics.2020.0230)

Editor's Note: Please see the article for additional information, including other authors, author contributions and affiliations, conflicts of interest and financial disclosures, and funding and support.

Credit: 
JAMA Network

New high-throughput approach yields libraries of probes for immunological assays

image: This diagram outlines a workflow for preparing and using a library of peptide-loaded MHC multimers for assessment of T cell repertoires in patient blood samples.

Image: 
Overall et al., Nature Communications, 2020

An immunological test known as a "tetramer assay" can detect and quantify the T cells in a blood sample that are able to recognize a specific antigen, such as a viral protein. Making the molecular probes needed for this type of assay, however, has always been a difficult and time-consuming process.

Now, a team led by researchers at UC Santa Cruz has developed a method to create libraries of probes for high-throughput, large-scale assessments of T cell repertoires in blood samples. The new approach, described in a paper published April 20 in Nature Communications, opens up new opportunities for immunological research, development of cancer immunotherapies, and assessing the immune responses of patients with viral infections, including COVID-19.

"Every person in the field knows how cumbersome it is to make the probes for these assays," said corresponding author Nikolaos Sgourakis, assistant professor of chemistry and biochemistry at UC Santa Cruz. "With conventional methods, you would need about a week to make a single peptide complex, but now we can make a plate of 100 in a day."

T cells play a central role in the immune system, complementary to antibodies (produced by B cells). Antibodies recognize and bind to antigens (foreign proteins) in the blood and other body fluids, whereas T cells only bind to antigens displayed on the surfaces of cells in the body, enabling the immune system to detect infected or cancerous cells.

This difference makes antibody tests relatively straightforward and T cell assays much more challenging. To construct a probe for detecting a specific T cell receptor, the corresponding antigen must be incorporated into a molecular complex that mimics the way antigens are displayed on cell surfaces, bound to major histocompatibility complex (MHC) proteins. Sgourakis has been studying how protein fragments are selected and bound to MHC proteins in cells, and the new method builds on his lab's discoveries about the role of "molecular chaperones" in this process.

"Molecular chaperones are designed by nature to load MHC proteins with peptides in the cell, so we took our favorite chaperone and repurposed it," Sgourakis said.

His previous research had shown that the chaperone can eject antigens that have low affinity for the MHC protein, ensuring that it binds only high-affinity antigens that can be displayed at the cell surface in the proper conformation to activate a T cell response. So Sgourakis designed a "placeholder" peptide for use in preparing large quantities of pre-loaded MHC complexes. When incubated with a high-affinity antigen, the placeholder is displaced, and this reaction can be performed in parallel with large numbers of antigens in a high-throughput system.

"It's a force multiplier, enabling us to perform these reactions at high throughput," Sgourakis said. "A lot of groups are working on similar methodologies, all of which have their pros and cons. This technology has the advantage of using the same system that cells use naturally, and we can combine it very elegantly with existing single-cell analytical tools."

This work was done in close collaboration with researchers at the New York Genome Center and the University of Pennsylvania's Children's Hospital and Perelman School of Medicine. The researchers started to use the new method to develop libraries of probes for assessing T cell responses to neuroblastoma and designing cancer immunotherapies.

Then came COVID-19, and the team began exploring ways to apply the new technology to address the challenges of the novel coronavirus. With a viral infection, there are many different fragments of the viral proteins that an infected cell can display, and it is important to determine which of these peptides elicit a strong immune response.

"Based on the coronavirus genome, we can predict all the possible peptides, synthesize them, load them onto MHC tetramers, and do a fishing expedition to find which ones are recognized by the T cells in blood samples from patients," Sgourakis explained. "Certain peptides are immunodominant--they steer the immune response--and those are the ones we want to discover so we can potentially use them in a vaccine."

This approach can also be used to compare the T cell receptor repertoires in different cohorts of patients. As people age, their T cell repertoire declines, resulting in a diminished ability to mount an immune response to a novel threat. This may be why older people are more vulnerable to COVID-19.

"One of the big questions is why there is so much variability in the severity of this disease," Sgourakis said. "We can use this technology to screen patients and see what the gaps are in their T cell repertoires, and maybe use this as a diagnostic for which patients will need more intensive treatment."

Credit: 
University of California - Santa Cruz

Neolithic genomes from modern-day Switzerland indicate parallel ancient societies

image: Top view of the Dolmen of Oberbipp, one of the largest burial sites in the study. In this study, researchers analyze 96 ancient genomes to trace the arrival and demographic structure of peoples with Steppe-related ancestry into late Neolithic, early Bronze Age Switzerland and provide new insights into the ancestry of modern Europeans.

Image: 
Urs Dardel, Archäologischer Dienst des Kanton Bern (Switzerland)

Genetic research throughout Europe shows evidence of drastic population changes near the end of the Neolithic period, as shown by the arrival of ancestry related to pastoralists from the Pontic-Caspian steppe. But the timing of this change and the arrival and mixture process of these peoples, particularly in Central Europe, is little understood. In a new study published in Nature Communications, researchers analyze 96 ancient genomes, providing new insights into the ancestry of modern Europeans.

Scientists sequence almost one hundred ancient genomes from Switzerland

With Neolithic settlements found everywhere from lake shore and bog environments to inner alpine valleys and high mountain passes, Switzerland's rich archeological record makes it a prime location for studies of population history in Central Europe. Towards the end of the Neolithic period, the emergence of archaeological finds from Corded Ware Complex cultural groups (CWC) coincides with the arrival of new ancestry components from the Pontic-Caspian steppe, but exactly when these new peoples arrived and how they mixed with indigenous Europeans remains unclear.

To find out, an international team led by researchers from the University of Tübingen, the University of Bern and the Max Planck Institute for the Science of Human History (MPI-SHH) sequenced the genomes of 96 individuals from 13 Neolithic and early Bronze Age sites in Switzerland, southern Germany and the Alsace region of France. They detected the arrival of this new ancestry as early as 2800 BCE and suggest that genetic dispersal was a complex process, involving the gradual mixture of parallel, highly genetically structured societies. The researchers also identified one of the oldest known lactose-tolerant Europeans, dating to roughly 2100 BCE.

Slow genetic turnover indicates highly structured societies

"Remarkably, we identified several female individuals without any detectable steppe-related ancestry up to 1000 years after this ancestry arrives in the region," says lead author Anja Furtwängler of the University of Tübingen's Institute for Archeological Sciences. Evidence from genetic analysis and stable isotopes suggest a patrilocal society, in which males stayed local to where they were born and females came from distant families that did not carry steppe ancestry.

These results show that the CWC was a relatively homogeneous population that occupied large parts of Central Europe in the early Bronze Age, but they also show that populations without steppe-related ancestry existed alongside the CWC cultural groups for hundreds of years.

"Since the parents of the mobile females in our study couldn't have had steppe-related ancestry either, it remains to be shown where in Central Europe such populations were present, possibly in the Alpine mountain valleys that were less connected to the lower lands" says Johannes Krause, director of the Department of Archaeogenetics at MPI-SHH and senior author of the study. The researchers hope that further studies of this kind will help to illuminate the cultural interactions that precipitated the transition from the Neolithic to the Early Bronze age in Central Europe.

Credit: 
Max Planck Institute of Geoanthropology

Origins of human language pathway in the brain at least 25 million years old

Scientists have discovered an earlier origin to the human language pathway in the brain, pushing back its evolutionary origin by at least 20 million years.

Previously, a precursor of the language pathway was thought by many scientists to have emerged more recently, about 5 million years ago, with a common ancestor of both apes and humans.

For neuroscientists, this is comparable to finding a fossil that illuminates evolutionary history. However, unlike bones, brains do not fossilize. Instead, neuroscientists must infer what the brains of common ancestors may have been like by studying brain scans of living primates and comparing them to humans.

Professor Chris Petkov of the Faculty of Medical Sciences, Newcastle University, UK, the study lead, said: "It is like finding a new fossil of a long lost ancestor. It is also exciting that there may be an even older origin yet to be discovered."

The international teams of European and US scientists carried out the brain imaging study and analysis of auditory regions and brain pathways in humans, apes and monkeys; the work is published in Nature Neuroscience.

They discovered a segment of this language pathway in the human brain that interconnects the auditory cortex with frontal lobe regions, important for processing speech and language. Although speech and language are unique to humans, the link via the auditory pathway in other primates suggests an evolutionary basis in auditory cognition and vocal communication.

Professor Petkov added: "We predicted but could not know for sure whether the human language pathway may have had an evolutionary basis in the auditory system of nonhuman primates. I admit we were astounded to see a similar pathway hiding in plain sight within the auditory system of nonhuman primates."

Remarkable transformation

The study also illuminates the remarkable transformation of the human language pathway. A key, uniquely human difference was found: the left side of this brain pathway was stronger, while the right side appears to have diverged from the auditory evolutionary prototype to involve non-auditory parts of the brain.

The study relied on brain scans openly shared by the global scientific community. It also generated original new brain scans that are globally shared to inspire further discovery. And since the authors predict that the auditory precursor to the human language pathway may be even older, the work inspires the neurobiological search for its earliest evolutionary origin - the next brain 'fossil' - in animals more distantly related to humans.

Professor Timothy Griffiths, consultant neurologist at Newcastle University, UK and joint senior author on the study notes: "This discovery has tremendous potential for understanding which aspects of human auditory cognition and language can be studied with animal models in ways not possible with humans and apes. The study has already inspired new research underway including with neurology patients."

Credit: 
Newcastle University

WashU engineer awarded federal funding for rapid COVID-19 test

image: Engineers at the McKelvey School of Engineering at Washington University in St. Louis have received federal funding for a rapid COVID-19 test using a newly developed technology called plasmonic-fluor.

Image: 
Washington University in St. Louis

Engineers at the McKelvey School of Engineering at Washington University in St. Louis have received federal funding for a rapid COVID-19 test using a newly developed technology.

Srikanth Singamaneni, professor of mechanical engineering and materials science, and his team have developed a rapid, highly sensitive and accurate biosensor based on an ultrabright fluorescent nanoprobe, which has the potential to be broadly deployed.

Called plasmonic-fluor, the ultrabright fluorescent nanoprobe can also help in resource-limited conditions because it requires fewer complex instruments to read the results. The National Science Foundation has awarded Singamaneni and his team a $100,008 grant toward developing a COVID-19 test using plasmonic-fluor.

Singamaneni hypothesizes that their plasmonic-fluor-based biosensor will be 100 times more sensitive than the conventional SARS-CoV-2 antibody detection method. Increased sensitivity would allow clinicians and researchers to more easily find positive cases and lessen the chance of false negatives.

Plasmonic-fluor works by increasing the ratio of fluorescence signal to background noise. Imagine trying to catch fireflies outside on a sunny day. You might net one or two, but against the glare of the sun, those little buggers are difficult to see. What if those fireflies were as bright as a high-powered flashlight?

Plasmonic-fluor effectively turns up the brightness of fluorescent labels used in a variety of biosensing and bioimaging methods. In addition to COVID-19 testing, it could potentially be used to diagnose, for instance, that a person has had a heart attack by measuring the levels of relevant molecules in blood or urine samples.

Using plasmonic-fluor, which is composed of gold nanoparticles coated with conventional dyes, researchers have been able to achieve a fluorescent nanolabel up to 6,700-fold brighter than conventional dyes, which can potentially enable earlier diagnosis. Using this nanolabel as an ultrabright flashlight, they have demonstrated the detection of extremely small amounts of target biomolecules in biofluids, and even of molecules present on cells.

The study was published in the April 20 issue of Nature Biomedical Engineering.

Gold nanoparticles serve as beacons

In biomedical research and clinical labs, fluorescence is used as a beacon to see and follow target biomolecules with precision. It's an extremely useful tool, but it's not perfect.

"The problem in fluorescence is, in a lot of cases, it's not sufficiently intense," Singamaneni said. If the fluorescent signal isn't strong enough to stand out against background signals, just like fireflies against the glare of the sun, researchers may miss seeing something less abundant but important.

"Increasing the brightness of a nanolabel is extremely challenging," said Jingyi Luan, lead author of the paper. But here, it's the gold nanoparticle sitting at the center of the plasmonic-fluor that really does the work of efficiently turning the fireflies into flashlights, so to speak. The gold nanoparticle acts as an antenna, strongly absorbing and scattering light. That highly concentrated light is funneled into the fluorophore placed around the nanoparticle. In addition to concentering the light, the nanoparticles speed up the emission rate of the fluorophores. Taken together, these two effects increase the fluorescence emission.

Essentially, each fluorophore becomes a more efficient beacon, and the 200 fluorophores sitting around the nanoparticle emit a signal equal to that of 6,700 free fluorophores.
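The per-fluorophore gain implied by those figures is easy to check (our arithmetic, based on the numbers quoted above):

# Per-fluorophore enhancement implied by the quoted figures (our arithmetic).
total_equivalent = 6_700       # one plasmonic-fluor shines like 6,700 free dyes
dyes_per_label = 200           # fluorophores coating each gold nanoparticle

per_dye_gain = total_equivalent / dyes_per_label
print(f"Each dye on the nanoparticle emits ~{per_dye_gain:.1f}x the signal of a free dye")
# -> ~33.5x, from the antenna effect plus the faster emission rate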

In addition to detecting low quantities of molecules, sensing time can be shortened with plasmonic-fluor, because brighter beacons mean fewer captured proteins are needed to establish their presence.

The researchers have also shown that plasmonic-fluor allows the detection of multiple proteins simultaneously. And in flow cytometry, plasmonic-fluor's brightening effect allows for a more precise and sensitive measurement of proteins on the cell surface, whose signal might otherwise be buried in background noise with traditional fluorescent tagging.

There have been other efforts to enhance fluorescent tagging in imaging, but many require an entirely new workflow and measurement platform. Plasmonic-fluor greatly increases sensitivity and decreases sensing time without requiring any changes to existing laboratory tools or techniques.

The technology has been licensed to Auragent Bioscience LLC by Washington University's Office of Technology Management. Auragent is in the process of further development and scaling up the production of plasmonic-fluors for commercialization.

Credit: 
Washington University in St. Louis

ALMA reveals unusual composition of interstellar comet 2I/Borisov

A galactic visitor entered our solar system last year - interstellar comet 2I/Borisov. When astronomers pointed the Atacama Large Millimeter/submillimeter Array (ALMA) toward the comet on 15 and 16 December 2019, for the first time they directly observed the chemicals stored inside an object from a planetary system other than our own. This research is published online on 20 April 2020 in the journal Nature Astronomy.

The ALMA observations from a team of international scientists led by Martin Cordiner and Stefanie Milam at NASA's Goddard Space Flight Center in Greenbelt, Maryland, revealed that the gas coming out of the comet contained unusually high amounts of carbon monoxide (CO). The concentration of CO is higher than anyone has detected in any comet within 2 au of the Sun (within less than 186 million miles, or 300 million kilometers). 2I/Borisov's CO concentration was estimated to be between nine and 26 times higher than that of the average solar system comet.

Astronomers are interested to learn more about comets, because these objects spend most of their time at large distances from any star in very cold environments. Unlike planets, their interior compositions have not changed significantly since they were born. Therefore, they could reveal much about the processes that occurred during their birth in protoplanetary disks. "This is the first time we've ever looked inside a comet from outside our solar system," said astrochemist Martin Cordiner, "and it is dramatically different from most other comets we've seen before."

ALMA detected two molecules in the gas ejected by the comet: hydrogen cyanide (HCN) and carbon monoxide (CO). While the team expected to see HCN, which is present in 2I/Borisov at similar amounts to that found in solar system comets, they were surprised to see large amounts of CO. "The comet must have formed from material very rich in CO ice, which is only present at the lowest temperatures found in space, below -420 degrees Fahrenheit (-250 degrees Celsius)," said planetary scientist Stefanie Milam.

"ALMA has been instrumental in transforming our understanding of the nature of cometary material in our own solar system - and now with this unique object coming from our next door neighbors. It is only because of ALMA's unprecedented sensitivity at submillimeter wavelengths that we are able to characterize the gas coming out of such unique objects," said Anthony Remijan of the National Radio Astronomy Observatory in Charlottesville, Virginia and co-author of the paper.

Carbon monoxide is one of the most common molecules in space and is found inside most comets. Yet, there's a huge variation in the concentration of CO in comets and no one quite knows why. Some of this might be related to where in the solar system a comet was formed; some has to do with how often a comet's orbit brings it closer to the Sun and leads it to release its more easily evaporated ices.

"If the gases we observed reflect the composition of 2I/Borisov's birthplace, then it shows that it may have formed in a different way than our own solar system comets, in an extremely cold, outer region of a distant planetary system," added Cordiner. This region can be compared to the cold region of icy bodies beyond Neptune, called the Kuiper Belt.

The team can only speculate about the kind of star that hosted 2I/Borisov's planetary system. "Most of the protoplanetary disks observed with ALMA are around younger versions of low-mass stars like the Sun," said Cordiner. "Many of these disks extend well beyond the region where our own comets are believed to have formed, and contain large amounts of extremely cold gas and dust. It is possible that 2I/Borisov came from one of these larger disks."

Due to its high speed when it traveled through our solar system (33 km/s, or 21 miles/s), astronomers suspect that 2I/Borisov was kicked out of its host system, probably by an interaction with a passing star or a giant planet. It then spent millions or billions of years on a cold, lonely voyage through interstellar space before it was discovered on 30 August 2019 by amateur astronomer Gennady Borisov.
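That speed by itself marks the comet as unbound from the Sun. A hedged order-of-magnitude check (our arithmetic; the exact comparison depends on where along the trajectory the speed is quoted) against the solar escape velocity at an assumed distance of 2 au:

import math

# Is 33 km/s enough to escape the Sun? v_esc = sqrt(2 * G * M_sun / r).
# Assumption (ours): speed taken near r = 2 au from the Sun.
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30        # solar mass, kg
AU_M = 1.496e11         # meters per astronomical unit

r = 2 * AU_M
v_esc = math.sqrt(2 * G * M_SUN / r) / 1000   # km/s
print(f"Solar escape velocity at 2 au: {v_esc:.1f} km/s (vs. 33 km/s observed)")
# -> ~29.8 km/s, so an object moving at 33 km/s there cannot be bound to the Sun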

2I/Borisov is only the second interstellar object to be detected in our solar system. The first - 1I/'Oumuamua - was discovered in October 2017, at which point it was already on its way out, making it difficult to reveal details about whether it was a comet, asteroid, or something else. The presence of an active gas and dust coma surrounding 2I/Borisov made it the first confirmed interstellar comet.

Until other interstellar comets are observed, the unusual composition of 2I/Borisov cannot easily be explained and raises more questions than it answers. Is its composition typical of interstellar comets? Will we see more interstellar comets in the coming years with peculiar chemical compositions? What will they reveal about how planets form in other star systems?

"2I/Borisov gave us the first glimpse into the chemistry that shaped another planetary system," said Milam. "But only when we can compare the object to other interstellar comets, will we learn whether 2I/Borisov is a special case, or if every interstellar object has unusually high levels of CO."

Credit: 
National Radio Astronomy Observatory

Stabilizing brain-computer interfaces

Researchers from Carnegie Mellon University (CMU) and the University of Pittsburgh (Pitt) have published research in Nature Biomedical Engineering that could drastically improve brain-computer interfaces and their ability to remain stable during use, greatly reducing or potentially eliminating the need to recalibrate these devices during or between experiments.

Brain-computer interfaces (BCIs) are devices that enable individuals with motor disabilities such as paralysis to control prosthetic limbs, computer cursors, and other interfaces using only their minds. One of the biggest problems facing BCIs in clinical settings is instability in the neural recordings themselves. Over time, the signals picked up by a BCI can vary, and as a result an individual can lose the ability to control their BCI.

As a result of this loss of control, researchers ask the user to go through a recalibration session which requires them to stop what they're doing and reset the connection between their mental commands and the tasks being performed. Typically, another human technician is involved just to get the system to work.

"Imagine if every time we wanted to use our cell phone, to get it to work correctly, we had to somehow calibrate the screen so it knew what part of the screen we were pointing at," says William Bishop, who was previously a PhD student and postdoctoral fellow in the Department of Machine Learning at CMU and is now a fellow at Janelia Farm Research Campus. "The current state of the art in BCI technology is sort of like that. Just to get these BCI devices to work, users have to do this frequent recalibration. So that's extremely inconvenient for the users, as well as the technicians maintaining the devices."

The paper, "A stabilized brain-computer interface based on neural manifold alignment," presents a machine learning algorithm that accounts for these varying signals and allows the individual to continue controlling the BCI in the presence of these instabilities. By leveraging the finding that neural population activity resides in a low-dimensional "neural manifold," the researchers can stabilize neural activity to maintain good BCI performance in the presence of recording instabilities.

"When we say 'stabilization,' what we mean is that our neural signals are unstable, possibly because we're recording from different neurons across time," explains Alan Degenhart, a postdoctoral researcher in electrical and computer engineering at CMU. "We have figured out a way to take different populations of neurons across time and use their information to essentially reveal a common picture of the computation that's going on in the brain, thereby keeping the BCI calibrated despite neural instabilities."

The researchers aren't the first to propose a method for self-recalibration; the problem of unstable neural recordings has been recognized for a long time, and a few studies have proposed self-recalibration procedures. Those procedures, however, struggle with large instabilities. The method presented in this paper can recover from catastrophic instabilities because it doesn't rely on the subject performing well during recalibration.

"Let's say that the instability were so large such that the subject were no longer able to control the BCI," explains Byron Yu, a professor of electrical and computer engineering and biomedical engineering at CMU. "Existing self-recalibration procedures are likely to struggle in that scenario, whereas in our method, we've demonstrated it can in many cases recover from those catastrophic instabilities."

"Neural recording instabilities are not well characterized, but it's a very large problem," says Emily Oby, a postdoctoral researcher in neurobiology at Pitt. "There's not a lot of literature we can point to, but anecdotally, a lot of the labs that do clinical research with BCI have to deal with this issue quite frequently. This work has the potential to greatly improve the clinical viability of BCIs, and to help stabilize other neural interfaces."

Other authors on the paper include CMU's Steve Chase, professor of biomedical engineering and the Neuroscience Institute, and Pitt's Aaron Batista, associate professor of bioengineering, and Elizabeth Tyler-Kabara, associate professor of neurological surgery. This research was funded by the Craig H Neilsen Foundation, the National Institutes of Health, DSF Charitable Foundation, National Science Foundation, PA Dept of Health Research, and the Simons Foundation.

Credit: 
College of Engineering, Carnegie Mellon University

Why relying on new technology won't save the planet

image: Putting our hopes in yet more new technologies is unwise, say researchers.

Image: 
Lancaster University

Overreliance on promises of new technology to solve climate change is enabling delay, say researchers from Lancaster University.

Their research published in Nature Climate Change calls for an end to a longstanding cycle of technological promises and reframed climate change targets.

Contemporary technological proposals for responding to climate change include nuclear fusion power, giant carbon sucking machines, ice-restoration using millions of wind-powered pumps, and spraying particulates in the stratosphere.

Researchers Duncan McLaren and Nils Markusson from Lancaster Environment Centre say: "For forty years, climate action has been delayed by technological promises. Contemporary promises are equally dangerous. Our work exposes how such promises have raised expectations of more effective policy options becoming available in the future, and thereby enabled a continued politics of prevarication and inadequate action.

"Prevarication is not necessarily intentional, but such promises can feed systemic 'moral corruption', in which current elites are enabled to pursue self-serving pathways, while passing off risk onto vulnerable people in the future and in the global South."

The article describes a history of such promises, showing how the overarching international goal of 'avoiding dangerous climate change' has been reinterpreted and differently represented in the light of new modelling methods, scenarios and technological promises.

The researchers argue that the targets, models and technologies have co-evolved in ways that enable delay: "Each novel promise not only competes with existing ideas, but also downplays any sense of urgency, enabling the repeated deferral of political deadlines for climate action and undermining societal commitment to meaningful responses."

They conclude: "Putting our hopes in yet more new technologies is unwise. Instead, cultural, social and political transformation is essential to enable widespread deployment of both behavioural and technological responses to climate change."

The researchers map the history of climate targets in five phases: "stabilization", followed by a focus on "percentage emissions reductions", shifting to "atmospheric concentrations" (expressed in parts per million), "cumulative budgets" (in tonnes of carbon dioxide), and currently "outcome temperatures".

In the first phase (around Rio, 1992), technological promises included improved energy efficiency, large-scale enhancement of carbon sinks, and nuclear power.

In the second phase (around the Kyoto summit, 1997), policy promises focused on cutting emissions through efficiency, fuel switching and carbon capture and storage (CCS).

In the third phase (around Copenhagen, 2009), CCS became linked to bioenergy, while policy focused on atmospheric concentrations.

Phase four saw the development of sophisticated global carbon budgeting models and the emergence of a range of putative negative emissions technologies.

Policy in phase five focused increasingly on temperature outcomes, formalised with the Paris accord of 2015.

Credit: 
Lancaster University

What did scientists learn from Deepwater Horizon?

image: Over a span of 87 days, the Deepwater Horizon well released an estimated 168 million gallons of oil and 45 million gallons of natural gas into the ocean, making it the largest accidental marine oil spill in history.

Image: 
(Photo by Cabell Davis, © Woods Hole Oceanographic Institution)

Ten years ago, a powerful explosion destroyed an oil rig in the Gulf of Mexico, killing 11 workers and injuring 17 others. Over a span of 87 days, the Deepwater Horizon well released an estimated 168 million gallons of oil and 45 million gallons of natural gas into the ocean, making it the largest accidental marine oil spill in history.

Researchers from Woods Hole Oceanographic Institution (WHOI) quickly mobilized to study the unprecedented oil spill, investigating its effects on the seafloor and deep-sea corals and tracking dispersants used to clean up the spill.

In a review paper published in the journal Nature Reviews Earth & Environment, WHOI marine geochemists Elizabeth Kujawinski and Christopher Reddy review what they-- and their science colleagues from around the world--have learned from studying the spill over the past decade.

"So many lessons were learned during the Deepwater Horizon disaster that it seemed appropriate and timely to consider those lessons in the context of a review," says Kujawinski. "We found that much good work had been done on oil weathering and oil degradation by microbes, with significant implications for future research and response activities."

"At the end of the day, this oil spill was a huge experiment," adds Reddy. "It shed great light on how nature responds to an uninvited guest. One of the big takeaways is that the oil doesn't just float and hang around. A huge amount of oil that didn't evaporate was pummeled by sunlight, changing its chemistry. That's something that wasn't seen before, so now we have insight into this process."

Used for the first time in a deep ocean oil spill, chemical dispersants remain one of the most controversial aspects of the response to Deepwater Horizon. Studies offer conflicting conclusions about whether dispersants released in the deep sea reduced the amount of oil that reached the ocean surface, and the results are ambiguous about whether dispersants helped microbes break down the oil at all.

"I think the biggest unknowns still center on the impact of dispersants on oil distribution in seawater and their role in promoting--or inhibiting--microbial degradation of the spilled oil," says Kujawinski, whose lab was the first to identify the chemical signature of the dispersants, making it possible to track in the marine environment.

Though the authors caution that the lessons learned from the Deepwater Horizon release may not be applicable to all spills, the review highlights advances in oil chemistry, microbiology, and technology that may be useful at other deep-sea drilling sites and shipping lanes in the Arctic. The authors call on the research community to work collaboratively to understand the complex environmental responses at play in cold climates, where the characteristics of oil are significantly different from the Gulf of Mexico.

"Now we have a better sense of what we need to know," Kujawinski says. "Understanding what these environments look like in their natural state is really critical to understanding the impact of oil spill conditions."

Credit: 
Woods Hole Oceanographic Institution

Picking up threads of cotton genomics

image: In the United States, 95 percent of the cotton grown is Gossypium hirsutum, known as Upland cotton. This image complements a news release from the DOE Joint Genome Institute regarding a Nature Genetics paper published April 20, 2020 reporting that a multi-institutional team has now sequenced and assembled the genomes of the five major cotton lineages. The genomes are available on JGI's plant data portal Phytozome.

Image: 
Cotton Inc.

Come harvest time, cotton fields look as though popcorn were growing on the plants, with fluffy white bolls bursting out of the green pods in every direction. There are 100 million families around the world whose livelihoods depend on cotton production, and the crop's annual economic impact of $500 billion worldwide underscores its value and importance in the fabric of our lives.

In the United States, cotton production centers on two varieties: 95 percent of what is grown is known as Upland cotton (Gossypium hirsutum), while the remaining 5 percent is called American Pima (G. barbadense). These are two of the five major lineages of cotton; G. tomentosum, G. mustelinum, and G. darwinii are the others. All of these cotton lineages have genomes approximately 2.3 billion bases, or gigabases (Gb), in size, and all are hybrids combining the cotton A and cotton D genomes.

A multi-institutional team including researchers at the U.S. Department of Energy (DOE) Joint Genome Institute (JGI), a DOE Office of Science User Facility located at Lawrence Berkeley National Laboratory (Berkeley Lab), has now sequenced and assembled the genomes of these five cotton lineages. Senior authors of the paper, published April 20, 2020 in Nature Genetics, include Jane Grimwood and Jeremy Schmutz of JGI's Plant Program, both faculty investigators at the HudsonAlpha Institute for Biotechnology.

"The goal has been for all this new cotton work, and even the original cotton project was to try to bring in molecular methods of breeding into cotton," said Schmutz, who heads JGI's Plant Program. He and Grimwood were also part of the JGI team that contributed to the multinational consortium of researchers that sequenced and assembled the simplest cotton genome (G. raimondii) several years ago. Studying the cotton genomes provides breeders with insights on crop improvements at a genetic level, including why having multiple copies of their genomes (polyploidy) is so important to crops. Additionally, cotton is almost entirely made up of cellulose and it is a fiber model to understand the molecular development of cellulose.

Cotton Genomes on Phytozome

The genomes of all five cotton lineages and of cotton D are available for comparative analysis on JGI's plant data portal Phytozome, a community repository and resource for plant genomes. They are annotated with the JGI Plant Annotation pipeline, which provides high-quality comparisons of these genomes among themselves and with other plant genomes.

"Globally, cotton is the premier natural fiber crop of the world, a major oilseed crop, and important cattle feed crop," noted David Stelly, another study co-author at Texas A&M University. "This report establishes new opportunities in multiple basic and applied scientific disciplines that relate directly and indirectly to genetic diversity, evolution, wild germplasm utilization and increasing the efficacy with which we use natural resources for provisioning society."

The comparative analysis of the five cotton genomes identified unique genes related to fiber and seed traits in the domesticated G. barbadense and G. hirsutum species. Unique genes were also identified in the other three wild species. "We thought, 'In all of these wild tetraploids, there will be many disease resistance genes that we can make use of,'" Schmutz said. "But it turns out there isn't really that kind of diversity in the wild in cotton. And this is amazing to me for a species that was so widely distributed."

In the field, growers can easily distinguish the cotton species by traits such as flower color, plant height, or fiber yield. To the team's surprise, even though the major cotton lineages had dispersed and diversified over a million years ago, their genomes were "remarkably" stable. "We thought we were sequencing the same genome multiple times," Schmutz recalled. "We were a little confused because they were so genetically similar."

Benefits of High Impact Science

"The results described in this Nature Genetics publication will facilitate deeper understanding of cotton biology and lead to higher yield and improved fiber while reducing input costs. Growers, the textile industry, and consumers will derive benefit from this high impact science for years to come," said Don Jones, who handles variety improvement for Cotton Incorporated, the research and marketing company representing upland cotton funded by U.S. growers of upland cotton and importers of cotton and cotton textile products, often referred to as the dirt-to-shirt value chain.

Assembling cotton's large and complex genome means being selective in choosing which team to financially support, Jones added. "We must be careful who we ask to take on these projects due to their difficulty and complexity, but we have been extremely pleased with Jeremy, Jane and their team. Many groups assemble genomes, but very few do it so well that it stands the test of time and is considered the gold standard by the world cotton community. This is one such example."

Jones noted that he talks to growers about Cotton Inc.'s long-term investment in crop research. "What I have told our growers is, 'Think of these reference genomes as a surgeon's knowledge, and of gene editing as a new tool. In order to know exactly where to use your incredibly precise tool, you have to know where to use it, which exact base or series of bases you have to alter.' Why should we invest in something that may not be an immediate benefit to us for a decade? We believe this basic research has to occur in order to drive the research. Oftentimes, these things take not five or eight years, but sometimes 10 or 15 years, because the technology develops over time."

Credit: 
DOE/Lawrence Berkeley National Laboratory

KIST develops low-price, high-efficiency catalyst that converts CO2 into chemicals

image: The research team at KIST used various in-situ/operando analytical techniques, including X-ray absorption spectroscopy and inductively coupled plasma (ICP) analysis, to design and characterize the core-shell catalyst.

Image: 
Korea Institute of Science and Technology (KIST)

The Korea Institute of Science and Technology (KIST, Acting President: Yoon Seok-jin) announced that a research team at its Clean Energy Research Center, led by Dr. Oh Hyung-Suk and Dr. Lee Woong-hee, has developed a technology that reduces the use of precious metal catalysts at the electrodes where oxygen is produced. Dependence on precious metal catalysts is one of the problems hindering the practical application of artificial photosynthesis technology.

Artificial photosynthesis technology artificially recreates the process seen in plants, in which water, sunlight, and carbon dioxide (CO2) are converted into hydrocarbons and oxygen, with chlorophyll serving as the catalyst. The technology has been receiving a lot of attention because it can produce clean energy and value-added chemicals while absorbing carbon dioxide.

For this technology to be commercialized, the efficiency of the catalyst, the role chlorophyll plays in plants, must be improved and the associated costs must be reduced. Of the electrochemical catalysts studied thus far, iridium-based catalysts have been found to be among the most stable and highest-performing, and they are widely regarded as some of the best oxygen-producing catalysts. However, iridium is expensive, and its reserves and production volume are quite limited. Recently, much research has focused on reducing the amount of iridium used while improving catalyst performance.

One of the most effective ways to reduce the use of iridium is to fabricate a nanoscale iridium alloy catalyst using a low-cost metal. The joint research team from KIST and the Technical University of Berlin (TU Berlin) developed a core-shell nanocatalyst with an iridium oxide shell, using iridium-cobalt alloy nanoparticles to cut the amount of iridium required.

The research team at KIST used various in-situ/operando analytical techniques to design an effective catalyst. Using in-situ/operando X-ray absorption spectroscopy, they found that the catalyst's core-shell structure delivered high performance because of the short distance between the iridium and oxygen in the catalyst. They further examined the catalyst using an in-situ/operando inductively coupled plasma (ICP) analytical technique and found that it had high durability, owing to the relatively small loss of catalyst material. Significantly, these results were obtained during actual catalytic reactions. The results of these analyses will continue to inform the design of various catalysts.

The catalyst developed by KIST's research team uses 20% less iridium, a precious metal, than existing catalysts and shows at least 31% higher performance. A long-term test using tap water was performed to verify the practical feasibility of the catalyst. When tested, the catalyst maintained a high level of performance for hundreds of hours, indicating high durability.
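Taken together, those two figures imply a substantial jump in activity per unit of iridium (our arithmetic from the numbers above):

# Activity per unit of iridium implied by the quoted figures (our arithmetic).
iridium_used = 0.80       # 20% less iridium than existing catalysts
performance = 1.31        # at least 31% higher performance

activity_per_iridium = performance / iridium_used
print(f"~{activity_per_iridium:.2f}x the performance per unit of iridium")
# -> ~1.64x, before counting the durability gains reported in long-term tests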

When the developed catalyst was applied to the actual carbon dioxide conversion system, the energy required during the process was reduced by more than half. This resulted in more than twice the amount of compounds typically produced at the same voltage using other iridium oxide catalysts.

"We used an iridium-cobalt alloy core and a core-shell nanocatalyst with an iridium oxide shell to considerably improve the performance of the oxygen evolution reaction and durability, which were the problems previously associated with the electrochemical CO2 conversion system," said KIST's Dr. Oh Hyung-Suk, who led the research. "I expect that this research will contribute greatly to the practicability of the electrochemical CO2 conversion system as it can be applied to water electrolysis systems for hydrogen production as well as various other electrolysis systems."

Credit: 
National Research Council of Science & Technology