
Moffitt researchers demonstrate tissue architecture regulates tumor evolution

image: Darwinian Evolution

Image: 
Moffitt Cancer Center

TAMPA, Fla. -- Tumors are genetically diverse with different mutations arising at different times throughout growth and development. Many models have tried to explain how genetic heterogeneity arises and what impact these alterations have on tumor growth. In a new article published in Nature Communications, Moffitt Cancer Center researchers show how the location of the tumor and spatial constraints put on it by the surrounding tissue architecture impact genetic heterogeneity of tumors.

Genetic differences are apparent among tumors from different patients, as well as within different regions of the same tumor of an individual patient. Some of these mutations may benefit the tumor and become selected for, such as mutations that allow the tumor to grow faster and spread to other sites. This type of tumor evolution is known as Darwinian evolution. Alternatively, other cellular mutations may have no immediate impact on the tumor but still accumulate over time, known as neutral evolution. Researchers in Moffitt's Center of Excellence for Evolutionary Therapy wanted to determine how the surrounding tissue architecture impacts these different types of tumor evolution patterns and genetic heterogeneity.

The team used mathematical modeling to determine how spatial constraints impact tumor evolution, with a focus on the three-dimensional architecture of ductal carcinoma of the breast. They used a well-studied model of tumor evolution, altered variables related to spatial constraints and cell mixing, and demonstrated that the surrounding tissue architecture greatly affects the genetic heterogeneity of tumors over time. For example, the ductal network of breast tissue is similar to the trunk and branches of a tree. A tumor that forms within the wider region at the base of the duct has fewer spatial constraints placed on it than a tumor that initially forms in the smaller ductal branches. As a result, a tumor near the base tends to acquire mutations over an extended time and will have more genetic heterogeneity due to neutral evolution. On the other hand, a tumor within the smaller ductal regions tends to undergo accelerated genetic changes that result in one mutation becoming dominant due to Darwinian evolution, also known as a clonal sweep.
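
To make the mechanism concrete, here is a minimal, hypothetical sketch in Python (not the Moffitt group's actual model): when every cell division must displace an existing cell (a tight spatial constraint), fitter clones tend to take over and diversity is pruned, whereas growth into free space lets every newly founded clone persist side by side. All parameters are illustrative.

```python
# Toy clonal-growth simulation: constrained (competitive replacement) vs.
# unconstrained (free expansion) growth.  Not the published model; illustrative only.
import random

def grow_tumor(constrained, divisions=20000, capacity=300,
               mutation_rate=0.01, advantage=0.3, seed=7):
    rng = random.Random(seed)
    counts = {0: 1}          # number of cells per clone; clone 0 is the founder
    fitness = {0: 1.0}       # relative fitness per clone
    next_label = 1
    for _ in range(divisions):
        clones = list(counts)
        # choose the dividing cell's clone, weighted by clone size x fitness
        parent = rng.choices(clones,
                             weights=[counts[c] * fitness[c] for c in clones])[0]
        daughter = parent
        if rng.random() < mutation_rate:               # daughter founds a new clone
            fitness[next_label] = fitness[parent] * (1 + advantage)
            daughter = next_label
            next_label += 1
        counts[daughter] = counts.get(daughter, 0) + 1
        if constrained and sum(counts.values()) > capacity:
            # no free space: a randomly chosen existing cell is displaced
            victim = rng.choices(clones, weights=[counts[c] for c in clones])[0]
            counts[victim] -= 1
            if counts[victim] == 0:
                del counts[victim]
    total = sum(counts.values())
    dominant = max(counts.values()) / total
    return len(counts), dominant

for constrained in (False, True):
    n_clones, dominant = grow_tumor(constrained)
    print(f"constrained={constrained}: {n_clones} coexisting clones, "
          f"largest clone holds {dominant:.0%} of cells")
```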

"Two otherwise identical tumors may realize dramatic differences in ?tness depending on constraints imposed by tissue architecture," said Sandy Anderson, Ph.D., study author and director of the Center of Excellence for Evolutionary Therapy at Moffitt. "On the cell scale, any given subclone may have a selective advantage. Yet, the effective outcome of this subclonal advantage depends on the surrounding competitive context of that cell. In other words, cell-speci?c phenotypic behavior can be 'overridden' by the tissue architecture, allowing the tumor to realize increased ?tness."

The study sheds light on the important role that tumor location has in the development and progression of cancer and helps explain the wide variety of mutational patterns observed among different patients.

"Our approach adds clarity to the debate of neutral tumor evolution by exploring a key mechanism behind both interpatient and intratumoral tumor heterogeneity: competition for space," said Jeffrey West, Ph.D., study co-author and postdoctoral fellow in the Integrated Mathematical Oncology Department at Moffitt.

Credit: 
H. Lee Moffitt Cancer Center & Research Institute

Hubble spots double quasars in merging galaxies

image: This artist's conception shows the brilliant light of two quasars residing in the cores of two galaxies that are in the chaotic process of merging. The gravitational tug-of-war between the two galaxies stretches them, forming long tidal tails and igniting a firestorm of starbirth. Quasars are brilliant beacons of intense light from the centers of distant galaxies. They are powered by supermassive black holes voraciously feeding on infalling matter. This feeding frenzy unleashes a torrent of radiation that can outshine the collective light of billions of stars in the host galaxy. In a few tens of millions of years, the black holes and their galaxies will merge, and so will the quasar pair, forming an even more massive black hole. A similar sequence of events will happen a few billion years from now when our Milky Way galaxy merges with the neighboring Andromeda galaxy.

Image: 
NASA, ESA, and J. Olmsted (STScI)

NASA's Hubble Space Telescope is "seeing double." Peering back 10 billion years into the universe's past, Hubble astronomers found a pair of quasars that are so close to each other they look like a single object in ground-based telescopic photos, but not in Hubble's crisp view.

The researchers believe the quasars are very close to each other because they reside in the cores of two merging galaxies. The team went on to win the "daily double" by finding yet another quasar pair in another colliding galaxy duo.

A quasar is a brilliant beacon of intense light from the center of a distant galaxy that can outshine the entire galaxy. It is powered by a supermassive black hole voraciously feeding on infalling matter, unleashing a torrent of radiation.

"We estimate that in the distant universe, for every 1,000 quasars, there is one double quasar. So finding these double quasars is like finding a needle in a haystack," said lead researcher Yue Shen of the University of Illinois at Urbana-Champaign.

The discovery of these four quasars offers a new way to probe collisions among galaxies and the merging of supermassive black holes in the early universe, researchers say.

Quasars are scattered all across the sky and were most abundant 10 billion years ago. There were a lot of galaxy mergers back then feeding the black holes. Therefore, astronomers theorize there should have been many dual quasars during that time.

"This truly is the first sample of dual quasars at the peak epoch of galaxy formation with which we can use to probe ideas about how supermassive black holes come together to eventually form a binary," said research team member Nadia Zakamska of Johns Hopkins University in Baltimore, Maryland.

The team's results appeared in the April 1 online issue of the journal Nature Astronomy.

Shen and Zakamska are members of a team that is using Hubble, the European Space Agency's Gaia space observatory, and the Sloan Digital Sky Survey, as well as several ground-based telescopes, to compile a robust census of quasar pairs in the early universe.

The observations are important because a quasar's role in galactic encounters plays a critical part in galaxy formation, the researchers say. As two close galaxies begin to distort each other gravitationally, their interaction funnels material into their respective black holes, igniting their quasars.

Over time, radiation from these high-intensity "light bulbs" launches powerful galactic winds, which sweep out most of the gas from the merging galaxies. Deprived of gas, star formation ceases, and the galaxies evolve into elliptical galaxies.

"Quasars make a profound impact on galaxy formation in the universe," Zakamska said. "Finding dual quasars at this early epoch is important because we can now test our long-standing ideas of how black holes and their host galaxies evolve together."

Astronomers have discovered more than 100 double quasars in merging galaxies so far. However, none of them is as old as the two double quasars in this study.

The Hubble images show that quasars within each pair are only about 10,000 light-years apart. By comparison, our Sun is 26,000 light-years from the supermassive black hole in the center of our galaxy.

The pairs of host galaxies will eventually merge, and then the quasars also will coalesce, resulting in a single, even more massive black hole.

Finding them wasn't easy. Hubble is the only telescope with vision sharp enough to peer back to the early universe and distinguish two close quasars that are so far away from Earth. However, Hubble's sharp resolution alone isn't good enough to find these dual light beacons.

Astronomers first needed to figure out where to point Hubble to study them. The challenge is that the sky is blanketed with a tapestry of ancient quasars that flared to life 10 billion years ago, only a tiny fraction of which are dual. It took an imaginative and innovative technique that required the help of the European Space Agency's Gaia satellite and the ground-based Sloan Digital Sky Survey to compile a group of potential candidates for Hubble to observe.

Located at Apache Point Observatory in New Mexico, the Sloan telescope produces three-dimensional maps of objects throughout the sky. The team pored through the Sloan survey to identify the quasars to study more closely.

The researchers then enlisted the Gaia observatory to help pinpoint potential double-quasar candidates. Gaia measures the positions, distances, and motions of nearby celestial objects very precisely. But the team devised a new, innovative application for Gaia that could be used for exploring the distant universe. They used the observatory's database to search for quasars that mimic the apparent motion of nearby stars. The quasars appear as single objects in the Gaia data. However, Gaia can pick up a subtle, unexpected "jiggle" in the apparent position of some of the quasars it observes.

The quasars aren't moving through space in any measurable way, but instead their jiggle could be evidence of random fluctuations of light as each member of the quasar pair varies in brightness. Quasars flicker in brightness on timescales of days to months, depending on their black hole's feeding schedule.

This alternating brightness between the quasar pair is similar to seeing a railroad crossing signal from a distance. As the lights on both sides of the stationary signal alternately flash, the signal gives the illusion of "jiggling."
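
As a rough illustration of why varying brightness shows up as apparent motion, here is a toy calculation (not the team's actual Gaia pipeline): an unresolved pair's photocenter is the flux-weighted average of the two quasars' positions, so independent flickering drags that photocenter back and forth. The separation and flux values below are invented.

```python
# Toy photocenter "jiggle" of an unresolved quasar pair; all numbers illustrative.
import random

separation_mas = 400.0     # assumed projected separation in milliarcseconds
rng = random.Random(0)

def photocenter(flux_a, flux_b):
    """Flux-weighted position along the pair's axis (quasar A at 0, B at separation_mas)."""
    return separation_mas * flux_b / (flux_a + flux_b)

positions = []
for epoch in range(12):                        # a dozen observing epochs
    flux_a = 1.0 * rng.uniform(0.6, 1.4)       # each quasar flickers independently
    flux_b = 0.8 * rng.uniform(0.6, 1.4)
    positions.append(photocenter(flux_a, flux_b))

jitter = max(positions) - min(positions)
print(f"apparent photocenter wander: {jitter:.0f} mas across {len(positions)} epochs")
```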

When the first four targets were observed with Hubble, its crisp vision revealed that two of the targets are two close pairs of quasars. The researchers said it was a "light bulb moment" that verified their plan of using Sloan, Gaia, and Hubble to hunt for the ancient, elusive double powerhouses.

Team member Xin Liu of the University of Illinois at Urbana-Champaign called the Hubble confirmation a "happy surprise." She has long hunted for double quasars closer to Earth using different techniques with ground-based telescopes. "The new technique can not only discover dual quasars much further away, but it is much more efficient than the methods we've used before," she said.

Their Nature Astronomy article is a "proof of concept that really demonstrates that our targeted search for dual quasars is very efficient," said team member Hsiang-Chih Hwang, a graduate student at Johns Hopkins University and the principal investigator of the Hubble program. "It opens a new direction where we can accumulate a lot more interesting systems to follow up, which astronomers weren't able to do with previous techniques or datasets."

The team also obtained follow-up observations with the National Science Foundation NOIRLab's Gemini telescopes. "Gemini's spatially-resolved spectroscopy can unambiguously reject interlopers due to chance superpositions from unassociated star-quasar systems, where the foreground star is coincidentally aligned with the background quasar," said team member Yu-Ching Chen, a graduate student at the University of Illinois at Urbana-Champaign.

Although the team is convinced of their result, they say there is a slight chance that the Hubble snapshots captured double images of the same quasar, an illusion caused by gravitational lensing. This phenomenon occurs when the gravity of a massive foreground galaxy splits and amplifies the light from the background quasar into two mirror images. However, the researchers think this scenario is highly unlikely because Hubble did not detect any foreground galaxies near the two quasar pairs.

Galactic mergers were more plentiful billions of years ago, but a few are still happening today. One example is NGC 6240, a nearby system of merging galaxies that has two and possibly even three supermassive black holes. An even closer galactic merger will occur in a few billion years when our Milky Way galaxy collides with the neighboring Andromeda galaxy. The galactic tussle would likely feed the supermassive black holes in the core of each galaxy, igniting them as quasars.

Future telescopes may offer more insight into these merging systems. NASA's James Webb Space Telescope, an infrared observatory scheduled to launch later this year, will probe the quasars' host galaxies. Webb will show the signatures of galactic mergers, such as the distribution of starlight and the long streamers of gas pulled from the interacting galaxies.

Credit: 
NASA/Goddard Space Flight Center

Breast cancer survivors' fear of cancer returning linked to genomic testing, psychological factors

Breast cancer survivors with a higher risk of cancer recurrence based on genomic testing may experience greater fear of their cancer returning, according to a new study led by researchers at NYU Rory Meyers College of Nursing. However, psychological factors such as anxiety are the best predictors of survivors' fear of their cancer recurring.

"Although genomic test results were associated with fear of cancer recurrence, our findings highlight that distressing, but treatable, psychological factors fuel cancer survivors' fear of recurrence," said Maurade Gormley, PhD, RN, an assistant professor and faculty fellow at NYU Meyers and the lead author of the study, which was published in the journal Psycho-Oncology.

For breast cancer survivors, fear and worry that their cancer will return is a significant, unmet psychological need. Over half of breast cancer survivors experience moderate to severe levels of fear of cancer recurrence; this grows to up to 70 percent among younger breast cancer survivors.

The Oncotype Dx® test is a genomic test that is used to predict the likelihood of cancer recurring in women with early-stage hormone receptor-positive breast cancer, the most common form of breast cancer. It analyzes breast cancer cells after surgery or biopsy to predict the 10-year risk of recurrence, creating a "recurrence score" that is divided into three risk categories: low, intermediate, and high. The genomic test can also be used to plan breast cancer treatment, including whether a patient will benefit from chemotherapy in addition to hormone therapy.

"We wanted to address the question of whether women with a history of breast cancer have greater fear of recurrence when they are told they are at high risk from genomic testing," said Gormley.

In the study, Gormley and her colleagues studied 110 breast cancer survivors to explore associations between the genomic test's recurrence score and its relationship to a range of factors: fear of cancer recurrence, distress, anxiety, depression, health-related quality of life, including pain and fatigue, and perceived risk of cancer recurring and spreading. They also measured women's beliefs about their illness, including their emotional response to it, perceived consequences of cancer on their lives, whether they believe they have control over their illness, and whether they perceive their cancer to be chronic.

The researchers found that breast cancer survivors with high recurrence scores reported higher overall fear and greater perceived consequences of their cancer compared to those with low recurrence scores. A greater fear of cancer recurrence was associated with higher distress, anxiety, depression, lower quality of life, and certain beliefs about their cancer, including worse perceived consequences and greater emotional response to illness.

The duration of time since breast cancer diagnosis was not associated with fear of cancer recurrence or perceived risk. However, younger women had greater fear of recurrence and worse psychosocial outcomes.

Further analyses revealed that the best predictors of whether someone was at high risk for fear of cancer recurrence were actually modifiable factors--anxiety, greater emotional response to cancer, and perceived consequences of illness--and not unchangeable factors like genomic test results and age. The modifiable, psychological factors explained 58 percent of the variance in fear of cancer recurrence.
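
For readers unfamiliar with the statistic, "explained 58 percent of the variance" typically refers to the R-squared (or change in R-squared) of a regression model. The sketch below, with invented data and hypothetical variable names, shows the general form of such an analysis; the study's actual models and covariates may differ.

```python
# Hypothetical sketch of a variance-explained (R^2) calculation; data are simulated.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 110                                        # sample size matching the study
anxiety = rng.normal(size=n)
emotion = rng.normal(size=n)                   # emotional response to illness
conseq = rng.normal(size=n)                    # perceived consequences of illness
fear = 0.5 * anxiety + 0.4 * emotion + 0.3 * conseq + rng.normal(scale=0.8, size=n)

X = np.column_stack([anxiety, emotion, conseq])
model = LinearRegression().fit(X, fear)
print(f"variance in fear of recurrence explained (R^2): {model.score(X, fear):.2f}")
```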

"These findings are important because they illustrate that an individual's understanding of and response to their illness may explain who is at greatest risk for developing fear of cancer recurrence," said Gormley. "This could pave the way for developing targeted support--for instance, mental health interventions like cognitive behavioral skills--to address maladaptive beliefs about illness that occur among many breast cancer survivors."

Credit: 
New York University

Competing for high status speeds up aging in male baboons

image: Male baboons in Amboseli National Park, Kenya, engage in physical competition for high rank, demonstrating the potential costs of attaining high status

Image: 
Beth Archie (CC BY 4.0)

Battling other male baboons to achieve high social status comes with physiological costs that accelerate aging, according to a study published today in eLife.

The findings suggest that current life circumstances may be more important contributors to premature aging than early life hardship, at least in baboons.

Chemical changes to DNA, also called epigenetic changes, can be used as a kind of 'clock' to measure aging. While these epigenetic changes usually correspond with age, they can also be used to detect signs of premature aging.

"Environmental stressors can make the clock tick faster, so that some individuals appear biologically older than their actual age and experience a higher risk of age-related disease," explains co-first author Jordan Anderson, a PhD student in Evolutionary Anthropology at Duke University, Durham, North Carolina, US. "We sought to answer what social or early life experiences contribute to accelerated aging in baboons."

The team measured aging in 245 wild baboons from a well-studied population in Kenya using the epigenetic clock and other methods. They found that the epigenetic clock was a good predictor of chronological age overall. But contrary to what they expected, early life adversity was not a good predictor of accelerated aging in the animals.
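
The following is a simplified, hypothetical sketch of how epigenetic clocks of this kind are commonly built and how "age acceleration" is scored; the Duke team's actual pipeline and data differ, and the numbers below are invented.

```python
# Hypothetical epigenetic-clock sketch: regress chronological age on DNA-methylation
# levels with a penalized model; the residual (predicted minus true age) is the
# individual's "age acceleration".  All data here are simulated.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_baboons, n_cpg_sites = 245, 1000              # sample size from the article; site count invented
methylation = rng.uniform(0, 1, size=(n_baboons, n_cpg_sites))
true_age = rng.uniform(2, 20, size=n_baboons)   # years, invented
# let a subset of sites drift with age so the clock has signal to find
methylation[:, :50] += 0.02 * true_age[:, None]

clock = ElasticNetCV(cv=5, random_state=0)
predicted_age = cross_val_predict(clock, methylation, true_age, cv=5)
age_acceleration = predicted_age - true_age      # > 0: biologically "older" than actual age

print("correlation of clock with age:", np.corrcoef(predicted_age, true_age)[0, 1].round(2))
print("most accelerated individual:", age_acceleration.argmax())
```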

Instead, they found that the highest-ranking males showed signs of accelerated aging. Higher body mass index, which is associated with having more lean muscle mass in baboons, was also associated with accelerated aging, likely because of the physical demands of maintaining high status. The team was also able to show that the epigenetic clock sped up as the animals climbed the social ladder and slowed down as they moved down it.

"Our results argue that achieving high rank for male baboons - the best predictor of reproductive success in these animals - imposes costs that are consistent with a 'live fast, die young,' life history strategy," says co-first author Rachel Johnston, Postdoctoral Associate in Evolutionary Anthropology at Duke University.

"While the findings reveal how social pressures can influence aging for males, we don't see the same effect of rank in female baboons, who are born into their social rank rather than having to fight for it," adds senior author Jenny Tung, Associate Professor in the Departments of Evolutionary Anthropology and Biology at Duke University, and a Faculty Associate of the Duke University Population Research Institute.

"Our results have important implications for research on the social determinants of health in humans and other animals because they show that 'high status' can mean very different things in different contexts. They also highlight the importance of examining the effects of both early life and current life environments on biological aging," Tung concludes.

Credit: 
eLife

Houston flooding polluted reefs more than 100 miles offshore

image: Rice University marine biologists (from left) Lauren Howe-Kerr, Amanda Shore and Adrienne Correa prepare for a research dive at the Flower Garden Banks National Marine Sanctuary in October 2018.

Image: 
Photo by Carsten Grupstra/Rice University

HOUSTON - (April 6, 2021) - Runoff from Houston's 2016 Tax Day flood and 2017's Hurricane Harvey flood carried human waste onto coral reefs more than 100 miles offshore in the Flower Garden Banks National Marine Sanctuary, according to a Rice University study.

"We were pretty shocked," said marine biologist Adrienne Correa, co-author of the study in Frontiers in Marine Science. "One thing we always thought the Flower Garden Banks were safe from was terrestrial runoff and nutrient pollution. It's a jolt to realize that in these extreme events, it's not just the salt marsh or the seagrass that we need to worry about. Offshore ecosystems can be affected too."

The Flower Garden Banks sit atop several salt domes near the edge of the continental shelf about 100 miles from the Texas and Louisiana coast. Rising several hundred feet from the seafloor, the domes are topped with corals, algae, sponges and fish. Each bank, or dome-topped ecosystem, is separated by miles of open ocean. The Flower Garden Banks National Marine Sanctuary, which was recently expanded, protects 17 banks.

Correa and colleagues sampled sponges at the sanctuary in 2016, 2017 and 2018. They showed samples collected after extreme storm flooding in 2016 and 2017 contained E. coli and other human fecal bacteria. They also used a catalog of E. coli genetic markers contributed by Rice environmental engineer and co-author Lauren Stadler to show that E. coli on sponges in 2017 came from Harvey floodwaters.

Lead author Amanda Shore, who conducted the research while a Rice Academy Postdoctoral Fellow in Correa's lab, said many studies have shown nearshore reefs can be harmed by pollutants that are washed into the ocean by rainfall over land. But marine biologists generally assume ecosystems far from shore are safe from such dangers.

"This shows perhaps they aren't protected from severe events," said Shore, an assistant professor of biology at Farmingdale State College in New York. "And these events are increasing in frequency and intensity with climate change."

Correa said, "That's the other piece of this. There actually was a massive flooding event in 2015 with the Memorial Day flood. Dips in salinity after that event were detected at surface buoys offshore, but nobody looked or sampled out at the Flower Garden Banks. Nobody imagined you would see something like this 160 kilometers out."

In April 2016, widespread flooding occurred in the Houston area when a severe storm dropped more than 17 inches of rain in some places in less than 24 hours. Three months after the flood, recreational divers reported murky waters and dead and dying organisms at East Flower Garden Bank. Marine biologists, including study co-author Sarah Davies of Boston University, arrived two weeks later to investigate.

Shore and co-authors Carsten Grupstra, a Rice graduate student, and Jordan Sims, a Rice undergraduate, analyzed samples from the expedition, including tissue collected from sponges. Shore said sponges are indicators of water quality because they "are basically filtering seawater to catch organic material to use as food."

She said previous studies have shown sponges have a microbiome, a population of bacteria that normally live in and on these animals. In this study, Shore characterized the microbiomes on two species: giant barrel sponges, or Xestospongia muta, and orange elephant ear sponges, or Agelas clathrodes. It was the first time the species' microbiomes had been assayed at Flower Garden Banks, and Correa said that was one reason it took so long to understand what happened in the flood years.

Correa said, "In 2016, we saw differences between sponge bacteria at a location that showed signs of death and a location that didn't show signs of death, but we couldn't get at the cause of the differences because we had no baseline data. We thought we'd be able to get the baseline data -- the normal year -- the next year in 2017. But then there was another disaster. We couldn't get a normal sample in a no-flood year until 2018."

Shore joined Correa's lab in 2018, helped collect samples that year and analyzed the microbiomes from each year.

Correa said, "There was a big change in community composition, a shift of the team players, on the sponges that were most affected in 2016. Then, following Harvey in 2017 there was also a shift, but less water made it out there that year, and we think it was less stressful. We didn't see dead and dying organisms like we had the previous year."

Harvey, the most intense rainfall event in U.S. history, dropped an estimated 13 trillion gallons of rain over southeast Texas in late August 2017. The researchers said Harvey posed a greater potential threat to the Flower Garden Banks, by far, than the 2016 flood. So why did reefs fare better in 2017?

"Because we got lucky with ocean currents," Shore said. "Instead of going straight out from Galveston Bay and over the Flower Garden Banks, the water ended up turning a bit and going down the Texas coast instead."

Harvey's runoff still sideswiped the banks. Research buoys at the reefs measured a 10% drop in salinity in less than a day on Sept. 28, and Correa's team found genetic evidence that fecal pollution gathered from the banks in October originated in Harvey floodwaters in Houston.

Correa said the story in 2016 was more complicated.

"There was an upwelling event that brought nutrients and cooler waters up from the deep to the top part of the Flower Garden Banks," she said. "Fresh water is less dense than salt water, and we think the floodwaters came at the surface and sort of sat there like a lens on top of the salt water and kept oxygen from mixing in from the top. The combination of this surface event and the nutrients coming up from the bottom contributed to a bacterial bloom that drew down so much oxygen that things just asphyxiated."

The big question is whether pollution from extreme storms poses a long-term threat to the Flower Garden Banks. Correa said the answer could come from an investment in research that follows the health and microbiomes of individual sponges and corals on the reef over time. She said her group at Rice and her collaborators are committed to learning as much as they can about the reefs, and they are determined to support efforts to conserve and protect them.

Credit: 
Rice University

Spin defects under control

image: Schematic representation of the coherent control of a spin defect (red) in an atomic layer of boron nitride. Boron nitride consists of boron (yellow spheres) and nitrogen (blue spheres) and lies on a stripline. The spin defect is excited by a laser and its state is read out via photoluminescence. The qubit can be manipulated both by microwave pulses (light blue) of the stripline and also by a magnetic field.

Image: 
(Image: Andreas Gottscholl / University of Wuerzburg)

Boron nitride is a technologically interesting material because it is very compatible with other two-dimensional crystalline structures. It therefore opens up pathways to artificial heterostructures, and to electronic devices built on them, with fundamentally new properties.

About a year ago, a team from the Institute of Physics at Julius-Maximilians-Universität (JMU) Wuerzburg in Bavaria, Germany, succeeded in creating spin defects, also known as qubits, in a layered crystal of boron nitride and identifying them experimentally.

Recently, the team led by Professor Vladimir Dyakonov, his PhD student Andreas Gottscholl and group leader PD Dr. Andreas Sperlich succeeded in taking an important next step: the coherent control of such spin defects, even at room temperature. The researchers report their findings in the journal Science Advances. Despite the pandemic, the work was carried out in an intensive international collaboration with groups from the University of Technology Sydney in Australia and Trent University in Canada.

Measuring local electromagnetic fields even more precisely

"We expect that materials with controllable spin defects will allow more precise measurements of local electromagnetic fields once they are used in a sensor", explains Vladimir Dyakonov, "and this is because they are, by definition, at the border to the surrounding world, which needs to be mapped. Conceivable areas of application are imaging in medicine, navigation, everywhere where contactless measurement of electromagnetic fields is necessary, or in information technology.

"The research community's search for the best material for this is not yet complete, but there are several potential candidates," adds Andreas Sperlich. "We believe we found a new candidate that stands out because of its flat geometry, which offers the best integration possibilities in electronics."

Limits of spin coherence times cleverly overcome

All spin-sensitive experiments with the boron nitride were carried out at JMU. "We were able to measure the characteristic spin coherence times, determine their limits and even cleverly overcome these limits," says a delighted Andreas Gottscholl, PhD student and first author of the publication. Knowledge of spin coherence times is necessary to estimate the potential of spin defects for quantum applications, and long coherence times are highly desirable as one eventually wants to perform complex manipulations.

Gottscholl explains the principle in simplified terms: "Imagine a gyroscope that rotates around its axis. We have succeeded in proving that such mini gyroscopes exist in a layer of boron nitride. And now we have shown how to control the gyroscope, i.e., for example, to deflect it by any angle without even touching it, and above all, to control this state."

Coherence time reacts sensitively to neighboring atomic layers

The contactless manipulation of the "gyroscope" (the spin state) was achieved through the pulsed high-frequency electromagnetic field, the resonant microwaves. The JMU researchers were also able to determine how long the "gyroscope" maintains its new orientation. Strictly speaking, the deflection angle should be seen here as a simplified illustration of the fact that a qubit can assume many different states, not just 0 and 1 like a bit.

What does this have to do with sensor technology? The direct atomic environment in a crystal influences the manipulated spin state and can greatly shorten its coherence time. "We were able to show how extremely sensitive the coherence reacts to the distance to the nearest atoms and atomic nuclei, to magnetic impurities, to temperature and to magnetic fields - so the environment of the qubit can be deduced from the measurement of the coherence time," explains Andreas Sperlich.
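
As a generic illustration of how a coherence time is usually extracted from such measurements, the sketch below fits an exponential decay to a simulated spin signal recorded at increasing pulse delays; it is not the Wuerzburg group's analysis code, and all numbers are invented.

```python
# Hypothetical coherence-time extraction: fit an exponential decay to simulated data.
import numpy as np
from scipy.optimize import curve_fit

true_T2_us = 2.0                                   # "true" coherence time in microseconds, invented
delays_us = np.linspace(0.1, 10, 40)               # delay after the microwave pulse sequence
signal = (np.exp(-delays_us / true_T2_us)
          + np.random.default_rng(0).normal(0, 0.02, delays_us.size))  # add measurement noise

def decay(t, amplitude, T2):
    return amplitude * np.exp(-t / T2)

(amplitude, T2_fit), _ = curve_fit(decay, delays_us, signal, p0=(1.0, 1.0))
print(f"fitted coherence time T2 = {T2_fit:.2f} microseconds")
```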

Goal: Electronic devices with spin-decorated boron nitride layers

The JMU team's next goal is to realize an artificially stacked two-dimensional crystal made of different materials, including a spin-bearing component. The essential building blocks for the latter are atomically thin boron nitride layers containing optically active defects with an accessible spin state.

"It would be particularly appealing to control the spin defects and their surroundings in the 2D devices not only optically, but via the electric current. This is completely new territory," says Vladimir Dyakonov.

Credit: 
University of Würzburg

Deep learning networks prefer the human voice -- just like us

image: A deep neural network that is taught to speak its answer out loud demonstrates higher performance, learning more robust and efficient features. This study opens up new research questions on the role of label representations for object recognition.

Image: 
Creative Machines Lab/Columbia Engineering

New York, NY--April 6, 2021--The digital revolution is built on a foundation of invisible 1s and 0s called bits. As decades pass, and more and more of the world's information and knowledge morph into streams of 1s and 0s, the notion that computers prefer to "speak" in binary numbers is rarely questioned. According to new research from Columbia Engineering, this could be about to change.

A new study from Mechanical Engineering Professor Hod Lipson and his PhD student Boyuan Chen shows that artificial intelligence systems might actually reach higher levels of performance if they are programmed with sound files of human language rather than with numerical data labels. The researchers discovered that in a side-by-side comparison, a neural network whose "training labels" consisted of sound files reached higher levels of performance in identifying objects in images, compared to another network that had been programmed in a more traditional manner, using simple binary inputs.

VIDEO: https://youtu.be/Iq2YjHCAPRQ

PROJECT WEBSITE: https://www.creativemachineslab.com/label-representation.html
https://engineering.columbia.edu/faculty/hod-lipson

"To understand why this finding is significant," said Lipson, James and Sally Scapa Professor of Innovation and a member of Columbia's Data Science Institute, "It's useful to understand how neural networks are usually programmed, and why using the sound of the human voice is a radical experiment."

When used to convey information, the language of binary numbers is compact and precise. In contrast, spoken human language is more tonal and analog, and, when captured in a digital file, non-binary. Because numbers are such an efficient way to digitize data, programmers rarely deviate from a numbers-driven process when they develop a neural network.

Lipson, a highly regarded roboticist, and Chen, a former concert pianist, had a hunch that neural networks might not be reaching their full potential. They speculated that neural networks might learn faster and better if the systems were "trained" to recognize animals, for instance, by using the power of one of the world's most highly evolved sounds--the human voice uttering specific words.

One of the more common exercises AI researchers use to test out the merits of a new machine learning technique is to train a neural network to recognize specific objects and animals in a collection of different photographs. To check their hypothesis, Chen, Lipson and two students, Yu Li and Sunand Raghupathi, set up a controlled experiment. They created two new neural networks with the goal of training both of them to recognize 10 different types of objects in a collection of 50,000 photographs known as "training images."

One AI system was trained the traditional way, by uploading a giant data table containing thousands of rows, each row corresponding to a single training photo. The first column was an image file containing a photo of a particular object or animal; the next 10 columns corresponded to 10 possible object types: cats, dogs, airplanes, etc. A "1" in any column indicates the correct answer, and nine 0s indicate the incorrect answers.

The team set up the experimental neural network in a radically novel way. They fed it a data table whose first column contained a photograph of an animal or object and whose second column contained an audio file of a recorded human voice speaking the word for the depicted animal or object out loud. There were no 1s and 0s.
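
The sketch below contrasts the two label formats in schematic form; it is not the Columbia group's released code, and the network, shapes and loss choices are placeholders.

```python
# Schematic comparison of training targets: a one-hot class label versus an audio
# waveform of a voice speaking the class name.  Shapes and modules are illustrative.
import torch
import torch.nn as nn

num_classes, audio_samples = 10, 16000          # e.g. one second of 16 kHz audio
backbone = nn.Sequential(                       # stand-in image encoder
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten())

image = torch.randn(1, 3, 32, 32)               # placeholder training image

# Traditional setup: predict a distribution over 10 classes and train against a
# class index (equivalent to a one-hot row of nine 0s and a single 1).
classifier = nn.Sequential(backbone, nn.Linear(16, num_classes))
class_index = torch.tensor([3])                 # e.g. "cat"
loss_onehot = nn.CrossEntropyLoss()(classifier(image), class_index)

# Voice-label setup: predict an audio waveform and train against a recording of a
# human voice saying the class name, here with a simple regression loss.
vocal_decoder = nn.Sequential(backbone, nn.Linear(16, audio_samples), nn.Tanh())
spoken_label = torch.rand(1, audio_samples) * 2 - 1   # placeholder for a real recording
loss_voice = nn.MSELoss()(vocal_decoder(image), spoken_label)

print(loss_onehot.item(), loss_voice.item())
```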

Once both neural networks were ready, Chen, Li, and Raghupathi trained both AI systems for a total of 15 hours and then compared their respective performance. When presented with an image, the original network spat out the answer as a series of ten 1s and 0s--just as it was trained to do. The experimental neural network, however, produced a clearly discernible voice trying to "say" what the object in the image was. Initially the sound was just a garble. Sometimes it was a confusion of multiple categories, like "cog" for cat and dog. Eventually, the voice was mostly correct, albeit with an eerie alien tone (see example on website).

At first, the researchers were somewhat surprised to discover that their hunch had been correct--there was no apparent advantage to 1s and 0s. Both the control neural network and the experimental one performed equally well, correctly identifying the animal or object depicted in a photograph about 92% of the time. To double-check their results, the researchers ran the experiment again and got the same outcome.

What they discovered next, however, was even more surprising. To further explore the limits of using sound as a training tool, the researchers set up another side-by-side comparison, this time using far fewer photographs during the training process. While the first round of training involved feeding both neural networks data tables containing 50,000 training images, both systems in the second experiment were fed far fewer training photographs, just 2,500 apiece.

It is well known in AI research that most neural networks perform poorly when training data is sparse, and in this experiment, the traditional, numerically trained network was no exception. Its ability to identify individual animals that appeared in the photographs plummeted to about 35% accuracy. In contrast, although the experimental neural network was also trained with the same number of photographs, it performed twice as well, dropping only to 70% accuracy.

Intrigued, Lipson and his students decided to test their voice-driven training method on another classic AI image recognition challenge, that of image ambiguity. This time they set up yet another side-by-side comparison but raised the game a notch by using more difficult photographs that were harder for an AI system to "understand." For example, one training photo depicted a slightly corrupted image of a dog, or a cat with odd colors. When they compared results, even with more challenging photographs, the voice-trained neural network was still correct about 50% of the time, outperforming the numerically-trained network that floundered, achieving only 20% accuracy.

Ironically, the fact their results went directly against the status quo became challenging when the researchers first tried to share their findings with their colleagues in computer science. "Our findings run directly counter to how many experts have been trained to think about computers and numbers; it's a common assumption that binary inputs are a more efficient way to convey information to a machine than audio streams of similar information 'richness,'" explained Boyuan Chen, the lead researcher on the study. "In fact, when we submitted this research to a big AI conference, one anonymous reviewer rejected our paper simply because they felt our results were just 'too surprising and un-intuitive.'"

When considered in the broader context of information theory however, Lipson and Chen's hypothesis actually supports a much older, landmark hypothesis first proposed by the legendary Claude Shannon, the father of information theory. According to Shannon's theory, the most effective communication "signals" are characterized by an optimal number of bits, paired with an optimal amount of useful information, or "surprise."
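
For reference, Shannon's "surprise" of an outcome is its information content, and a signal's entropy is the average surprise. The standard definitions (stated here for context, not taken from the study) are:

```latex
% Information content ("surprise") of an outcome x, and entropy (average surprise) of a source X
I(x) = -\log_2 p(x) \ \text{bits}, \qquad H(X) = -\sum_{x} p(x)\,\log_2 p(x)
```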

"If you think about the fact that human language has been going through an optimization process for tens of thousands of years, then it makes perfect sense, that our spoken words have found a good balance between noise and signal;" Lipson observed. "Therefore, when viewed through the lens of Shannon Entropy, it makes sense that a neural network trained with human language would outperform a neural network trained by simple 1s and 0s."

The study, to be presented at the International Conference on Learning Representations on May 3, 2021, is part of a broader effort at Lipson's Columbia Creative Machines Lab to create robots that can understand the world around them by interacting with other machines and humans, rather than by being programmed directly with carefully preprocessed data.

"We should think about using novel and better ways to train AI systems instead of collecting larger datasets," said Chen. "If we rethink how we present training data to the machine, we could do a better job as teachers."

One of the more refreshing results of computer science research on artificial intelligence has been an unexpected side effect: by probing how machines learn, sometimes researchers stumble upon fresh insight into the grand challenges of other, well-established fields.

"One of the biggest mysteries of human evolution is how our ancestors acquired language, and how children learn to speak so effortlessly," Lipson said. "If human toddlers learn best with repetitive spoken instruction, then perhaps AI systems can, too."

Credit: 
Columbia University School of Engineering and Applied Science

Aquatic invasive species cause damage worth billions of dollars

image: The comb jellyfish Mnemiopsis leidyi originates from the American east coast. It was introduced into the Black Sea in the early 1980s. As a result, anchovy stocks there declined drastically. In 2006, Mnemiopsis leidyi was also detected in the Baltic Sea for the first time.

Image: 
Cornelia Jaspers, GEOMAR/DTU Aqua

The global movement of goods and people, in its modern form, has many unwanted side effects. One of these is that animal and plant species travel around the world with it. Often they fail to establish themselves in the ecosystems of the destination areas. Sometimes, however, due to a lack of effective management, they multiply to such an extent in the new environment that they become a threat to the entire ecosystem and economy. Thousands of alien species are currently documented worldwide. A quarter of them are in highly vulnerable, aquatic habitats.

So far, research has mainly focused on the ecological consequences of these invasions. In a first global data analysis, 20 scientists from 13 countries led by GEOMAR Helmholtz Centre for Ocean Research Kiel have now compiled the economic costs caused specifically by aquatic invaders. "We come to the conclusion that invasive aquatic species that have established themselves in their new habitats have cost at least 345 billion US dollars since the 1970s," says Dr Ross Cuthbert from GEOMAR. He is lead author of the study, which has now been published in the journal Science of the Total Environment.

Economic costs occur, for example, when invasive species decimate commercially exploited fish stocks, spread deadly diseases or damage infrastructures. "Good examples include invasive mussels that clog intake pipes of factories, power plants or water treatment plants. Or, alien parasites that cause catastrophic declines in commercial fisheries," explains Dr Cuthbert.

For the study, the team used cases recorded in the existing literature and standardized them in a comprehensive database. Invertebrates (62%) accounted for the largest proportion of costs that could be detected in this way, followed by vertebrates (28%) and plants (6%). The largest costs were reported in North America (48%) and Asia (13%) and were mainly due to damage to resources such as physical infrastructure, healthcare systems and fisheries. Worryingly, more than ten times less was spent on management actions, such as preventing future invasions, than on damages.

"However, our figures are vastly underestimated due to knowledge gaps. Costs were never reported for many countries and known damaging invasive species, especially in Africa and Asia. So, we can assume that the damages are actually much higher," Dr Cuthbert points out. A comparison with the costs caused by invaders on land confirms this assumption. While aquatic species make up a quarter of the documented invasive species, the economic costs they cause comprise only a twentieth of what is known for terrestrial species.

The team also identified a clear trend that costs have increased significantly in recent years. In 2020 alone, they amounted to at least 23 billion US dollars.

"So, the costs of aquatic invaders are significant, but probably under-reported. Costs have increased over time and are expected to continue to increase with future invasions," Dr Cuthbert summarises the study. The team of authors therefore calls for increased and improved cost reporting by managers, practitioners and researchers to reduce knowledge gaps. It also urges more money to be invested in invasion management and prevention. "This would be money well spent to prevent and limit current and future damage," Dr Cuthbert emphasises.

Credit: 
Helmholtz Centre for Ocean Research Kiel (GEOMAR)

US trade sanctions justified response to human rights abuses in China, law expert argues

LAWRENCE -- An international trade law expert at the University of Kansas argues in a pair of new articles that human rights and trade are now inextricably linked, as evidenced by U.S. and international reactions to actions in China, and asserts that this approach is an appropriate use of trade policy.

After the United States, then Canada and the Netherlands, declared the Chinese Communist Party's actions against Uyghur Muslims to be genocide, the nations followed with various trade sanctions. Likewise, countries have adopted trade measures in response to China's violation of its one-country, two-systems agreement with Hong Kong. Raj Bhala, Brenneisen Distinguished Professor of Law at the KU School of Law, details both situations in two new companion case studies, argues that linking trade to human rights is correct and examines future possibilities for such measures.

"Most people think human rights are to be separated from trade. In fact, that's not true," Bhala said. "There are no express, comprehensive provisions for human rights in the World Trade Organization or General Agreement on Tariffs and Trade, but we're seeing the link come up in U.S. trade policy and some regional free trade agreements. We're entering an era of invigorated enhancements of human rights through trade policy."

Bhala wrote an article on China's treatment of Uyghur Muslims and American trade response, published in India's Journal of the National Human Rights Commission, and another on Hong Kong's democracy, China's violation thereof and American trade response, forthcoming in the Kansas Journal of Law & Public Policy.

Former President Donald Trump's disputes with China and the resulting trade war were widely debated and criticized. But Bhala points out that the sometimes-overlooked trade reactions to the events in Xinjiang and Hong Kong are distinct, and defensible, actions. While thoroughly detailing the economic and legal actions of each case, he notes that the United States and China are in a new era of great power competition. He also outlines how, historically, trade and human rights were considered separate matters, and he chronicles how and why the earliest connections between the two issues occurred.

"The articles make the point that the two issues, international trade and human rights, are now inextricably linked," Bhala said. "In one situation, we have what three governments have already called genocide, and what the world generally agrees is a violation of China's one-country, two-systems policy in Hong Kong in the other."

The former case involves the genocide of a religious minority, while the latter involves the rollback of legally codified human rights such as direct elections and peaceful assembly in Hong Kong. The United States has taken various trade actions, such as banning imports of Chinese products like cotton and tomatoes from Xinjiang in the first case, and freezing assets of Chinese Communist Party officials on the mainland in the second.

Bhala argues that such sanctions and related actions are appropriate. The World Trade Organization does not provide for trade remedies to human rights violations or crimes against humanity, hence options through that multilateral venue are limited.

"If we don't use trade measures like sanctions in these two egregious instances, then when would we?" Bhala said.

In addition to outlining in the articles the legal responses and arguing they are justified, Bhala examines how the United States and other nations will most likely continue to use such measures in the future. Numerous contexts, including China's actions toward Tibet, Taiwan and across the South China Sea and its self-declared Nine Dash Line, will most likely cause disagreement and conflict between the two world powers. He also emphasizes the conflicts are not with the Chinese people, but with the actions and policies of their government.

"We know the Chinese people are not monolithic in their views of their own government," Bhala said. "There are many people in Hong Kong and on the mainland who are concerned with what has happened in Xinjiang with the Uyghur population, and also in respect to what has happened in Tibet and Taiwan."

While it may be too early to know what the long-term results of trade remedies to human rights violations may be, or whether they will escalate tensions, the ongoing situations are confirmation that international trade and human rights are two sides of the same coin.

"If we've learned nothing else, it's that trade policy is national security policy is human rights policy," Bhala said. "Our national security is based on our values. We express our values partly through who we decide to trade with, and the terms on which we trade with them. Trade is not only about trade."

Credit: 
University of Kansas

People with HIV at high risk for intimate partner violence

Ann Arbor, April 6, 2021 - New data from the Centers for Disease Control and Prevention (CDC) show that one in four adults with HIV in the United States has experienced intimate partner violence (IPV), which disproportionately affects women and LGBT populations. Further, people with HIV who experienced IPV in the past 12 months were more likely to engage in behaviors associated with elevated HIV transmission risk, were less likely to be engaged in routine HIV care and more likely to seek emergency care services and have poor HIV clinical outcomes. The findings are reported in the American Journal of Preventive Medicine, published by Elsevier.

Lead Investigator Ansley B. Lemons-Lyn, MPH, and colleagues from the CDC's National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention and the National Center for Injury Prevention and Control in Atlanta, GA, USA, used data from the Medical Monitoring Project, an annual survey used to produce national estimates of sociodemographic, behavioral, and clinical characteristics of adults diagnosed with HIV. Analysts estimated the prevalence of respondents who had ever experienced IPV and those who experienced IPV within the last 12 months and compared that with sociodemographic information, behavioral characteristics, clinical outcomes, and the use of emergency or inpatient medical services in the past year.

Among individuals with HIV, 26.3 percent had at least one experience of IPV. Significant differences were found by race/ethnicity and age; 35.6 percent of women, 28.9 percent of transgender people, and 23.2 percent of men had experienced IPV. There were also significant differences based on gender and sexual identity. Although women overall experienced the highest prevalence of IPV, bisexual women experienced the highest proportion (51.5 percent) compared with all gender and sexual identity groups.

Overall, 4.4 percent of people with HIV had experienced IPV in the last 12 months. Statistically significant differences were found by sociodemographic characteristics, such as age and gender/sexual identity, but not by race/ethnicity or gender identity. The study found that, compared with individuals with HIV who did not experience IPV in the last 12 months, those who did were more likely to engage in riskier behaviors such as binge drinking, injection drug use, and transactional sex. They were also more likely to report not receiving additional needed services.

These findings suggest that screening people with HIV for IPV and linking them to services, not only during HIV testing but also during routine HIV care, is important. A higher proportion of individuals reporting IPV in the last 12 months were not receiving HIV medical care, were not taking antiretroviral therapy, and were more likely to miss HIV-related medical appointments. They were also more likely to have more than one emergency room visit or hospital admission in the past 12 months.

The study suggests that when IPV is identified, the safety and health of people with HIV can be improved with supportive services. IPV is preventable, especially when efforts begin early. The investigators note that most IPV and protection programs are tailored for heterosexual women. Given the extent to which the study found risk to other gender/sexual identity groups and racial/ethnic minorities, investigators suggest that programming should be tailored for marginalized groups.

Credit: 
Elsevier

The sea urchin microbiome

Sea urchins receive a lot of attention in California. Red urchins support a thriving fishery, while their purple cousins are often blamed for mowing down kelp forests to create urchin barrens. Yet for all the notice we pay them, we know surprisingly little about the microbiomes that support these spiny species.

Researchers at UC Santa Barbara led by geneticist Paige Miller sought to uncover the diversity within the guts of these important kelp forest inhabitants. Their results reveal significant differences between the microbiota of the two species, as well as between individuals living in different habitats. The study, which appears in Limnology and Oceanography Letters, represents the first step in understanding the function of urchins' microbial communities, including the possibility that urchins may be able to 'farm' microbes in their guts to create their own food sources.

California hosts two common species of sea urchin: red and purple. They generally consume algae, but are actually fairly opportunistic omnivores that will eat decaying plant and animal matter, microbial mats and even other urchins if need be. The microbiome in their guts might help urchins handle such a varied diet, but it hasn't been examined until now.

"It's very important to understand what animals eat and why," Miller said, "and we think the microbiome could play an important role in why species thrive despite all the variation in food availability that's out there in the ocean." However, scientists are only beginning to investigate the microbiota of ocean animals, let alone the function these microorganisms serve in their hosts.

To begin their investigation, Miller and her team collected red and purple urchins from three habitats in the Santa Barbara Channel. Some came from lush kelp forests; others from urchin barrens; and a few came from one of the channel's many hydrocarbon seeps, where they scratch a living feeding on mats of microbes that thrive off of petroleum compounds.

Key to this study's success was the researchers' stringent protocol. They used meticulous techniques to remove each specimen's stomach and guts in order to avoid contamination from microbes in the lab, elsewhere on the animal, and even in the sea water.

The researchers were then able to sequence a particular region of the genetic code that scientists commonly use to identify microbes. This enabled them to compare what they found with several comprehensive taxonomic databases that scientists use for genetic identification of microbial life.

The team found significant differences between the bacterial communities living within the two urchin species. However, they saw just as much variation between the microbiomes of individuals from the same species living in different habitats.
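
As a generic illustration of how such community comparisons are typically quantified, the sketch below computes a pairwise Bray-Curtis dissimilarity between hypothetical gut samples; the taxa, abundances and sample labels are invented, and the study's actual amplicon-sequencing pipeline is more involved.

```python
# Hypothetical microbiome comparison: Bray-Curtis dissimilarity between gut samples.
import numpy as np
from scipy.spatial.distance import pdist, squareform

samples = ["purple_kelp", "purple_barren", "red_kelp", "red_barren"]
# rows: samples, columns: relative abundance of four hypothetical bacterial taxa
abundance = np.array([
    [0.50, 0.20, 0.20, 0.10],
    [0.30, 0.40, 0.10, 0.20],
    [0.10, 0.10, 0.60, 0.20],
    [0.05, 0.25, 0.40, 0.30],
])

dissimilarity = squareform(pdist(abundance, metric="braycurtis"))
for i, a in enumerate(samples):
    for j, b in enumerate(samples):
        if j > i:
            print(f"{a} vs {b}: Bray-Curtis = {dissimilarity[i, j]:.2f}")
```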

image: Purple sea urchin closeup. Comparing the microbiome of purple (pictured) and red sea urchins points toward differences between the similar species.

Image: 
Katie Davis

"Our study is the first to examine the microbiome in these extremely common, and really ecologically important, species," said coauthor Robert (Bob) Miller, a researcher at the university's Marine Science Institute. "We're just scratching the surface here, but our study shows how complex these communities are."

One group of bacteria that was prevalent in both species is the same group that helps break down wood in the guts of termites, and could help the urchins digest algae. Previous research indicates that these microbes could potentially be autotrophic. "Some members of this group can create their own food, like photosynthetic plants, for example," explained Paige Miller, "only they don't use sunlight for energy, they use hydrogen."

Although the authors caution against jumping to conclusions, ascertaining whether urchins can produce their own food would be a huge revelation. "We know that the urchins can survive a long time without food," Bob Miller said. "And they can survive almost indefinitely in these barren areas that have very low food supplies. So, this could really help them out, if they have their own little farmed food supply in their gut."

The findings also highlight the pitfalls of conflating these similar species. People often treat species like the red and purple sea urchins as equivalent when making decisions about resource use and management, Paige Miller explained. Even ecologists can fall into this line of reasoning. "But it's very important to look at how these things actually function," she noted. "And as we saw, the red and purple sea urchins are not necessarily functioning the same way, or eating the same things, if their microbiome is an indicator."

Understanding the makeup and function of microbiota could help researchers recognize the subtle differences between superficially similar species. "More recently, people have begun considering the microbiome as another trait that these species have," Bob Miller said. "We wanted to find out whether this is a hidden source of variation that's separating these two species."

This study provides a launch point for additional research. In the future, the Millers and their coauthors plan to further investigate the function of the different microbes in urchin guts. For now, there's still more work to do simply identifying what species reside in the prickly critters.

"This is a new subfield of ecology," said Paige Miller, "trying to understand what these microbiomes do and the role they play in the living organism out in the wild."

Credit: 
University of California - Santa Barbara

Leptin puts the brakes on eating via novel neurocircuit

image: Summary diagram of the modulatory effect of leptin on the mesolimbic DA system.

Image: 
Elsevier, 2021

Philadelphia, April 6, 2021 - Since the discovery of leptin in the 1990s, researchers have wondered: How does leptin, a hormone made by body fat, suppress appetite? Despite tremendous gains in the intervening three decades, many questions remain. Now, a new study in mice describes a novel neurocircuit between midbrain structures that controls feeding behavior and is under modulatory control by leptin. The study appears in Biological Psychiatry, published by Elsevier.

John Krystal, MD, Editor of Biological Psychiatry, said of the findings, "Omrani and colleagues shed light on how, in non-obese animals, leptin puts the brakes on overeating."

Leptin acts as a critical link between the body and the brain, providing information about metabolic state and exerting control over energy balance. The importance of leptin is illustrated by the finding that animals deficient in leptin rapidly become obese without its regulatory brake on feeding behavior.

Roger Adan, PhD, of the Department of Translational Neuroscience, University Medical Center Utrecht and University Utrecht, the Netherlands, who led the study, said, "This process is shaped by communication between bodily fat storages (via a hormone called leptin) and the brain's dopamine reward system. This leptin-dopamine axis is critically important for body weight control, but its modes of action were not well understood."

Leptin suppresses eating by signaling to brain regions that control eating behaviors, but it also decreases the reward value inherent in foods, engaging the brain's dopamine (DA) reward system. That food-reward pathway was known to involve dopaminergic neurons of the ventral tegmental area (VTA) signaling to the nucleus accumbens (NAc), but most of those DA neurons do not contain receptors for leptin.

The work used a combination of powerful technologies, including optogenetics, chemogenetics and electrophysiology, to map the new microcircuitry.

"Although leptin receptors are present on [some] dopamine neurons that signal food reward," said Professor Adan, also of the Department of Translational Neuroscience, University Medical Center Utrecht and University Utrecht, "we discovered that leptin receptors are also present on inhibitory neurons that more strongly regulate the activity of dopamine neurons. Some of these inhibitory neurons suppressed food seeking when [animals were] hungry, whereas others [did so] only when [animals were] in a sated state."

Dr. Krystal said of the study, "It turns out that leptin plays key modulatory roles in an elegant circuit that unites midbrain and limbic reward circuitry. By inhibiting hypothalamic neurons and ultimately suppressing the activity of dopamine neurons in the midbrain that signal reward and promote feeding, leptin reduces food intake in animals under conditions when caloric intake has exceeded energy use."

Ultimately, Professor Adan said, "Targeting these neurons may provide a new avenue for the treatment of anorexia nervosa and to support dieting in people with obesity."

Credit: 
Elsevier

A novel form of cellular logistics

Biophysicists from Ludwig-Maximilians-Universitaet (LMU) in Munich have shown that a phenomenon known as diffusiophoresis, which can lead to directed particle transport, can occur in biological systems.

In order to perform their biological functions, cells must ensure that their logistical schedules are implemented smoothly, such that the necessary molecular cargoes are delivered to their intended destinations on time. Most of the known transport mechanisms in cells are based on specific interactions between the cargo to be transported and the energy-consuming motor proteins that convey the load to its destination. A group of researchers led by LMU physicist Erwin Frey (Chair of Statistical and Biological Physics) and Petra Schwille of the Max Planck Institute for Biochemistry has now shown for the first time that a form of directed transport of particles can take place in cells, even in the absence of molecular motors. Furthermore, this mechanism can sort the transported particles according to their size, as the team reports in the latest issue of Nature Physics.

The study focuses on the MinDE system from the bacterium E. coli, which is an established and important model for biological pattern formation. The two proteins MinD and MinE oscillate between the poles of the rod-shaped cell, and their mutual interaction on the cell membrane ultimately restricts the plane of cell division to the center of the cell. Here, the researchers reconstituted the pattern-forming MinDE system in the test tube, using the purified Min proteins and artificial membranes. As expected from previous experiments, when the energy-rich molecule ATP was added to this system, the Min proteins recapitulated the oscillatory behavior seen in bacterial cells. More importantly, the experimenters went on to demonstrate that many different kinds of molecules could be caught up in the oscillatory waves as they traversed the membrane - even molecules that have nothing to do with pattern formation and are not found in cells at all.

A sorting machine for DNA origami

In order to analyze the transport mechanism in greater detail, the team turned to cargoes made of DNA origami that could be anchored to the membrane. This strategy allows one to create molecular structures of varying sizes and shapes, based on programmable base-pairing interactions between DNA strands. "These experiments showed that this mode of transport depends on the size of the cargo, and that MinD can even sort structures on the basis of their size," says Beatrice Ramm, a postdoc in Petra Schwille's department and joint first author of the new study. With the aid of theoretical analyses, Frey's group went on to identify the underlying transport mechanism as diffusiophoresis - the directed motion of particles along a concentration gradient. In the Min system, the friction between the cargo and the diffusing Min proteins is responsible for transporting the cargo. Thus, the crucial factor in this context is not a specific set of biochemical interactions - as in the case of transport via motor proteins in biological cells - but the effective sizes of the particles involved. "Particles that are more strongly affected by friction, owing to their large size, are also transported further - that's what accounts for sorting on the basis of size," says Andriy Goychuk, also joint first author of the paper.
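The size dependence lends itself to a toy calculation. The sketch below (Python, with arbitrary numbers rather than the study's measured parameters) drifts hypothetical membrane-anchored cargoes along a protein concentration gradient with a coupling that grows with cargo size, so that larger cargoes end up displaced farther - the qualitative signature of diffusiophoretic sorting.

```python
# Toy illustration, not the study's quantitative model: diffusiophoresis-style
# drift where a cargo's velocity is proportional to the local concentration
# gradient and to a size-dependent coupling (friction with the diffusing
# proteins). The coupling constant and concentration profile are arbitrary
# choices; the point is only that bigger cargoes pick up more friction and
# end up displaced farther.
import numpy as np

L, n_steps, dt = 10.0, 2000, 0.01      # domain length (um), number of steps, time step
x = np.linspace(0.0, L, 500)
protein = 1.0 - x / L                  # assumed static, linearly decreasing concentration
grad = np.gradient(protein, x)         # its spatial gradient (constant here)

def drift_velocity(pos, size, coupling=1.0):
    """Drift = size-dependent coupling times the local concentration gradient."""
    return coupling * size * np.interp(pos, x, grad)

for label, size in [("small cargo", 0.2), ("medium cargo", 0.5), ("large cargo", 1.0)]:
    pos = 5.0                          # every cargo starts at the domain midpoint
    for _ in range(n_steps):
        pos += drift_velocity(pos, size) * dt
    print(f"{label}: net displacement {pos - 5.0:+.2f} um")
```

The real experiments involve travelling protein waves rather than a static profile, but the same size-dependent coupling is what allows the system to sort DNA origami structures by size.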

With these results, the team has demonstrated the involvement of a purely physical (as opposed to a biological) form of transport based on diffusiophoresis in a biological pattern-forming system. "This process is so simple and fundamental that it seems likely that it also plays a role in other cellular processes, and might even have been employed in the earliest cells at the origin of life," says Frey. "And in the future, it might also be possible to make use of it to position molecules at specific sites within artificial minimal cells," he adds.

Credit: 
Ludwig-Maximilians-Universität München

New perspective to understand and treat a rare calcification disease

image: As part of an international collaboration, researchers from ELTE Eötvös Loránd University developed a new animal model to study a rare genetic disease.

Image: 
Photo: Dániel Csete, Semmelweis University Institute of Physiology

As part of an international collaboration, researchers from ELTE Eötvös Loránd University developed a new animal model to study a rare genetic disease that can lead to blindness at the age of 40-50. The new model could open up new perspectives in our understanding of this metabolic disease and will also help to identify new potential drug candidates, according to the recent study published in Frontiers in Cell and Developmental Biology.

Pseudoxanthoma elasticum (PXE) is a rare genetic disease whose symptoms usually manifest in adolescence or early adulthood. The symptoms are caused by hydroxyapatite crystal deposits that appear in the subcutaneous connective tissue and the retina, and later can also appear in the vascular system. Excessive calcification in the retina can lead to blindness, while the crystals in the walls of the blood vessels result in the loss of their elasticity and in the development of severe vascular diseases.

The researchers of the DanioLab Research Group at the Department of Genetics of ELTE Eötvös Loránd University, supported by the Diagnostics and Therapy Excellence Program, collaborated with researchers from the ELKH-RCNS Institute of Enzymology, the Semmelweis University Institute of Physiology, and the US National Human Genome Research Institute (NHGRI) to create a new model for PXE. The researchers used zebrafish (Danio rerio), a popular model organism in genetic research, to gain a better understanding of this rare genetic disease.

Why is zebrafish a good animal model?

PXE usually develops in patients carrying mutations in the ABCC6 gene, encoding a cell membrane transporter protein. The zebrafish genome harbors three variants (so-called paralogues) of this gene: abcc6a is located on chromosome 6, while abcc6b.1 and abcc6b.2 are located on chromosome 3. Closer examination of the three paralogues, using new sequencing methods, revealed that only abcc6a and abcc6b.1 have a protein-coding function. In contrast, abcc6b.2 has lost its active role and is present as a pseudogene in the genome of the zebrafish.

The research team at ELTE Eötvös Loránd University successfully created and characterized mutant lines in the two protein-coding ABCC6 paralogues using the CRISPR/Cas9 genome-editing system, to understand if they have synergistic effects in the fish. "To our surprise, only abcc6a homozygous mutant animals showed defects in calcification. These could be observed relatively early, already at larval stages, indicating that loss of function of this gene in zebrafish affects calcification similarly to that seen in human patients. By adulthood, the skeletal system of the mutant animals was severely distorted. The spine is spectacularly twisted due to the excessive calcification between the vertebrae," said Máté Varga, the head of the DanioLab Research Group at ELTE Eötvös Loránd University.

The new model will provide new opportunities for a better understanding of this metabolic disease and could become an important asset for future clinical research. The mutant lines created in the project will be used to test drug candidates with the potential to ameliorate PXE symptoms.

Credit: 
Eötvös Loránd University

Dark Energy Survey physicists open new window into dark energy

image: A map of the sky showing the density of galaxy clusters, galaxies and matter in the universe over the part of the sky observed by the Dark Energy Survey. The left panel shows the galaxy density in that part of the sky, while the middle panel shows matter density and the right shows galaxy cluster density. Red areas are more dense, and blue areas are less dense, than average.

Image: 
Chun-Hao To/Stanford University, SLAC National Accelerator Laboratory

The universe is expanding at an ever-increasing rate, and while no one is sure why, researchers with the Dark Energy Survey (DES) at least had a strategy for figuring it out: They would combine measurements of the distribution of matter, galaxies and galaxy clusters to better understand what's going on.

Reaching that goal turned out to be pretty tricky, but now a team led by researchers at the Department of Energy's SLAC National Accelerator Laboratory, Stanford University and the University of Arizona has come up with a solution. Their analysis, published April 6 in Physical Review Letters, yields more precise estimates of the average density of matter as well as its propensity to clump together - two key parameters that help physicists probe the nature of dark matter and dark energy, the mysterious substances that make up the vast majority of the universe.

"It is one of the best constraints from one of the best data sets to date," says Chun-Hao To, a lead author on the new paper and a graduate student at SLAC and Stanford working with Kavli Institute for Particle Astrophysics and Cosmology Director Risa Wechsler.

An early goal

When DES set out in 2013 to map an eighth of the sky, the goal was to gather four kinds of data: the distances to certain types of supernovae, or exploding stars; the distribution of matter in the universe; the distribution of galaxies; and the distribution of galaxy clusters. Each tells researchers something about how the universe has evolved over time.

Ideally, scientists would put all four data sources together to improve their estimates, but there's a snag: The distributions of matter, galaxies, and galaxy clusters are all closely related. If researchers don't take these relationships into account, they will end up "double counting," placing too much weight on some data and not enough on others, To says.

To avoid mishandling all this information, To, University of Arizona astrophysicist Elisabeth Krause and colleagues have developed a new model that could properly account for the connections in the distributions of all three quantities: matter, galaxies, and galaxy clusters. In doing so, they were able to produce the first-ever analysis to properly combine all these disparate data sets in order to learn about dark matter and dark energy.
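The double-counting problem can be seen in miniature with a simple statistical sketch (Python, with invented numbers and a single parameter rather than the full DES likelihood): two correlated measurements of the same quantity look more precise than they really are if they are combined as though they were independent, whereas combining them with their full covariance matrix yields the honest, larger uncertainty.

```python
# Minimal sketch, not the DES analysis: combining two correlated measurements
# of one parameter. Treating them as independent double-counts the shared
# information; a generalized-least-squares combination with the full covariance
# matrix reports the correct, larger error bar. All numbers are made up.
import numpy as np

y = np.array([0.30, 0.32])                     # two hypothetical measurements
var = np.array([0.02**2, 0.03**2])             # their individual variances
rho = 0.6                                      # assumed correlation between them
off = rho * np.sqrt(var[0] * var[1])
cov = np.array([[var[0], off],
                [off,    var[1]]])

# Naive combination: inverse-variance weights, as if the probes were independent.
w = 1.0 / var
est_naive = (w @ y) / w.sum()
err_naive = np.sqrt(1.0 / w.sum())

# Proper combination: generalized least squares with the full covariance matrix.
ones = np.ones(2)
cinv = np.linalg.inv(cov)
err_full = np.sqrt(1.0 / (ones @ cinv @ ones))
est_full = err_full**2 * (ones @ cinv @ y)

print(f"assuming independence: {est_naive:.3f} +/- {err_naive:.3f}")
print(f"with full covariance:  {est_full:.3f} +/- {err_full:.3f}")
```

The same logic, scaled up to the joint distributions of matter, galaxies and galaxy clusters, is what the new model provides.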

Improving estimates

Adding that model into the DES analysis has two effects, To says. First, measurements of the distributions of matter, galaxies and galaxy clusters tend to introduce different kinds of errors. Combining all three measurements makes it easier to identify any such errors, making the analysis more robust. Second, the three measurements differ in how sensitive they are to the average density of matter and its clumpiness. As a result, combining all three can improve the precision with which the DES can measure dark matter and dark energy.

In the new paper, To, Krause and colleagues applied their new methods to the first year of DES data and sharpened the precision of previous estimates for matter's density and clumpiness.

Now that the team can incorporate matter, galaxies and galaxy clusters simultaneously in their analysis, adding in supernova data will be relatively straightforward, since that kind of data is not as closely related to the other three, To says.

"The immediate next step," he says, "is to apply the machinery to DES Year 3 data, which has three times larger coverage of the sky." This is not as simple as it sounds: While the basic idea is the same, the new data will require additional efforts to improve the model to keep up with the higher quality of the newer data, To says.

"This analysis is really exciting," Wechsler said. "I expect it to set a new standard in the way we are able to analyze data and learn about dark energy from large surveys, not only for DES but also looking forward to the incredible data that we will get from the Vera Rubin Observatory's Legacy Survey of Space and Time in a few years."

Credit: 
DOE/SLAC National Accelerator Laboratory