Tech

Iron deficiency in corals?

image: A new study reveals that the microalgae that live within coral cells change how they take in other trace metals when iron is limited, which could have cascading effects on vital biological functions.

Image: 
Todd LaJeunesse, Penn State

When iron is limited, the tiny algae that live within coral cells -- which can provide the majority of a coral's nutritional needs -- change how they take in other trace metals, which could have cascading effects on vital biological functions. A new study in the journal Coral Reefs explores how different species of these microalgae rely on iron, whose already limited supply in oceans could decline with warming ocean waters, perhaps exacerbating the effects of climate change on corals.

"Iron deficiency is not just a problem for humans, but for other organisms as well," said Hannah Reich, a graduate student in biology at Penn State at the time of the research and author of the study. "Most organisms require a certain amount of iron and other trace metals to fulfill their basic physiological needs. Because warming ocean temperatures could alter iron availability, we wanted to know how microalgae in the family Symbiodiniaceae, which commonly live within coral cells, respond to limed amounts of iron to begin to understand how they might respond to a changing climate."

These microalgae maintain a symbiotic relationship with corals, producing energy from photosynthesis and providing up to 90 percent of the coral's daily nutritional needs. The changing climate can affect both the coral and the microalgae, and corals under stress due to warming waters often expel their symbionts in a process known as bleaching.

The researchers found that all five species of the microalgae they investigated grew poorly in culture when iron was absent, and four of the species grew poorly when iron was present only in very low concentrations. This is unsurprising, given the essential role of trace metals in basic physiological functions like cellular respiration and photosynthesis. The researchers suggest that the fifth species may have less stringent iron requirements, which may help explain why it persists in some corals that are bleached or diseased.

The researchers also found that, when iron was limited, the microalgae acquired other trace metals in different quantities, in a way that was unique to each species. For example, one species had much greater uptake of manganese, while another had greater copper uptake.

"We believe these differences reflect the broad physiologies and ecologies of the species we investigated," said Todd LaJeunesse, professor of biology at Penn State and an author of the paper. "We found that species with similar ecological niches -- either found in similar habitats or with shared ecological abilities -- had similar metal profiles. If the microalgae are using trace metals in different amounts or in different ways, limitation of iron could have cascading effects on vital functions, like photosynthesis and whether they are able to take in other nutrients for survival."

Because temperature can affect iron availability, the researchers suggest that a decline in iron due to warming waters could exacerbate the effects of thermal stress on corals. Next, the researchers plan to investigate how iron limitation and warm temperatures combine to impact the health of these microalgae.

"Our findings provide a foundation for understanding how iron availability affects cellular processes in Symbiodiniaceae and reveal that iron limitation, through its effects on the microalgae's growth and other trace metal usage, could exacerbate the effects of climate change on corals," said Reich.

Credit: 
Penn State

Reducing the carbon footprint of artificial intelligence

Artificial intelligence has become a focus of certain ethical concerns, but it also has some major sustainability issues.

Last June, researchers at the University of Massachusetts at Amherst released a startling report estimating that the power required for training and searching a certain neural network architecture results in the emission of roughly 626,000 pounds of carbon dioxide. That's equivalent to nearly five times the lifetime emissions of the average U.S. car, including its manufacturing.

The issue becomes even more severe in the deployment phase, when deep neural networks must run on diverse hardware platforms, each with different properties and computational resources.

MIT researchers have developed a new automated AI system for training and running certain neural networks. Results indicate that, by improving the system's computational efficiency in some key ways, it can cut the carbon emissions involved -- in some cases, down to the low triple digits of pounds.

The researchers' system, which they call a once-for-all network, trains one large neural network comprising many pretrained subnetworks of different sizes that can be tailored to diverse hardware platforms without retraining. This dramatically reduces the energy usually required to train each specialized neural network for new platforms -- which can include billions of internet of things (IoT) devices. Using the system to train a computer-vision model, they estimated that the process required roughly 1/1,300 the carbon emissions compared to today's state-of-the-art neural architecture search approaches, while reducing the inference time by 1.5-2.6 times.

"The aim is smaller, greener neural networks," says Song Han, an assistant professor in the Department of Electrical Engineering and Computer Science. "Searching efficient neural network architectures has until now had a huge carbon footprint. But we reduced that footprint by orders of magnitude with these new methods."

The work was carried out on Satori, an efficient computing cluster donated to MIT by IBM that is capable of performing 2 quadrillion calculations per second. The paper is being presented next week at the International Conference on Learning Representations. Joining Han on the paper are four undergraduate and graduate students from EECS, MIT-IBM Watson AI Lab, and Shanghai Jiao Tong University.

Creating a "once-for-all" network

The researchers built the system on a recent AI advance called AutoML (for automatic machine learning), which eliminates manual network design. AutoML systems automatically search massive design spaces for network architectures tailored, for instance, to specific hardware platforms. But there's still a training efficiency issue: Each model has to be selected and then trained from scratch for its platform architecture.

"How do we train all those networks efficiently for such a broad spectrum of devices -- from a $10 IoT device to a $600 smartphone? Given the diversity of IoT devices, the computation cost of neural architecture search will explode," Han says.

The researchers invented an AutoML system that trains only a single, large "once-for-all" (OFA) network that serves as a "mother" network, nesting an extremely high number of subnetworks that are sparsely activated from the mother network. OFA shares all its learned weights with all subnetworks -- meaning they come essentially pretrained. Thus, each subnetwork can operate independently at inference time without retraining.

The team trained an OFA convolutional neural network (CNN) -- commonly used for image-processing tasks -- with versatile architectural configurations, including different numbers of layers and "neurons," diverse filter sizes, and diverse input image resolutions. Given a specific platform, the system uses the OFA as the search space to find the best subnetwork based on the accuracy and latency tradeoffs that correlate to the platform's power and speed limits. For an IoT device, for instance, the system will find a smaller subnetwork. For smartphones, it will select larger subnetworks, but with different structures depending on individual battery lifetimes and computation resources. OFA decouples model training and architecture search, and spreads the one-time training cost across many inference hardware platforms and resource constraints.
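To make the search step concrete, here is a minimal, hypothetical sketch of latency-constrained subnetwork selection over an OFA-style space. The configuration dimensions, the predicted_latency_ms and predicted_accuracy functions, and the two budgets are invented stand-ins for the learned predictors and measured limits a real system would use.

```python
# A minimal sketch (not the authors' code) of latency-constrained subnetwork
# selection over an OFA-style search space. The latency and accuracy
# "predictors" below are hypothetical toy functions standing in for the
# learned predictors a real system would use.
import random

random.seed(0)

DEPTHS = [2, 3, 4]                  # layers per block
WIDTHS = [3, 4, 6]                  # channel expansion ratios
KERNELS = [3, 5, 7]                 # convolution kernel sizes
RESOLUTIONS = [160, 192, 224]       # input image sizes


def sample_subnet():
    """Randomly pick one architectural configuration from the OFA space."""
    return {
        "depth": random.choice(DEPTHS),
        "width": random.choice(WIDTHS),
        "kernel": random.choice(KERNELS),
        "resolution": random.choice(RESOLUTIONS),
    }


def predicted_latency_ms(cfg):
    """Toy latency model: bigger subnetworks and inputs cost more time."""
    return 0.4 * cfg["depth"] * cfg["width"] * cfg["kernel"] * (cfg["resolution"] / 224) ** 2


def predicted_accuracy(cfg):
    """Toy accuracy model: more capacity helps, with diminishing returns."""
    capacity = cfg["depth"] * cfg["width"] * cfg["kernel"] * cfg["resolution"]
    return 0.60 + 0.20 * (1 - 1 / (1 + capacity / 5000))


def search(latency_budget_ms, n_samples=1000):
    """Random search: best predicted accuracy under the platform's latency budget."""
    best = None
    for _ in range(n_samples):
        cfg = sample_subnet()
        if predicted_latency_ms(cfg) > latency_budget_ms:
            continue
        acc = predicted_accuracy(cfg)
        if best is None or acc > best[0]:
            best = (acc, cfg)
    return best


# A tight budget (e.g. an IoT device) yields a small subnetwork;
# a looser budget (e.g. a smartphone) yields a larger one.
print("IoT-class budget:", search(latency_budget_ms=10))
print("Phone-class budget:", search(latency_budget_ms=40))
```

Under these toy numbers, the tight budget settles on a small subnetwork and the loose budget on a larger one, mirroring the IoT-versus-smartphone behavior described above.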

This relies on a "progressive shrinking" algorithm that efficiently trains the OFA network to support all of the subnetworks simultaneously. It starts with training the full network with the maximum size, then progressively shrinks the sizes of the network to include smaller subnetworks. Smaller subnetworks are trained with the help of large subnetworks to grow together. In the end, all of the subnetworks with different sizes are supported, allowing fast specialization based on the platform's power and speed limits. It supports many hardware devices with zero training cost when adding a new device.

In total, one OFA, the researchers found, can comprise more than 10 quintillion -- that's a 1 followed by 19 zeroes -- architectural settings, probably covering all platforms ever needed. But training the OFA and searching it ends up being far more efficient than spending hours training each neural network per platform. Moreover, OFA does not compromise accuracy or inference efficiency. Instead, it provides state-of-the-art ImageNet accuracy on mobile devices. And, compared with industry-leading CNN models, the researchers say OFA provides 1.5-2.6 times speedup, with superior accuracy.

"That's a breakthrough technology," Han says. "If we want to run powerful AI on consumer devices, we have to figure out how to shrink AI down to size."

"The model is really compact. I am very excited to see OFA can keep pushing the boundary of efficient deep learning on edge devices," says Chuang Gan, a researcher at the MIT-IBM Watson AI Lab and co-author of the paper.

"If rapid progress in AI is to continue, we need to reduce its environmental impact," says John Cohn, an IBM fellow and member of the MIT-IBM Watson AI Lab. "The upside of developing methods to make AI models smaller and more efficient is that the models may also perform better."

Credit: 
Massachusetts Institute of Technology

Researchers use 'hot Jupiter' data to mine exoplanet chemistry

ITHACA, N.Y. - After spotting a curious pattern in scientific papers - they described exoplanets as being cooler than expected - Cornell University astronomers have improved a mathematical model to accurately gauge the temperatures of planets from solar systems hundreds of light-years away.

This new model allows scientists to gather data on an exoplanet's molecular chemistry and gain insight on the cosmos' planetary beginnings, according to research published April 23 in Astrophysical Journal Letters.

Nikole Lewis, assistant professor of astronomy and the deputy director of the Carl Sagan Institute (CSI), had noticed that over the past five years, scientific papers described exoplanets as being much cooler than predicted by theoretical models.

"It seemed to be a trend - a new phenomenon," Lewis said. "The exoplanets were consistently colder than scientists would expect."

To date, astronomers have detected more than 4,100 exoplanets. Among them are "hot Jupiters," a common type of gas giant that orbits close to its host star. Thanks to the star's overwhelming gravity, hot Jupiters always have one side facing their star, a situation known as "tidal locking."

Therefore, as one side of the hot Jupiter broils, the planet's far side features much cooler temperatures. In fact, the hot side of the tidally locked exoplanet bulges like a balloon, shaping it like an egg.

From a distance of tens to hundreds of light-years, astronomers have traditionally treated an exoplanet's temperature as homogeneous - averaging over the whole planet - which makes it seem much colder than physics would dictate.
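As a back-of-the-envelope illustration of why a single averaged temperature can be misleading (the numbers are hypothetical and this is not the study's retrieval analysis), consider a tidally locked planet with a very hot dayside and a much cooler nightside: the arithmetic mean matches neither hemisphere, and because emitted flux scales as the fourth power of temperature, the hot side dominates the light a telescope actually receives.

```python
# Illustrative arithmetic only (hypothetical temperatures, not values from the
# paper): a single "average" temperature hides how different the two hemispheres
# of a tidally locked hot Jupiter really are, because emitted flux scales as T^4.
T_day, T_night = 2500.0, 1000.0          # kelvin, assumed for illustration

arithmetic_mean = (T_day + T_night) / 2  # the naive homogeneous description
# Single temperature that would reproduce the same total emitted flux:
flux_equivalent = ((T_day**4 + T_night**4) / 2) ** 0.25

print(f"arithmetic mean:   {arithmetic_mean:.0f} K")   # 1750 K
print(f"flux-equivalent T: {flux_equivalent:.0f} K")   # about 2100 K
print(f"dayside underestimated by {T_day - arithmetic_mean:.0f} K "
      f"with the homogeneous assumption")
```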

Temperatures on exoplanets - particularly hot Jupiters - can vary by thousands of degrees, according to lead author Ryan MacDonald, a researcher at CSI, who said wide-ranging temperatures can promote radically different chemistry on different sides of the planets.

After poring over exoplanet scientific papers, Lewis, MacDonald and research associate Jayesh Goyal solved the mystery of seemingly cooler temperatures: Astronomers' math was wrong.

"When you treat a planet in only one dimension, you see a planet's properties - such as temperature - incorrectly," Lewis said. "You end up with biases. We knew the 1,000-degree differences were not correct, but we didn't have a better tool. Now, we do."

Astronomers may now gauge the molecular makeup of exoplanets with much more confidence.

"We won't be able to travel to these exoplanets any time in the next few centuries, so scientists must rely on models," MacDonald said, explaining that when the next generation of space telescopes get launched starting in 2021, the detail of exoplanet datasets will have improved to the point where scientists can test the predictions of these three-dimensional models.

"We thought we would have to wait for the new space telescopes to launch," said MacDonald, "but our new models suggest the data we already have - from the Hubble Space Telescope - can already provide valuable clues."

With updated models that incorporate current exoplanet data, astronomers can tease out the temperatures on all sides of an exoplanet and better determine the planet's chemical composition.

Said MacDonald: "When these next-generation space telescopes go up, it will be fascinating to know what these planets are really like."

Credit: 
Cornell University

Examining heart extractions in ancient Mesoamerica

image: Human heart sacrifices in Mesoamerica

Image: 
CINVESTAV Unidad Mérida

Sacrificial rituals featuring human heart extraction were a prevalent religious practice throughout ancient Mesoamerican societies. Intended as a means of appeasing and honoring certain deities, sacrifices served as acts of power and intimidation as well as demonstrations of devotion and gratitude. Human sacrifices were highly structured, complex rituals performed by elite members of society, and the ceremonies included a myriad of procedures imbued with symbolic significance.

The specific techniques performed, the instrumentation utilized, and the underlying mythology motivating sacrifices varied across civilizations. Given the diversity of sacrificial rituals throughout Mesoamerica, Vera Tiesler and Guilhem Olivier assert that an interdisciplinary approach incorporating scientific and humanistic evidence is needed in order to gain more nuanced insights into the procedural elements and the religious implications of human sacrifice during the Classic and Postclassic periods.

In the study, "Open Chests and Broken Hearts: Ritual Sequences and Meanings of Human Heart Sacrifice in Mesoamerica," published in Current Anthropology, Tiesler and Olivier conduct an anatomical analysis of skeletal evidence and compare it with systematically checked historical sources and over 200 instances of ceremonial heart extraction in codices. Focusing on the location of openings created in the chest to allow for the removal of a victim's heart and blood, the authors examine the resulting fractures and marks in articulated skeletons to infer about the nature of the entry wound and the potential instrumentation used.

The breadth of source material and the multitude of disciplinary approaches have led to debate among scholars. While the archaeological record provides evidence of these ceremonies, less tangible elements of the rituals--such as the symbolism of these processes--may be harder to discern. Descriptions of human sacrifice and heart extraction can likewise be found in written witness testimonies and in Mesoamerican iconography. However, witness accounts were often inconsistent, especially concerning the position of the extraction site.

Utilizing forensic data in conjunction with an analysis of ethnohistorical accounts, the authors detail three distinct heart extraction methods: cutting directly under the ribs (subdiaphragmatic thoracotomy); making an incision between two ribs (intercostal thoracotomy); or horizontally severing the sternum to access the heart (transverse bilateral thoracotomy). While previous research indicates subdiaphragmatic thoracotomy was a common practice, Tiesler and Olivier expand upon the existing literature by providing reconstructions of intercostal thoracotomy and transverse bilateral thoracotomy.

In addition to providing a more comprehensive understanding of extraction techniques and devices, the study reveals new interpretations of the relationship between thoracotomy procedures and conceptualizations of the human body as a source of "vitalizing matter," or food for the gods. Hearts and blood were offered as sustenance to deities representing the sun and the earth in recognition of their sacrifices during the creation of the universe. Data--including linguistic analysis of ancient Mesoamerican terminology--reinforce suggestions that these rites served as acts of obligation, reciprocation, and re-enactment.

The interdisciplinary nature of the study enables future research by offering a framework for analyzing sacrificial rituals in other ancient societies, including ancient civilizations in the Andes and India.

Credit: 
University of Chicago Press Journals

Game theory suggests more efficient cancer therapy

ITHACA, N.Y. - Cancer cells not only ravage the body - they also compete with each other.

Cornell mathematicians are using game theory to model how this competition could be leveraged, so cancer treatment - which also takes a toll on the patient's body - might be administered more sparingly, with maximized effect.

Their paper, "Optimizing Adaptive Cancer Therapy: Dynamic Programming and Evolutionary Game Theory," published April 22 in Proceedings of the Royal Society B: Biological Sciences.

"There are many game theoretic approaches for modeling how humans interact, how biological systems interact, how economic entities interact," said the paper's senior author, Alex Vladimirsky, professor of mathematics in the College of Arts and Sciences. "You could also model interactions between different types of cancer cells, which are competing to proliferate inside the tumor. If you know exactly how they're competing, you can try to leverage this to fight cancer better."

Vladimirsky and the paper's lead author, doctoral student Mark Gluzman, collaborated with oncologist and co-author Jacob Scott of the Cleveland Clinic. They used evolutionary game theory to model the interactions of three subpopulations of lung cancer cells that are differentiated by their relationship to oxygen: glycolytic cells (GLY), vascular overproducers (VOP) and defectors (DEF).

In this model, previously co-developed by Scott, GLY cells are anaerobic (i.e., they do not require oxygen); VOP and DEF cells both use oxygen, but only VOP cells are willing to expend extra energy to produce a protein that will improve the vasculature and bring more oxygen to the cells.

Vladimirsky likens their competition to a game of rock, paper, scissors in which a million people are vying against each other. If the majority of participants choose to play rock, a greater number of players will be tempted to switch to paper. As the number of people switching to paper increases, fewer people will play rock and many more will shift to playing scissors. As the popularity of scissors grows, rock will become an attractive option again, and so on.
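To see that cycling in action, the short sketch below integrates standard replicator dynamics with a generic rock-paper-scissors payoff matrix. The matrix is a textbook example rather than the GLY/VOP/DEF payoffs from the paper; in the spirit of the study, administering a drug would correspond to temporarily swapping in a different payoff matrix.

```python
# A minimal sketch of replicator dynamics with a rock-paper-scissors-style
# payoff matrix. The matrix below is a generic cyclic example, NOT the
# GLY/VOP/DEF payoffs from the paper; it just shows how three competing
# strategies can chase each other in oscillating cycles.

# PAYOFF[i][j] = payoff to strategy i when meeting strategy j
PAYOFF = [
    [0.0, -1.0,  1.0],   # "rock"
    [1.0,  0.0, -1.0],   # "paper"
    [-1.0, 1.0,  0.0],   # "scissors"
]

def step(x, dt=0.01):
    """One Euler step of the replicator equation dx_i/dt = x_i (f_i - f_avg)."""
    fitness = [sum(PAYOFF[i][j] * x[j] for j in range(3)) for i in range(3)]
    f_avg = sum(x[i] * fitness[i] for i in range(3))
    x = [x[i] + dt * x[i] * (fitness[i] - f_avg) for i in range(3)]
    total = sum(x)                      # renormalize against numerical drift
    return [xi / total for xi in x]

x = [0.5, 0.3, 0.2]                     # initial mix of the three strategies
for t in range(4001):
    if t % 1000 == 0:
        print(f"t={t:5d}  frequencies = {[round(xi, 3) for xi in x]}")
    x = step(x)
```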

"So you have three populations, three competitive strategies, undergoing these cyclic oscillations," said Vladimirsky, who directs the Center for Applied Mathematics. "Without a drug therapy, the three subtypes of cancer cells may follow similar oscillating trajectories. Administering drugs can be viewed as temporarily changing the rules of the game.

"A natural question is how and when to change the rules to achieve our goals at a minimal cost - both in terms of the time to recovery and the total amount of drugs administered to the patient," he said. "Our main contribution is in computing how to optimally time these periods of drug treatment adaptively. We basically developed a map that shows when to administer drugs based on the current ratio of different subtypes of cancer."

In current clinical practice, cancer patients typically receive chemotherapy at the highest dosage their body can safely tolerate, and the side effects can be harsh. In addition, such a continuous treatment regimen often leads the surviving cancer cells to develop drug resistance, making further therapy far more difficult. The team's paper shows that a well-timed "adaptive" application could potentially lead to a patient's recovery with a greatly reduced amount of drugs.

But Vladimirsky cautions that, as is often the case in mathematical modeling, reality is much messier than theory. Biological interactions are complicated, often random, and can vary from patient to patient.

"Our optimization approach and computational experiments were all based on a particular simplified model of cancer evolution," he said. "In principle, the same ideas should also be applicable to much more detailed, and even patient-specific, models, but we are still a long way from there. We view this paper as a necessary early step on the road to practical use of adaptive, personalized drug-therapy. Our results are a strong argument for incorporating timing optimization into the protocol of future clinical trials."

Credit: 
Cornell University

MSU professor collaborates with international colleagues in Reviews of Modern Physics journal article

image: MSU Professor Alexandra Gade collaborated with international colleagues for a Review of Modern Physics article about shell evolution of exotic nuclei. The graphic displays the chart of nuclei, or proton vs. neutron number, and indicates the magic numbers that were shown to change for short-lived nuclei at the fringes of the chart. To understand the production of the elements in the Universe, the properties, including shell structure, of such nuclei have to be understood.

Image: 
Facility for Rare Isotope Beams

In an atomic nucleus, protons and neutrons, collectively called nucleons, are bound together by nuclear forces. These forces describe the interactions between nucleons, which cause them to occupy states grouped in shells, where each shell has a different energy and can host a certain number of nucleons. A nucleus is said to be magic when its neutrons or protons exactly fill up their respective shells to the rim. Such magic nuclei are especially well bound and have properties that make them stand out. In fact, the variation of the properties of nuclei with nucleon number led to the formulation of the celebrated nuclear shell model some 70 years ago, with its magic numbers 2, 8, 20, 28, 50, 82 and 126, which has had spectacular success in describing many of the properties of the stable nuclei that make up the world around us.

With the advent of particle accelerator facilities, short-lived nuclei - so-called rare isotopes - that have, for example, many more neutrons than protons, can be produced and subjected to experimentation. Studies on such exotic nuclei revealed that the magic numbers are not as immutable as one might have expected from the rare isotopes' stable cousins with fewer neutrons. New magic numbers were found, and the ones known from stable nuclei can be absent for some short-lived nuclei. This is referred to as shell evolution.

On Earth, such exotic short-lived nuclei only exist for a fleeting moment produced at accelerator facilities. In the Universe, however, they are constantly formed in stars, e.g., in explosions on the surface of neutron stars, in supernovae, or in the violent collisions of neutron stars. In fact, the reactions and decays of the rare isotopes determine the elemental abundances observed in the Universe. If we ever want to understand how the visible matter around us came to be, we must understand and be able to model the properties of the exotic nuclei.

Michigan State University Professor Alexandra Gade collaborated with colleagues from Japan and France on an extensive review article in the prestigious Reviews of Modern Physics journal on the forces behind the observed shell evolution of exotic nuclei. The article reviews the state of the field and connects experimental observations to theoretical advancements in the description of rare isotopes.

In the future, advancements on the experimental and theoretical fronts are expected through new powerful laboratories, such as the Facility for Rare Isotope Beams at MSU, and high-performance computing, for example. The impact of understanding shell evolution stretches beyond nuclear astrophysics and extends to applications such as nuclear reactors, nuclear security, or nuclear medicine.

Credit: 
Michigan State University Facility for Rare Isotope Beams

Which foods do you eat together? How you combine them may raise dementia risk

MINNEAPOLIS - It's no secret that a healthy diet may benefit the brain. However, it may not only be what foods you eat, but what foods you eat together that may be associated with your risk of dementia, according to a new study published in the April 22, 2020, online issue of Neurology®, the medical journal of the American Academy of Neurology. The study looked at "food networks" and found that people whose diets consisted mostly of highly processed meats, starchy foods like potatoes, and snacks like cookies and cakes, were more likely to have dementia years later than people who ate a wider variety of healthy foods.

"There is a complex inter-connectedness of foods in a person's diet, and it is important to understand how these different connections, or food networks, may affect the brain because diet could be a promising way to prevent dementia," said study author Cécilia Samieri, PhD, of the University of Bordeaux in France. "A number of studies have shown that eating a healthier diet, for example a diet rich in green leafy vegetables, berries, nuts, whole grains and fish, may lower a person's risk of dementia. Many of those studies focused on quantity and frequency of foods. Our study went one step further to look at food networks and found important differences in the ways in which food items were co-consumed in people who went on to develop dementia and those who did not."

The study involved 209 people with an average age of 78 who had dementia and 418 people, matched for age, sex and educational level, who did not have dementia.

Participants had completed a food questionnaire five years previously describing what types of food they ate over the year, and how frequently, from less than once a month to more than four times a day. They also had medical checkups every two to three years. Researchers used the data from the food questionnaire to compare what foods were often eaten together by the patients with and without dementia.
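The idea of a co-consumption network can be illustrated with a toy sketch like the one below, which counts how often pairs of foods are reported together and keeps frequent pairs as edges. The foods, records and threshold are invented for illustration and do not reflect the study's data or its statistical methodology.

```python
# A toy sketch (hypothetical foods and records, not the study's data or
# statistical method) of building a simple co-consumption network: count how
# often pairs of foods are reported together, keep frequent pairs as edges.
from itertools import combinations
from collections import Counter

# Each record lists foods one (hypothetical) participant reports eating often.
records = [
    {"processed meat", "potatoes", "cookies", "alcohol"},
    {"processed meat", "potatoes", "cakes"},
    {"leafy greens", "fish", "nuts", "berries"},
    {"leafy greens", "whole grains", "fish"},
    {"processed meat", "potatoes", "alcohol"},
]

pair_counts = Counter()
for foods in records:
    for a, b in combinations(sorted(foods), 2):
        pair_counts[(a, b)] += 1

# Edges of the "food network": pairs co-reported at least twice.
network = {pair: n for pair, n in pair_counts.items() if n >= 2}
for pair, n in sorted(network.items(), key=lambda kv: -kv[1]):
    print(f"{pair[0]:>15} -- {pair[1]:<15} co-reported {n}x")
```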

Researchers found that, while there were few differences in the amounts of individual foods that people ate, the overall food groups or networks differed substantially between people who had dementia and those who did not.

"Processed meats were a "hub" in the food networks of people with dementia," said Samieri. "People who developed dementia were more likely to combine highly processed meats such as sausages, cured meats and patés with starchy foods like potatoes, alcohol, and snacks like cookies and cakes. This may suggest that frequency with which processed meat is combined with other unhealthy foods, rather than average quantity, may be important for dementia risk.
For example, people with dementia were more likely, when they ate processed meat, to accompany it with potatoes and people without dementia were more likely to accompany meat with more diverse foods, including fruit and vegetables and seafood."

Overall, people who did not have dementia were more likely to have a lot of diversity in their diet, demonstrated by many small food networks that usually included healthier foods, such as fruit and vegetables, seafood, poultry or meats.

"We found that more diversity in diet, and greater inclusion of a variety of healthy foods, is related to less dementia," said Samieri. "In fact, we found differences in food networks that could be seen years before people with dementia were diagnosed. Our findings suggest that studying diet by looking at food networks may help untangle the complexity of diet and biology in health and disease."

One limitation of the study was that participants completed a food questionnaire that relied on their ability to accurately recall diet rather than having researchers monitor their diets. Another limitation was that diets were only recorded once, years before the onset of dementia, so any changes in diet over time were unknown.

Credit: 
American Academy of Neurology

Promoting advantages of product category, such as e-cigarettes, can backfire

Industries often position products to tout the benefits of one category over another -- such as the higher-quality, traditional ingredients of a microbrew over mass-produced brewery beer. Researchers suggest that during the past decade, efforts to promote e-cigarettes as a healthier alternative to combustible cigarettes instead backfired, resulting in a product with a reputation as bad or worse than the existing cigarette category.

A review of press releases, news and retail coverage, research, and other documents on e-cigarettes between 2007 and 2018 found that the e-cigarette category of tobacco suffered in reputation over time.

"As the U.S. e-cigarette market has grown and producers have tried to increasingly differentiate from cigarettes, value-based distinctions between the two categories have eroded and social valuations of e-cigarettes as a whole have become increasingly negative," said Greta Hsu, professor at the University of California, Davis, Graduate School of Management, in her latest research. The paper, "The Double-Edged Sword of Opposition Category Positioning: A Study of the U.S. E-cigarette Category, 2007-2017," was published today in the journal Administrative Science Quarterly. The article is co-authored by Stine Grodal, professor at Boston University, Questrom School of Business.

Researchers collected more than 1,200 documents from different groups with interests in e-cigarettes, including producers, retailers, financial analysts, government and public health officials, and anti-tobacco organizations. They also conducted interviews with policymakers, public health advocates and industry leaders and analyzed over 2,000 abstracts of research papers on e-cigarettes.

E-cigarettes as a product category

Early e-cigarette entrepreneurs framed their product as a virtuous alternative to the older, existing category of cigarettes, in a manner similar to touting the benefits of wind energy, organic produce and grass-fed meat compared to existing products, researchers said. Because consumers were unfamiliar with e-cigarettes and their use, producers made sure to emphasize that e-cigarettes were similar to cigarettes in function and experience. They did so by designing the new technology to physically resemble cigarettes, using the term "cigarette" in their labeling, describing the e-cigarette as closely replicating the cigarette smoking experience and emphasizing that the audience for this new product was existing cigarette smokers looking to reduce harm. They secondarily emphasized potential health and social benefits of using e-cigarettes relative to smoking.

Before 2012, when the industry consisted of small, independent producers, the media discussed e-cigarettes in relatively positive terms and even promoted them as a way to help smokers quit smoking. "E-cigarettes were initially introduced as a healthier alternative to combustible cigarettes," researchers said. Producers also portrayed e-cigarettes as a product that would "not offend non-smokers" and avoided "the social and health concerns that smoking entails."

Boundaries weakened

Yet, over time, the symbolic boundaries separating e-cigarettes from cigarettes weakened. Tobacco company diversification into the e-cigarette category after 2012 played a major role, Hsu said.

References to e-cigarettes in various media as "dangerous" or "unhealthy" increased from 2013 to 2014, peaking in 2014, after all three "Big Tobacco" companies (Altria, R.J. Reynolds, and Lorillard) had entered the e-cigarette category with nationally distributed e-cigarette brands. In 2014, reports suggested that e-cigarettes had become the most commonly used tobacco product among youth.

By mid-2014, researchers said, stakeholders widely referred to e-cigarettes as "a gateway to cigarette use among youths, and thus a danger to the health, well-being and longevity of a new generation of Americans." Critics, from academics to elected officials, expressed fears that e-cigarettes had "re-normalized" smoking, undoing much of the anti-smoking campaigns of previous years, according to researchers. The stigma associated with cigarettes became diffused, intensified and generalized across the entire category of e-cigarettes.

By 2018, the FDA called the use of JUUL and other e-cigarette products, including flavored tobacco, a teenage epidemic. A 2019 outbreak of lung disease cases related to vaping THC further intensified public alarm. In contrast, several prominent United Kingdom health organizations, including the Royal College of Physicians, Cancer Research UK and the British Medical Association, continued to endorse a more positive view of e-cigarettes as substantially less harmful than combustible cigarettes.

Drawing distinctions

Research in product categories has shown that businesses should ensure that their claims of distinction or tradition are both clear and legitimate, the authors said. "Studies of oppositional categories such as French nouvelle cuisine, biodynamic wine, wind energy and grass-fed beef have found that new category proponents expend considerable effort elaborating how a new category is different from and normatively superior to an existing category," the researchers said.

The lack of a clear distinction between cigarettes and e-cigarettes allowed tobacco companies to enter the e-cigarette market and grow their market share, but this lack of distinction also blurred boundaries between the two products, the researchers concluded.

Credit: 
University of California - Davis

From Voldemort to Vader, fictional villains may draw us to darker versions of ourselves

As people binge watch TV shows and movies during this period of physical distancing, they may find themselves eerily drawn to fictional villains, from Voldemort and Vader to Maleficent and Moriarty. Rather than being seduced by the so-called dark side, the allure of evil characters has a reassuringly scientific explanation.

According to new research published in the journal Psychological Science, people may find fictional villains surprisingly likeable when they share similarities with the viewer or reader.

This attraction to potentially darker versions of ourselves in stories occurs even though we would be repulsed by real-world individuals who have similarly immoral or unstable behaviors. One reason for this shift, the research indicates, is that fiction acts like a cognitive safety net, allowing us to identify with villainous characters without tainting our self-image.

"Our research suggests that stories and fictional worlds can offer a 'safe haven' for comparison to a villainous character that reminds us of ourselves," [RK1] says Rebecca Krause, a PhD candidate at Northwestern University and lead author on the paper. "When people feel protected by the veil of fiction, they may show greater interest in learning about dark and sinister characters who resemble them."

Academics have long suggested people recoil from others who are in many ways similar to themselves yet possess negative features such as obnoxiousness, instability, and treachery. Antisocial features in someone with otherwise similar qualities, the thinking goes, may be a threat to a person's image of themselves.

"People want to see themselves in a positive light," notes Krause. "Finding similarities between oneself and a bad person can be uncomfortable." In contrast, Krause and her coauthor and advisor Derek Rucker find that putting the bad person in a fictional context can remove that discomfort and even reverse this preference. In essence, this separation from reality attenuates undesirable and uncomfortable feelings.

"When you are no longer uncomfortable with the comparison, there seems to be something alluring and enticing about having similarities with a villain," explains Rucker.

"For example, people who see themselves as tricky and chaotic may feel especially drawn to the character of The Joker in the Batman movies, while a person who shares Lord Voldemort's intellect and ambition may feel more drawn to that character in the Harry Potter series," said Krause.

To test this idea, the researchers analyzed data from the website CharacTour, an online, character-focused entertainment platform that had approximately 232,500 registered users at the time of analysis. One of the site's features allows users to take a personality quiz and see their similarity to different characters who had been coded as either villainous or not. Villains included characters such as Maleficent, The Joker, and Darth Vader. Nonvillains included Sherlock Holmes, Joey Tribbiani, and Yoda.

The anonymous data from these quizzes allowed the researchers to test whether people were attracted toward or repulsed by similar villains, using nonvillains as a baseline. Not surprisingly, people were drawn to nonvillains as their similarity increased. However, the results further suggested that users were most drawn to villains who share similarities with them.
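A rough sketch of that kind of comparison appears below, using an invented record format and made-up numbers purely to show the computation; it is not the researchers' analysis or the CharacTour data.

```python
# A hypothetical sketch (invented schema and numbers, not CharacTour data) of
# the comparison described above: does attraction to a character rise with
# similarity, for villains as well as for nonvillains used as a baseline?
records = [
    # (similarity 0-1, character is a villain?, user marked character a favorite?)
    (0.9, True,  True), (0.8, True,  True), (0.2, True,  False),
    (0.1, True,  False), (0.9, False, True), (0.7, False, True),
    (0.3, False, False), (0.2, False, True),
]

def favorite_rate(rows):
    """Fraction of records in which the user favorited the character."""
    return sum(fav for _, _, fav in rows) / len(rows) if rows else float("nan")

for villain in (False, True):
    low  = [r for r in records if r[1] == villain and r[0] <  0.5]
    high = [r for r in records if r[1] == villain and r[0] >= 0.5]
    label = "villains   " if villain else "nonvillains"
    print(f"{label}: favorite rate {favorite_rate(low):.2f} (low similarity) "
          f"vs {favorite_rate(high):.2f} (high similarity)")
```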

The researchers believe that similarities to story villains do not threaten the self in the way real-life villains would.

"Given the common finding that people are uncomfortable with and tend to avoid people who are similar to them and bad in some way, the fact that people actually prefer similar villains over dissimilar villains was surprising to us," notes Rucker. "Honestly, going into the research, we both were aware of the possibility that we might find the opposite."

The current data do not identify which behaviors or characteristics the participants found attractive. Further research is needed to explore the psychological pull of villains and whether people are drawn toward similar villains in fiction because people look for chances to explore their own personal dark side.

"Perhaps fiction provides a way to engage with the dark aspects of your personality without making you question whether you are a good person in general," concludes Krause.

Credit: 
Association for Psychological Science

Researchers use electrostatic charge to assemble particles into materials mimicking gemstones, salt

image: On the left, tiny crystals are imaged using a scanning electron microscope, distinguishing the individual building blocks, which consist of spherical polystyrene beads.
On the right, larger crystals are imaged with a regular iPhone camera, revealing bright colors similar to naturally occurring opals.

Image: 
Theodore Hueckel, Sacanna Lab at NYU

Using just electrostatic charge, common microparticles can spontaneously organize themselves into highly ordered crystalline materials--the equivalent of table salt or opals, according to a new study led by New York University chemists and published in Nature.

"Our research shines new light on self-assembly processes that could be used to manufacture new functional materials," said Stefano Sacanna, associate professor of chemistry at NYU and the study's senior author.

Self-assembly is a process in which tiny particles recognize each other and bind in a predetermined manner. These particles come together and assemble into something useful spontaneously, after a triggering event, or a change in conditions.

One approach to programming particles to assemble in a particular manner is to coat them with DNA strands; the genetic code instructs the particles on how and where to bind with one another. However, because this approach requires a considerable amount of DNA, it can be expensive and is limited to making very small samples.

In their study in Nature, the researchers took a different approach to self-assembly using a much simpler method. Instead of using DNA, they used electrostatic charge.

The process is similar to what happens when you mix salt into a pot of water, Sacanna explained. When salt is added to water, the tiny crystals dissolve into negatively charged chloride ions and positively charged sodium ions. When the water evaporates, the positively and negatively charged particles recombine into salt crystals.

"Instead of using atomic ions like those in salt, we used colloidal particles, which are thousands of times bigger. When we mix the colloidal particles together under the right conditions, they behave like atomic ions and self-assemble into crystals," said Sacanna.

The process allows for making large quantities of materials.

"Using the particles' natural surface charge, we managed to avoid doing any of the surface chemistry typically required for such elaborate assembly, allowing us to easily create large volumes of crystals," said Theodore Hueckel, postdoctoral researcher at NYU and the study's first author.

In addition to creating salt-like colloidal materials, the researchers used self-assembly to create colloidal materials that mimic gemstones--in particular, opals. Opals are iridescent and colorful, a result of their inner crystalline microstructure and its interaction with light. In the lab, the researchers created their test-tube gemstones with very similar inner microstructures to opals.

"If you take a highly magnified image of an opal, you will see the same tiny spherical building blocks lined up in a regular fashion," added Hueckel.

Using electrostatic charge for self-assembly not only enables researchers to mimic materials found in nature, but also offers advantages beyond what occurs naturally. For instance, they can adjust the size and shape of the positively and negatively charged particles, which allows for a wide range of different crystalline structures.

"We're inspired by nature's ionic crystals, but we believe we'll move beyond their structural complexity by utilizing all of the design elements uniquely available to colloidal building blocks," said Hueckel.

Credit: 
New York University

Human uterus colonized by clones with cancer-driving mutations that arise early in life

Many cells in the inner lining of the uterus carry 'cancer-driving' mutations that frequently arise early in life, report scientists from the Wellcome Sanger Institute, the University of Cambridge and their collaborators. The research team conducted whole-genome sequencing of healthy human endometrium, providing a comprehensive overview of the rates and patterns of DNA changes in this tissue.

The work, published today (22 April 2020) in Nature, provides insights into the earliest stages of uterine cancer development.

The endometrium is the inner part of the uterus, more commonly known as the womb lining. It is regulated by hormones such as oestrogen and progesterone and enters different states during childhood, reproductive years, pregnancy and after menopause.

Uterine cancer is the fourth most common cancer in women in the UK, accounting for five per cent of all new female cancer cases. Around 9,400 new cases are diagnosed every year, leading to the death of 2,300 women. Most cases occur in the seventh and eighth decades of life. Since the early 1990s, the incidence of uterine cancer has risen by 55 per cent in the UK.

All cancers occur due to changes in DNA, known as somatic mutations, which continuously occur in all of our cells throughout life. A tiny fraction of these somatic mutations can contribute to a normal cell turning into a cancer cell and are known as 'driver' mutations, which occur within a subset of 'cancer' genes.

This study used whole-genome sequencing to better understand the genetic changes in healthy endometrial tissue. The team developed technology to sequence the genomes of small numbers of cells from individual glands in the endometrial epithelium, the tissue layer that sheds and regenerates during a woman's menstrual cycle.

Laser-capture microscopy was used to isolate 292 endometrial glands from womb tissue samples donated by 28 women aged 19 to 81 years, before DNA from each gland was whole-genome sequenced. The team then searched for somatic mutations in each gland by comparing them with whole genome sequences from other tissues from the same individuals.
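Conceptually, that comparison works like a set difference, as in the simplified sketch below with invented variant identifiers: anything present in a gland but absent from the donor's other tissues is a candidate somatic mutation. A real pipeline would of course involve far more careful variant calling and filtering.

```python
# A simplified sketch (hypothetical variant lists, not the study's pipeline) of
# the comparison described above: variants seen in an endometrial gland but not
# in other tissues from the same donor are treated as candidate somatic mutations.
gland_variants = {"chr1:12345A>G", "chr3:55501C>T", "chr9:70210G>A"}
matched_tissue = {"chr1:12345A>G", "chr7:99000T>C"}   # germline / shared variants

candidate_somatic = gland_variants - matched_tissue
print(sorted(candidate_somatic))   # variants private to the gland
```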

The researchers found that a high proportion of cells carried driver mutations, even though they appeared completely normal under the microscope. Many of these driver mutations appear to have arisen early in life, in many cases during childhood.

Dr Luiza Moore, the lead researcher based at the Wellcome Sanger Institute, said: "Human endometrium is a highly dynamic tissue that undergoes numerous cycles of remodelling during female reproductive years. We identified frequent cancer driver mutations in normal endometrium and showed that many such events had occurred early in life, in some cases even before adolescence. Over time, these mutant stem cells accumulate further driver mutations."

Despite the early occurrence of the first cancer-driver mutations, it takes several decades for a cell to accumulate the remaining drivers that will lead to invasive cancer. Typically, three to six driver mutations in the same cell are required for cancer to develop. As such, the vast majority of normal cells with driver mutations never convert into invasive cancers. When an invasive cancer develops, it may have been silently evolving within us for most of our lifetime.

Dr Kourosh Saeb-Parsy, of the University of Cambridge and Director of the Cambridge Biorepository for Translational Medicine (CBTM), said: "Incidence of uterine cancers have been steadily rising in the UK for several decades, so knowing when and why genetic changes linked to cancer occur will be vital in helping to reverse this trend. This research is an important step and wouldn't have been possible without the individuals who gifted precious samples for this study, including transplant donors and their families."

Professor Sir Mike Stratton, Director of the Wellcome Sanger Institute, said: "New technologies and approaches to investigating DNA mutations in normal tissues are providing profound insights into the procession of genetic changes that convert a normal cell into a cancer cell. The results indicate that, although most cancers occur at relatively advanced ages, the genetic changes that underlie them may have started early in life and we may have been incubating the developing cancer for most of our lifetime."

Credit: 
Wellcome Trust Sanger Institute

Scientists have devised a method for gentle laser processing of perovskites at nanoscale

image: Paper illustration.

Image: 
Small

Scientists of Far Eastern Federal University (FEFU), in partnership with colleagues from ITMO University and universities in Germany, Japan, and Australia, have developed a method for precise, fast and high-quality laser processing of halide perovskites (CH3NH3PbI3), promising light-emitting materials for solar energy, optical electronics, and metamaterials. Structured by very short pulses from a femtosecond laser, the perovskites become functional nanoelements of unprecedented quality. A related article is published in Small.

Perovskites were discovered in the first half of the 19th century in the Urals (Russia) in the form of a mineral consisting of calcium, titanium and oxygen atoms. Today, due to their unique properties, perovskites are up-and-coming materials for solar energy and for the development of light-emitting devices for photonics, i.e. LEDs and microlasers. They are among the most intensively studied materials, attracting the interest of scientific groups from all around the world.

The major drawback is complicated processing. Perovskites easily degrade under the influence of an electron beam, liquids or temperature, losing the properties scientists are so interested in. This significantly complicates the manufacturing of functional perovskite nanostructures by common methods such as electron beam lithography.

Scientists from FEFU (Vladivostok, Russia) and ITMO University (St. Petersburg, Russia) teamed up with foreign colleagues and solved this problem by proposing a unique technology for the processing of organo-inorganic perovskites using femtosecond laser pulses. The output was high-quality nanostructures with controlled characteristics.

"It is very difficult to nanostructurize conventional semiconductors, such as gallium arsenide, using a powerful pulsed laser," says Sergey Makarov, a leading researcher at ITMO University's Faculty of Physics and Engineering, "The heat is scattered in all directions and all the thin, sharp edges are simply distorted by this heat. It's like if you try to make a miniature tattoo with fine details, but due to the paint spreading out under the skin, you will just get an ugly blue spot. Perovskite has poor thermal conductivity, so our patterns turned out very precise and very small."

Laser scribing of perovskite films into individual blocks is an important technological step in the modern solar cell production chain. Until now, the process was not very accurate and was rather destructive to the perovskite material, as its outermost sections lost their functional properties due to thermal degradation. The new technology can help solve this problem, allowing fabrication of high-performing solar cells.

"Perovskite represents a complex material consisting of organic and inorganic parts. We used ultrashort laser pulses for fast heating and targeted evaporation of the organic part of perovskite that proceeds at rather low temperature of 160 C0. Laser intensity was adjusted in such a way to produce melting/evaporation of the organic part leaving inorganic one unaffected. Such nondestructive processing allowed us to achieve an unprecedented quality of produced perovskite functional structures". Said one of the technology developers, Alexey Zhizhchenko, a researcher at the SEC "Nanotechnology" of FEFU School of Engineering.

Scientists of FEFU and ITMO University pointed to three areas where their development can give tangible results.

The first is the recording of information that the user can read under certain conditions only.

"We have demonstrated the relevance of our approach by producing diffraction gratings and microstrip lasers with the ultimately small width of only 400 nanometers. Such characteristic dimensions pave the way towards development of active elements of future optical communication chips and computers". said Alexey Zhizhchenko.

Secondly, with the help of a laser, one can change the visible color of a perovskite fragment without applying any dye. The material can be made yellow, black, blue or red, depending on the need.

"This may be utilized to perform solar panels of all colors of the rainbow. The modern architecture allows covering the entire surface of the building by solar panels, the point is not all customers want plain black panels", Sergey Makarov said.

The third application is the manufacturing of nanolasers for optical sensors and optical chips, which transmit information not with a flow of electrons but with photons.

Simple, fast and cost-effective production of such elements could bring about a new era of computer technology working on the principles of controlled light. Processing perovskites with the proposed technology makes it possible to produce thousands, even hundreds of thousands, of nanolasers per minute. Introducing the technology to industry will bring the world closer to the development of optical computers.

"Another key feature of the proposed technology is that it allows layer-by-layer thinning of the perovskites. This opens the way to design and fabricate more complicated 3D microstructures from perovskite, for example, micro-scale vortex-emitting lasers, which are highly demanded for information multiplexing in next-generation optical communications. Importantly, such processing preserves and even improves the light-emitting properties of thinned layer passivated due to modification of chemical composition", said team member Aleksandr Kuchmizhak, research fellow at the FEFU Center for Neurotechnology, VR and AR.

This study gathered specialists from FEFU, ITMO University, IAPC FEB RAS, Joint Institute for High Temperatures of the RAS, The Ruhr-University Bochum (Germany), Tokai University (Japan), and Swinburne University of Technology (Australia).

Previously, in the spring of 2019, a team of scientists from FEFU, ITMO University, The University of Texas at Dallas and The Australian National University developed an effective, fast and cheap way to fabricate perovskite microdisk lasers as promising sources of intense coherent light irradiation for optical microchips and optical computers of the new generation.

Credit: 
ITMO University

Hungry galaxies grow fat on the flesh of their neighbours

image: Simulation showing the distribution of dark matter density overlaid with the gas density. The image clearly shows the gas channels connecting the central galaxy with its neighbours.

Image: 
Gupta et al/ASTRO 3D/ IllustrisTNG collaboration.

Galaxies grow large by eating their smaller neighbours, new research reveals.

Exactly how massive galaxies attain their size is poorly understood, not least because they swell over billions of years. But now a combination of observation and modelling from researchers led by Dr Anshu Gupta from Australia's ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D) has provided a vital clue.

In a paper published in the Astrophysical Journal, the scientists combine data from an Australian project called the Multi-Object Spectroscopic Emission Line (MOSEL) survey with a cosmological modelling program running on some of the world's largest supercomputers in order to glimpse the forces that create these ancient galactic monsters.

By analysing how gases within galaxies move, Dr Gupta said, it is possible to discover the proportion of stars made internally - and the proportion effectively cannibalised from elsewhere.

"We found that in old massive galaxies - those around 10 billion light years away from us - things move around in lots of different directions," she said.

"That strongly suggests that many of the stars within them have been acquired from outside. In other words, the big galaxies have been eating the smaller ones."

Because light takes time to travel through the universe, galaxies further away from the Milky Way are seen at an earlier point in their existence. Dr Gupta's team found that observation and modelling of these very distant galaxies revealed much less variation in their internal movements.

"We then had to work out why 'older', closer big galaxies were so much more disordered than the 'younger', more distant ones," said second author ASTRO 3D's Dr Kim-Vy Tran, who like Dr Gupta, is based at the UNSW Sydney.

"The most likely explanation is that in the intervening billions of years the surviving galaxies have grown fat and disorderly through incorporating smaller ones. I think of it as big galaxies having a constant case of the cosmic munchies."

The research team - which included scientists from other Australian universities plus institutions in the US, Canada, Mexico, Belgium and the Netherlands - ran their modelling on a specially designed set of simulations known as IllustrisTNG.

This is a multi-year, international project that aims to build a series of large cosmological models of how galaxies form. The program is so big that it has to run simultaneously on several of the world's most powerful supercomputers.

"The modelling showed that younger galaxies have had less time to merge with other ones," said Dr Gupta.

"This gives a strong clue to what happens during an important stage of their evolution."

Credit: 
ARC Centre of Excellence for All Sky Astrophysics in 3D (ASTRO 3D)

HKUST scientists discover how multiple RNA elements control MicroRNA biogenesis

image: The model of Microprocessor on pri-miRNA. Pri-miRNA is illustrated with ribbons. Microprocessor is a trimeric complex, consisting of DROSHA (green) and DGCR8 dimer (yellow). The background is the word cloud generated using our lab's publications since 2018.

Image: 
The Hong Kong University of Science and Technology

In humans (as well as in all other organisms), genes encode proteins, which in turn regulate the many specific cellular functions of the body. The genetic information found in our DNA is first converted into messenger RNA (mRNA) by a process called transcription. The mRNA then acts as a template that is read by intracellular organelles called ribosomes, which create (or translate) the appropriate protein from the correct amino acid components. MicroRNAs (miRNAs) are short noncoding RNAs that do not encode any protein. However, they play crucial roles in regulating the stability of mRNAs and their translation into proteins. In addition, abnormal expression of miRNAs is associated with a number of human diseases, such as various types of cancer, because defective miRNAs can lead to increased levels of oncogenic mRNAs or decreased levels of tumor suppressor mRNAs. For this reason, the ability to manipulate the synthesis of miRNAs in cells is crucial for developing therapies that correct defects in their expression and thus treat these diseases.

To maintain normal functions in cells, the expression of miRNAs must be strictly controlled. The human Microprocessor complex is responsible for cleaving primary miRNAs (pri-miRNAs) to produce miRNA precursors (pre-miRNAs), and because this step ultimately determines the functions of miRNAs, it was the focus of this research. A number of RNA elements located in some parts of the pri-miRNA structure (i.e., the apical loop, basal segment and lower stem) have previously been shown to affect the processing of pri-miRNAs by Microprocessor. However, a long-standing puzzle was whether there are any RNA elements in the upper stem region of pri-miRNAs that might affect the cleavage activity of Microprocessor and hence subsequent cellular processes.

A research team led by Prof. Tuan Anh NGUYEN, Assistant Professor in the Division of Life Science at HKUST, recently discovered how RNA elements in the upper stem of pri-miRNAs affect the action of Microprocessor*. In this study, the team first established a reliable protein purification system using human cells, and then conducted high-throughput enzymology assays to investigate the catalytic mechanism of Microprocessor as it cleaved approximately 200,000 randomized pri-miRNAs. They then read out the assay results using next-generation sequencing, and the large dataset they obtained was analyzed with various bioinformatics tools. Finally, the findings from this analysis were verified in human cells. Prof. Nguyen and his team discovered multiple RNA elements in the upper stem of pri-miRNAs that are crucial for regulating the expression of various miRNAs in human cells. When pri-miRNAs carrying these elements were cleaved by Microprocessor, fewer products were made or alternative cleavage sites were used instead of the original ones. This altered the expression levels of the miRNAs, dysregulating the subsequent production of proteins from mRNAs and leading to abnormal human cell activities.
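
As a rough illustration of how such a high-throughput cleavage assay can be summarized (this sketch is hypothetical and is not the HKUST team's pipeline), sequencing reads can be tallied per randomized pri-miRNA variant to estimate a cleavage efficiency and the dominant cut site for each variant:

    # Hypothetical tally for a high-throughput cleavage assay: for each
    # randomized pri-miRNA variant, count reads from cleaved products versus
    # all reads to estimate cleavage efficiency, and record where the cuts
    # occurred. Variant names, sites and counts are made up for illustration.
    from collections import Counter, defaultdict

    # Each read is (variant_id, cleavage_site); site is None if the substrate
    # was recovered uncleaved.
    reads = [
        ("variant_0001", 21), ("variant_0001", 21), ("variant_0001", None),
        ("variant_0002", 23), ("variant_0002", None), ("variant_0002", None),
    ]

    totals = Counter()
    cut_sites = defaultdict(Counter)

    for variant, site in reads:
        totals[variant] += 1
        if site is not None:
            cut_sites[variant][site] += 1

    for variant in sorted(totals):
        cleaved = sum(cut_sites[variant].values())
        efficiency = cleaved / totals[variant]
        dominant = cut_sites[variant].most_common(1)
        print(variant, f"efficiency={efficiency:.2f}", f"dominant_site={dominant}")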

"Our work has enhanced our understanding about the differential levels of miRNAs in diverse cellular processes. Our new results may help us interpret the cause of many miRNA-related human diseases. Also, by specifically targeting the particular RNA elements we've identified, we should be able to restore the normal levels of many abnormally-expressing miRNAs in many miRNA-related disorders, leading to the potential development of clinical miRNA-based diagnostics," Shaohua LI, one of the two co-first authors of the paper said. "Our next step is to develop our high-throughput enzymology assays by designing new RNA substrates, and engineering proteins to identify more RNAs elements that are critical for accurate and efficient miRNA production," Trung Duc NGUYEN, the other co-first author, added.

"Last month, my team and I published another paper, this time in Nucleic Acids Research**, where we discovered other new and exciting RNA elements that control miRNA levels. We are currently using different approaches, including gene-editing technology, to modify the RNA elements we discovered in these two studies, in order to more accurately control the level of miRNAs in cellular systems. The expected outcome from this study will be to lay the foundation for the future development of these RNA element-targeting therapeutics for miRNA-related diseases." Prof. NGUYEN said.

Credit: 
Hong Kong University of Science and Technology

Human-caused warming will cause more slow-moving hurricanes, warn climatologists

Hurricanes moving slowly over an area can cause more damage than faster-moving storms, because the longer a storm lingers, the more time it has to pound an area with storm winds and drop huge volumes of rain, leading to flooding. The extraordinary damage caused by storms like Dorian (2019), Florence (2018) and Harvey (2017) prompted Princeton's Gan Zhang to wonder whether global climate change will make these slow-moving storms more common.

Zhang, a postdoctoral research associate in atmospheric and oceanic sciences, decided to tackle the question by using a large ensemble of climate simulations. He worked with an international team of researchers from the Geophysical Fluid Dynamics Laboratory on Princeton University's Forrestal campus and the Meteorological Research Institute in Tsukuba, Japan. The results of this work appear in the April 22 issue of Science Advances.

Zhang and his colleagues selected six potential warming patterns for the global climate, then ran 15 different possible initial conditions on each of the six patterns, resulting in an ensemble of 90 possible futures. In all 90 simulations, they told the computers to assume that global carbon dioxide levels had quadrupled and the planet's average temperature had risen by about 4 degrees Celsius -- a level of warming that experts predict could be reached before the end of the century, if no action is taken to curb fossil fuel use.
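
As a minimal sketch of how that experimental grid is laid out (the labels below are placeholders, not the study's actual experiment names), each of the six warming patterns is paired with each of the 15 initial conditions to give the 90 simulated futures:

    # Enumerate the 6 x 15 = 90-member ensemble described above.
    # Pattern and initial-condition labels are illustrative placeholders.
    from itertools import product

    warming_patterns = [f"pattern_{i}" for i in range(1, 7)]      # 6 warming patterns
    initial_conditions = [f"ic_{j:02d}" for j in range(1, 16)]    # 15 initial conditions

    ensemble = list(product(warming_patterns, initial_conditions))
    assert len(ensemble) == 90  # one simulated future per (pattern, initial condition) pair

    for pattern, ic in ensemble[:3]:
        print(f"run climate simulation: {pattern}, {ic}, 4xCO2 (~4 C warming)")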

"Our simulations suggest that future anthropogenic warming could lead to a significant slowing of hurricane motion, particularly in some populated mid-latitude regions," Zhang said. His team found about the storms' forward motion would slow by about 2 miles per hour -- about 10 to 20% of the current typical speeds -- at latitudes near Japan and New York City.

"This is the first study we are aware of that combines physical interpretation and robust modeling evidence to show that future anthropogenic warming could lead to a significant slowing of hurricane motion," he said.

"Since the occurrence of Hurricane Harvey, there has been a huge interest in the possibility that anthropogenic climate change has been contributing to a slow down in the movement of hurricanes," said Suzana Camargo, the Marie Tharp Lamont Research Professor at Columbia University's Lamont-Doherty Earth Observatory, who was not involved in this research. "In a new paper, Gan Zhang and collaborators examined the occurrence of a slowdown of tropical cyclones in climate model simulations. They showed that in this model, there is a robust slowdown of tropical cyclone motion, but this occurs mainly in the mid-latitudes, not in the tropics."

Why would the storms slow down? The researchers found that 4 degrees of warming would cause the westerlies -- strong currents blowing through the midlatitudes -- to push toward the poles. That shift is also accompanied by weaker mid-latitude weather perturbations. These changes could slow down storms near populated areas in Asia (where these storms are called typhoons or cyclones, not hurricanes) and on the U.S. eastern seaboard.

Usually when people talk about hurricane speeds, they're referring to the winds whipping around the eye of the storm. Those wind speeds are what determine a storm's strength -- a Category 5 hurricane, for example, has sustained winds of more than 157 miles per hour. By contrast, Zhang and his colleagues are looking at the "translational motion," sometimes called the "forward speed" of a storm, the speed at which a hurricane moves along its path. (The term comes from geometry, where a figure is "translated" when it slides from one part of a graph to another.) No matter how fast its winds are, a storm is considered "slow-moving" if its translational speed is low. Hurricane Dorian, which battered Grand Bahama Island from Sept. 1 to 3, 2019, was a Category 5 hurricane with wind gusts reaching 220 miles per hour, but it had a translational speed of just 1.3 mph, making it one of the slowest hurricanes ever documented.
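
As an illustration of how a forward speed is obtained (the fixes below are invented, not actual Hurricane Dorian best-track data), the translational speed is simply the great-circle distance between two successive center positions divided by the time between them; a center that drifts only a few miles in six hours works out to roughly 1.5 mph:

    # Estimate a storm's translational ("forward") speed from two successive
    # center fixes using the haversine great-circle distance.
    # Coordinates and the 6-hour interval are made up for illustration.
    from math import radians, sin, cos, asin, sqrt

    EARTH_RADIUS_MI = 3958.8  # mean Earth radius in miles

    def haversine_miles(lat1, lon1, lat2, lon2):
        # Great-circle distance between two (lat, lon) points, in miles.
        phi1, phi2 = radians(lat1), radians(lat2)
        dphi = radians(lat2 - lat1)
        dlmb = radians(lon2 - lon1)
        a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
        return 2 * EARTH_RADIUS_MI * asin(sqrt(a))

    def translational_speed_mph(fix1, fix2, hours_apart):
        # Forward speed of the storm center between two track fixes.
        return haversine_miles(*fix1, *fix2) / hours_apart

    fix_a = (26.5, -78.4)  # (latitude, longitude) in degrees, hypothetical
    fix_b = (26.6, -78.5)  # center barely moves over the next 6 hours
    print(f"forward speed: {translational_speed_mph(fix_a, fix_b, 6.0):.1f} mph")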

Are storms already slowing down?

Some researchers have suggested that tropical storm translation speeds have slowed over land regions in the United States since 1900. Zhang and his colleagues used their climate models to see if human-caused warming was responsible for the observed slowdown, but they couldn't find a compelling link, at least based on trends since 1950 in their simulations. In addition, they noted that observed slowing translational speeds reported in recent studies could arise primarily from natural variability rather than human-caused climate changes.

Zhang used the metaphor of dieting to explain the ambiguity of hurricane observations.

"If I go to the gym and eat fewer sweets," he said, "I would expect to lose weight. But if I'm only using a bathroom scale to weigh myself, I'm not going to get convincing data very soon, for many reasons including that my bathroom scale isn't the most accurate," he continued. "Assume after two weeks, I see some weak trend," he said. "I still can't tell whether it's due to exercise, diet or just randomness."

Similarly, the observed slowdown trend in hurricanes or tropical storms over the past century could be due to small-scale local changes or could just be random, he said.

"In the debate between 'Everything is caused by climate change' and 'Nothing is caused by climate change' -- what we are doing here is trying to offer that maybe not everything can be immediately attributed to climate change, but the opposite is not right, either," Zhang said. "We do offer some evidence that there could be a slowdown of translational motion in response to a future warming on the order of 4 degrees Celsius. Our findings are backed by physics, as captured by our climate models, so that's a new perspective that offers more confidence than we had before."

"Tropical Cyclone Motion in a Changing Climate," by Gan Zhang, Hiroyuki Murakami, Thomas Knutson, Ryo Mizuta and Kohei Yoshida, was published in the April 22 issue of Science Advances (DOI: 10.1126/sciadv.aaz7610). The research was supported by Princeton University's Cooperative Institute for Modeling the Earth System through the Predictability and Explaining Extremes Initiative.

Credit: 
Princeton University