Earth

Large marine parks can save sharks from overfishing threat

image: Caribbean reef shark.

Image: 
Andy Mann

'No-take' marine reserves - where fishing is banned - can reverse the decline in the world's coral reef shark populations caused by overfishing, according to an Australian study.

But University of Queensland, James Cook University (JCU) and University of Tasmania researchers found that existing marine reserves need to be much larger to be effective against overfishing.

UQ's Dr Ross Dwyer said the study estimated that no-take reserves that extend between 10 and 50 kilometres along coral reefs can achieve significant improvements in shark populations.

"Existing protected areas on coral reefs would need to be enforced as strict no-take reserves and be up to five times larger to effectively conserve reef sharks," Dr Dwyer said.

"Those in the Atlantic where reef sharks are generally less abundant would need to be on average 2.6 times larger than those in the Indian and Pacific Oceans."

Species such as grey reef sharks have experienced severe population declines across parts of their distribution, largely due to their low fecundity, late age at sexual maturity, and high susceptibility to fishing pressure.

They are listed as Near Threatened in the International Union for Conservation of Nature's (IUCN) Red List of Threatened Species.

The researchers combined large volumes of tracking data on five species of sharks found on coral reefs in the Indian, Pacific and Atlantic oceans with video survey data from 36 countries.

"This allowed us to predict the conservation benefits no-take reserves of different sizes could generate," Dr Dwyer said.

JCU Professor Colin Simpfendorfer said shark populations were in trouble in most parts of the world.

"Finding ways to rebuild their populations is critical to ensuring our oceans remain healthy," Professor Simpfendorfer said.

"This project is providing options for managers of coral reefs to address declines in shark populations which scientists know have occurred in many areas."

Dr Nils Krueck from the University of Tasmania said researchers now have the ability to estimate conservation and fishery impacts of marine reserves much more precisely.

"Our results show that marine parks for reef sharks need to be large. But if reserves extend along 15 kilometres of coral reef, then fishing mortality can be reduced by fifty per cent," Dr Krueck said.

The study, funded by the Shark Conservation Fund, is published in the journal Current Biology. (DOI: 10.1016/j.cub.2019.12.005)

Credit: 
University of Queensland

Study reveals missing link in mechanisms underlying fight-or-flight response

We've all felt the effects of an adrenaline rush. Faced with danger, real or perceived, the heart beats faster, breathing quickens and muscles tighten as the body prepares to fight a threat or flee from it.

The role of adrenaline in triggering the fight-or-flight response is one of the most well-studied phenomena in biology. However, the precise molecular mechanisms for how the hormone stimulates cardiac function have remained unclear.

Now, researchers from Harvard Medical School and Columbia University have solved the long-standing mystery of how adrenaline regulates a key class of membrane proteins--voltage-gated calcium channels--that are responsible for initiating the contraction of heart cells.

Using a technique known as proximity proteomics, the team discovered that under normal conditions, a protein called Rad dampens the activity of calcium channels. When heart cells are exposed to a drug mimicking adrenaline, Rad releases from the channel, leading to greater activity and stronger beating of the heart.

The findings, published in Nature on Jan. 22, provide a mechanistic description of how adrenaline stimulates the heart and present new targets for cardiovascular drug discovery.

In particular, the authors say, the results could open new paths for the development of drugs as effective as, but potentially safer than, beta-blockers--a widely prescribed class of medications that block the effects of adrenaline to address cardiovascular issues such as high blood pressure.

"Under normal circumstances, calcium channels in the heart work efficiently, but they have a handbrake on in the form of the protein Rad," said Marian Kalocsay, instructor in systems biology and director of proteomics in the Laboratory of Systems Pharmacology at Harvard Medical School and co-corresponding author with Steven Marx, professor of medicine at Columbia University Vagelos College of Physicians and Surgeons.

"When we need full power, adrenaline releases this handbrake so that these channels open faster and give the boost needed to fight or flee from danger," Kalocsay said.

The findings, the authors noted, yield insights that could be of interest for researchers in other fields, especially neuroscience, as voltage-gated calcium channels play a central role in neuronal excitation.

As primary drivers of cardiac function, voltage-gated calcium channels are embedded in the membranes of cardiomyocytes--the cells that constitute heart muscle. These channels open and close to control the flow of calcium ions into the cell. When open, the influx of calcium initiates heart contraction.

Adrenaline stimulates voltage-gated calcium channels by activating a protein known as PKA, which in turn activates the channel. It was thought for decades that PKA does this by altering specific regions on the channel known as PKA phosphorylation sites, but a growing body of evidence indicated that this hypothesis was incorrect.

In the current study, the team genetically engineered mice with cardiomyocytes lacking PKA phosphorylation sites. They found that modified cells continued to respond when stimulated by an adrenaline-like drug, suggesting the presence of an unknown factor.

The researchers turned to proximity proteomics, a technique that allowed them to identify nearly every protein located near voltage-gated calcium channels, at a distance of around 20 nanometers, or 10 times the width of a strand of DNA. They profiled proteins in both mouse cardiomyocytes and intact, functional mouse hearts, before and after exposure to an adrenaline-like drug.

The analyses revealed that only one protein, Rad, consistently exhibited a major change in levels after adrenaline exposure, decreasing by around 30 percent to 50 percent in the neighborhood of the channels.

To further investigate, the researchers recreated this signaling system outside of heart cells, by expressing both Rad and voltage-gated calcium channels in human kidney cells, which normally do not contain either. When cells with both Rad and calcium channels were exposed to an adrenaline-like drug, channel activity increased dramatically. Cells without Rad had little to no response. Until now, it had not been possible to reproduce calcium channel modulation in this manner because Rad as a critical ingredient was missing, the authors said.

Additional experiments confirmed that Rad functions to dampen the activity of voltage-gated calcium channels. When given an adrenaline-like signal, PKA modifies regions of the Rad protein, which then dissociates from the channel to increase its activity.

The discovery opens new avenues of research as voltage-gated calcium channels play central roles in a wide range of organ functions.

In addition, the techniques used in the study, including quantitative mass spectrometry and tandem mass tagging--pioneered by study co-author Steven Gygi, professor of cell biology at the Blavatnik Institute at Harvard Medical School--allow researchers to probe protein biology and interactions with unprecedented precision, including protein behavior in functional, intact organs, as was the case in this study.

The findings can inform new therapeutic approaches, the authors said. For example, disrupting the interaction between Rad and the calcium channel could enhance heart function by increasing calcium flow into cells. Conversely, blocking the modification of Rad by PKA may represent an alternative, more specific, strategy than beta-blockers to reduce calcium influx into the heart.

"It's exciting to finally solve how the cardiac calcium channel is stimulated in the fight-or-flight response. This mystery has remained stubbornly elusive for over 40 years," said Marx. "In the end, the underlying mechanism turned out to be simple and elegant. With this information, we can potentially design novel therapeutics targeting this pathway for the treatment of cardiac diseases."

Credit: 
Harvard Medical School

UW research expands bilingual language program for babies

image: UW student instructors, trained on the I-LABS SparkLing method, work with a group of toddlers at one of the participating infant education centers in Madrid.

Image: 
UW I-LABS

Knowledge of multiple languages has long been shown to have lifelong benefits, from enhancing communication skills to boosting professional opportunities to staving off the cognitive effects of aging.

When researchers at the University of Washington found that even babies whose parents are monolingual could rapidly learn a second language in a small classroom environment, a new challenge was born:

How could they expand their program?

One answer, the UW team found, was to create software that would train language tutors online -- allowing the researchers' curriculum and method to be replicated anywhere in the world.

A new study by UW's Institute for Learning & Brain Sciences, or I-LABS, part of researchers' ongoing work with infant education centers in Spain, not only found that bilingual teaching led to sustained English-language comprehension and vocabulary-building, but also that the method could be scaled up to serve more, and more economically diverse, children.

"We knew our research-based method worked to boost second language skills rapidly in infants, without negatively affecting their first language, but the question was, how can we train people worldwide to use it? Here, we show that online training works," said Naja Ferjan Ramírez, the lead author of both studies who is a new assistant professor of linguistics at the UW and a former I-LABS research scientist.

The study, published online Jan. 22 in Mind, Brain, and Education, extends previous research that examined whether and how infants can learn a second language in the context of an early education center, if they don't get that exposure at home. That 2017 study involved 280 children at four infant education centers in Madrid, Spain, and showed the effects of an interactive, play-based English-language program, compared to the standard bilingual program already available in Madrid schools.

The new study used the same curriculum but trained tutors differently, using an online program called SparkLing developed by I-LABS researchers. By testing a remote form of teacher training and providing lessons to larger groups of children, researchers explored how to spread the benefits of bilingual education across a wider population.

The I-LABS bilingual curriculum emphasizes social interaction, play and quality and quantity of language from teachers. The approach uses parentese, a slow, clear speaking style that often involves exaggerated vowels and intonation. Researchers created the SparkLing software in order to reach language tutors wherever they live. In the 2017 study, for example, tutors were trained at I-LABS. But to bring this method to entire schools or communities, online training was essential, researchers said.

In the new study, over 800 children in 13 infant education centers participated. The team grouped children, from ages 9 months to 33 months, into age-specific classes and focused on schools with much lower socioeconomic populations than were tested in the previous study.

"One of the most exciting aspects of the study is that we did our work in some of the very poorest neighborhood schools in Madrid, and we're thrilled to show that these children learn as impressively as those from more affluent neighborhoods. All children, given the right stimulation at the right time, can learn," said Patricia Kuhl, co-director of I-LABS and co-author of the paper.

Children's Spanish and English skills were assessed at the beginning of the study, midway through the school year and at the end of the year. Older children used a touch-screen based word-comprehension assessment tool in Spanish and English, matching words and pictures, and answering questions. All of the children also wore special vests outfitted with lightweight recorders to record any English words uttered by the infants during the 45-minute, daily language sessions.

At the midpoint of the school year, children who received the I-LABS method scored significantly higher in comprehension and word production than their control group peers: an average of nearly 50 words per child, per hour, compared to an average of about 14 words per child, per hour, in the control group.

About half of the children continued their lessons for an additional 18 weeks. At the end of that period, assessments showed that children who continued the lessons also continued to rapidly advance their second-language comprehension and production skills, while the group that returned to the original classroom maintained the English skills acquired after the first 18 weeks.

"Parents worldwide have a common problem: They want their children to speak a second language, but many don't speak that language themselves. We know that zero to 5 is a critical age, a window of opportunity for second language learning, and our newest study shows that when teachers in early education classrooms are trained online to use our method and curriculum, children's learning seems almost magical," said Kuhl, who is also a UW professor or speech and hearing sciences.

The researchers now hope to begin using this method in the United States, where about one-quarter of children are raised in homes where a language other than English is spoken.

Credit: 
University of Washington

NASA finds wind shear affected new Tropical Cyclone 09S   

image: On Jan. 23, 2020 at 4:35 a.m. EST (0935 UTC), the MODIS instrument that flies aboard NASA's Aqua satellite provided a visible image of Tropical Storm 09S in the Southern Indian Ocean.

Image: 
NASA Worldview

Tropical Cyclone 09S formed on Jan. 22 in the Southern Indian Ocean despite being affected by vertical wind shear and one day later, wind shear caused its demise. The end of 09S was caught by NASA's Aqua satellite.

On Jan. 23 at 4:35 a.m. EST (0935 UTC), the Moderate Imaging Spectroradiometer or MODIS instrument that flies aboard NASA's Aqua satellite provided a visible image of 09S. Forecasters at the Joint Typhoon Warning Center (JTWC) noted that satellite imagery shows "The system has severely degraded as the central convection collapsed and sheared eastward, exposing a ragged and weak low level circulation."

In general, wind shear is a measure of how the speed and direction of winds change with altitude. Tropical cyclones are like rotating cylinders of winds. Each level needs to be stacked on top of the others vertically in order for the storm to maintain strength or intensify. Wind shear occurs when winds at different levels of the atmosphere push against the rotating cylinder of winds, weakening the rotation by pushing it apart at different levels. Wind shear pushing from the west against Tropical Cyclone 09S is moving the bulk of clouds and showers east of the center.
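
For readers who want to make that "pushing apart" quantitative: forecasters often summarize shear as the magnitude of the vector difference between winds at a lower and an upper level of the storm. The sketch below uses invented wind values, not actual 09S data, simply to show the calculation.

```python
import math

def bulk_wind_shear(u_low, v_low, u_high, v_high):
    """Magnitude (same units as input) of the vector wind difference
    between an upper and a lower level of the atmosphere."""
    return math.hypot(u_high - u_low, v_high - v_low)

# Hypothetical winds in knots: low-level (near 850 hPa) and upper-level (near 200 hPa).
low_u, low_v = 5.0, 10.0      # eastward, northward components near the surface
high_u, high_v = 35.0, 5.0    # much stronger westerlies aloft

shear = bulk_wind_shear(low_u, low_v, high_u, high_v)
print(f"Bulk shear: {shear:.1f} kt")  # ~30 kt, generally hostile to a tropical cyclone
```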

On Jan. 23 at 0300 UTC (Jan. 22 at 10 p.m. EST) JTWC noted that Tropical Cyclone 09S was located near latitude 22.3 degrees south and longitude 71.0 degrees east, about 449 nautical miles south-southeast of Mauritius. Maximum sustained winds were near 35 knots (40 mph).

The final warning on 09S came at 10 a.m. EST (1500 UTC), when maximum sustained winds had dropped to 25 knots, making it a depression. At that time, 09S was located near 25.5 degrees south latitude and 71.9 degrees east longitude, about 845 nautical miles south-southeast of Mauritius.

Forecasters at the JTWC said the storm had rapidly deteriorated because of the increasing wind shear and movement into cooler waters.

Credit: 
NASA/Goddard Space Flight Center

Data from behind enemy lines: How Russia may have used Twitter to seize Crimea

image: Online discourse by users of social media suggests it can be used by governments as a source of military intelligence to estimate prospective casualties and costs incurred from occupying foreign territories.

Image: 
Courtesy of nyaberku

Online discourse by users of social media can provide important clues about the political dispositions of communities. New research suggests it can even be used by governments as a source of military intelligence to estimate prospective casualties and costs incurred from occupying foreign territories.

In a new University of California San Diego study, researchers examine data from Twitter during the 2014 conflict between Russia and Ukraine. The Russian television narrative - that a fascist coup had taken place - did not "catch on" in Ukrainian Russian-speaking communities. The only exception was Crimea. This could explain why Russia's forces did not advance further than Crimea's borders, as Russian analysts may have observed overt signals, including some from social media, that they would have faced strong and violent resistance.

"If you're a conservative Russian military planner, you only send special forces to places where you are fairly certain they will be perceived as liberators, not occupiers," said they study's first author Jesse Driscoll, associate professor of political science at the UC San Diego School of Global Policy and Strategy. "A violent occupation of Russian-speaking communities that didn't want the Russian soldiers to be there would have been a public relations disaster for Putin, so estimating occupation costs prospectively would have been a priority."

The study, published in Post-Soviet Affairs, does not present evidence that Russian analysts used Twitter data - only evidence consistent with its potential to be repurposed using the methods in the paper. Through reconstructing how the Russian-state narrative was received by Russian-speakers living in Ukraine, the researchers were able to determine the areas where it would have been safest for Russia to send special forces. This bore an eerie resemblance to the map of where Russian soldiers actually went - Crimea and a few probes in the far east of the country, but no further.

How Twitter could have been used by the Kremlin to determine if Russian soldiers would be welcomed as liberators or invaders

In the study, the data from Twitter was collected in real time beginning in August 2013. The researchers compiled tweets with GPS coordinates of social media users who had their location services turned on. Though data was collected from all over the world (roughly 940,000,000 tweets), the researchers filtered the data by time (the 188 days from February to August 2014), location (Ukraine) and language (Russian).

"We were most interested in Russian-speakers in Ukraine because that is the population which might have considered sedition," said Driscoll.

The researchers then created two dictionaries to identify key words associated with the two polarized and competing narratives of the news cycles at the time.

"All of this started with an event that the Kremlin still calls a 'coup' and Western governments call 'The Revolution of Dignity' - very different narratives there," Driscoll said. "The framing language of 'terrorism,' was prominent in anti-Kremlin users and 'fascism' was popular among pro-Kremlin tweets. These two narratives were frequently employed in news coverage during the six months in the study, including on Russian and Western television news programs."

The authors used the Twitter data to measure narrative uptake as a window into which storyline was favored in Russian-speaking communities. After manually screening for automated accounts ("bots"), this process yielded 5,328 tweets from 1,339 accounts, which were interpreted by a team of Russian-speakers in Ukraine who read each tweet to identify its political affiliation. Machine-learning algorithms were then used to create a much larger sample for analysis. With further filtering, the team identified 58,689 tweets as pro-Kremlin and 107,041 as anti-Kremlin. The researchers then mapped out the data over each Ukrainian state, or oblast, comparing the percentage of tweets in each of the two narrative categories.
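
As a rough illustration of that pipeline - keyword dictionaries, tweet labeling, then per-oblast tallies - here is a toy sketch. The keywords, tweets and region labels are invented for illustration; they are not the authors' dictionaries, classifier or data.

```python
from collections import Counter

# Toy keyword dictionaries standing in for the two competing narratives.
PRO_KREMLIN = {"fascism", "junta", "coup"}
ANTI_KREMLIN = {"terrorism", "occupation", "dignity"}

def label_tweet(text):
    """Dictionary-based labeling: count keyword hits for each narrative."""
    words = set(text.lower().split())
    pro, anti = len(words & PRO_KREMLIN), len(words & ANTI_KREMLIN)
    if pro > anti:
        return "pro"
    if anti > pro:
        return "anti"
    return "unclear"

# Hypothetical (region, tweet) pairs; the real data carried GPS coordinates.
tweets = [
    ("Crimea", "the coup in kiev is fascism"),
    ("Kyiv", "this is occupation and terrorism"),
    ("Kyiv", "revolution of dignity"),
]

counts = Counter((region, label_tweet(text)) for region, text in tweets)
for region in {r for r, _ in tweets}:
    pro, anti = counts[(region, "pro")], counts[(region, "anti")]
    total = pro + anti
    if total:
        print(f"{region}: {100 * pro / total:.0f}% of labeled tweets pro-Kremlin")
```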

A new source of intelligence to gauge the potential support for foreign military intervention

Though there was some pro-Kremlin sentiment expressed on Twitter in every oblast, the spatial visualization of the data showed Crimea as an outlier based on its high pro-Kremlin percentage.

"If Russian strategists were likely considering expansion beyond Crimea, they would have been able to use social media information to assess, with a great deal of precision and in real time, the reception that they would likely receive," Driscoll and his co-author wrote. "Our data shows that further expansion beyond Crimea could have resulted in an ethnic bloodbath."

Though other studies have focused on how polarized media "bubbles" allow conflicting coverage of the same events, the departure in the Post-Soviet Affairs paper is its emphasis on the potential for social media data to be repurposed for crisis decision-making.

"Our conjecture is that these planners would have been eager for information on social attitudes of Ukrainians," said Driscoll "Our claim is not that social media is the only way to get this information - the Kremlin has lots of eyes on the ground there - but it does provide a granular picture that analysts from different countries can observe in real time, even from a great distance."

It is easy to imagine military crisis-bargaining applications of these methods. Mainland Chinese analysts may be hungry for real-time updates on Taiwanese public opinion. U.S. analysts may be interested in the opinions of youth groups in Iran. Social media is a new frontier in this space.

Driscoll and his co-author concluded, "We favor the analogy between information warfare techniques and airplanes at the start of the First World War. Conventional militaries are just beginning to explore the ways that emergent information technologies can shape battlefields. As techniques for real-time data mining become commodified, they will be integrated into best practices for counterinsurgency and, more generally, into military planning. This paper has shown one way in which they could have been useful."

Credit: 
University of California - San Diego

Warmer, drier, browner

The western United States has experienced such intense droughts over the past decade that technical descriptions are becoming inadequate. In many places, conditions are rocketing past "severe," through "extreme," all the way to "exceptional drought."

The 2018 Four Corners drought -- centered on the junction between Arizona, Utah, Colorado and New Mexico -- put the region deep in the red. An abnormally hot spring and summer indicated that climate change was clearly at work, but that was about as much as most people could say of the situation at the time.

Climate scientists from UC Santa Barbara's geography department have now distilled just how strong an effect human-induced warming had on that event. Their findings appear in the Bulletin of the American Meteorological Society's annual issue dedicated to explaining extreme weather events during the previous year. The team found that 60 to 80% of the region's increased potential for evaporation stemmed from human-induced warming alone, which caused additional warming of 2 degrees Celsius.

"I was really stunned at how big an effect we found with just a 2-degree warming," said Chris Funk, director of the university's Climate Hazards Center, a U.S. Geological Survey scientist and one of the study's coauthors.

"The results were much more pronounced than we had expected," added lead author Emily Williams, a doctoral student in the Department of Geography. Her work focuses on end-to-end attribution, which determines exactly how much a specific natural event was exacerbated by climate change, and then links those changes to both the sources of greenhouse gasses and the impacts of warming on people and ecosystems. It's a challenging task that requires sophisticated climate models, comprehensive databases and the development of exciting new science.

Williams wanted to determine to what extent the Four Corners drought was exacerbated by climate change. To make the task more manageable, she limited the scope of her investigation to rising temperatures. Williams ran two simulations on leading climate models; the first calculated temperatures under the climate regime predating the industrial revolution, while the second did so under current, human-induced climate change. Subtracting the average temperature of the pre-industrial simulation from that of the present-day simulation gave her the warming she could attribute to human influence.
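
The attribution step itself is simple arithmetic: average the regional temperature over each set of simulations and take the difference. A schematic sketch with invented ensemble values (real attribution work uses full climate-model ensembles, not three numbers):

```python
import statistics

# Hypothetical regional-mean 2018 temperatures (deg C) from ensemble members.
preindustrial_runs = [19.1, 18.8, 19.0]   # counterfactual climate, no human forcing
actual_runs = [21.0, 20.9, 21.2]          # simulations with observed forcings

attributable_warming = statistics.mean(actual_runs) - statistics.mean(preindustrial_runs)
print(f"Warming attributable to human influence: {attributable_warming:.1f} deg C")
# Roughly 2 deg C, the order of magnitude quoted in the study.
```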

"We found that pretty much all of what made this the hottest year on record in that region was due to climate change," she said.

With that knowledge, Williams then set out to determine how higher temperatures affected aridity. For this, she looked at the local hydrology -- mainly snowpack and surface runoff -- and agropastoral conditions -- circumstances related to growing crops and raising livestock. She used the vegetation's greenness to judge the agropastoral conditions of the region.

Warmer air can hold more moisture than cold air. So, as temperatures rise, the air becomes thirstier, Williams explained. And thirstier air can suck more moisture out of the ground.

The difference between the amount of water the air can absorb and the amount the land can provide is what scientists call the vapor pressure deficit. When the land can supply more than the air can hold -- as when the air temperature drops to a predawn low -- you get condensation, like the early morning dew. On the other hand, when the air is thirstier than the amount of water the ground can provide, it pulls moisture from the earth, drying it out. Warmer air over dry soils will be thirstier, leading to more rapid browning of fields and fodder.
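
One common way to put numbers on that "thirst" is the Tetens approximation for saturation vapor pressure. The formula choice and the example values below are illustrative, not taken from the paper; they simply show how a couple of degrees of warming widens the deficit when humidity stays the same.

```python
import math

def saturation_vapor_pressure(temp_c):
    """Tetens approximation: saturation vapor pressure in kPa at temp_c (deg C)."""
    return 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))

def vapor_pressure_deficit(temp_c, relative_humidity):
    """VPD (kPa): how much more water the air could hold than it currently does."""
    e_sat = saturation_vapor_pressure(temp_c)
    e_actual = e_sat * relative_humidity
    return e_sat - e_actual

# Same 30% humidity, 2 deg C of warming: the air's "thirst" grows.
for t in (28.0, 30.0):
    print(f"{t:.0f} C, 30% RH -> VPD = {vapor_pressure_deficit(t, 0.30):.2f} kPa")
```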

The scientists used a statistical model to relate green vegetation to the vapor pressure deficit. Intuitively, warmer temperatures cause drier conditions, which turn the vegetation brown - and that is what the data showed. The results indicated that the landscape would have been about 20% greener that year in the absence of climate change. This browner vegetation translated into poor harvests and low-quality fodder, impacts associated with more than $3 billion in economic losses and disruptions in the lives and livelihoods of hundreds of thousands of Native Americans who were settled in reservations in the area.

2018 was a dry year for the entire Western U.S., but the Four Corners region was hit particularly hard. The worst of the drought sat squarely on the Navajo Nation reservation.

The team also looked at how much the region's snowpack was influenced by the higher temperatures. A robust snowpack ensures water availability late into the summer as it slowly melts away. Higher temperatures affect snowpack in two ways: They cause more precipitation to fall as rain rather than snow, and they make snow melt faster and earlier in the year, explained Shraddhanand Shukla, an associate researcher in the geography department and another of the paper's coauthors.

The scientists simulated snow conditions in the region with and without climate change while keeping the total precipitation constant. They found that the March snowpack would have been 20% greater in the absence of climate change, even though this was the lowest precipitation year on record for the area. They expect the effect would have been even more pronounced in a wetter year.

These findings are a conservative estimate of climate change's influence on the drought, according to Williams. For one, the study only considered the impact human-induced warming had on temperatures. Climate change may also have influenced the region's low rainfall.

What's more, there are strong feedback cycles between the atmosphere and the land, which the study left out. When the air soaks up all the available moisture in the soil, evaporation can no longer cool the ground. The result is a dramatic spike in ground temperature, which exacerbates the situation.

However, by keeping the methodology simple, the team made the task more manageable, the insights more understandable and the technique more transferable. They plan to apply the approach on more typical years in the Four Corners region, as well as to regions in Eastern Africa experiencing similar distress.

The researchers stressed the urgency of understanding these systems. "This isn't projected climate change, it's current," said Williams.

"There are things that are absolutely certain to happen due to climate change," Funk said. "And one thing that's absolutely certain is that saturation vapor pressure is going to go up."

"And this actually increases the intensity of both droughts and floods," he added.

When a region like the Four Corners experiences a drought, a warmer, thirstier atmosphere can wick away limited soil moisture more quickly. On the other hand, when a humid region, like Houston, experiences an extreme rainfall event, the warm, wet winds feeding it can hold more water, potentially causing catastrophic floods.

"The moment that everybody in the world understands this is the moment that we will probably start doing something about climate change," Funk said.

Credit: 
University of California - Santa Barbara

Astronomers detect large amounts of oxygen in ancient star's atmosphere

image: Artistic image of the supernova explosions of the first massive stars that formed in the Milky Way. The star J0815+4729 was formed from the material ejected by these first supernovae.

Image: 
Gabriel Pérez, SMM (IAC)

Maunakea, Hawaii - An international team of astronomers from the University of California San Diego, the Instituto de Astrofísica de Canarias (IAC), and the University of Cambridge have detected large amounts of oxygen in the atmosphere of one of the oldest and most elementally depleted stars known - a "primitive star" scientists call J0815+4729.

This new finding, which was made using W. M. Keck Observatory on Maunakea in Hawaii to analyze the chemical makeup of the ancient star, provides an important clue on how oxygen and other important elements were produced in the first generations of stars in the universe.

The results are published in the January 21, 2020 edition of The Astrophysical Journal Letters.

"This result is very exciting. It tells us about some of the earliest times in the universe by using stars in our cosmic back yard," said Keck Observatory Chief Scientist John O'Meara. "I look forward to seeing more measurements like this one so we can better understand the earliest seeding of oxygen and other elements throughout the young universe."

Oxygen is the third most abundant element in the universe after hydrogen and helium, and is essential for all forms of life on Earth, as the chemical basis of respiration and a building block of carbohydrates. It is also the main elemental component of the Earth's crust. However, oxygen didn't exist in the early universe; it is created through nuclear fusion reactions that occur deep inside the most massive stars, those with masses roughly 10 times the mass of the Sun or greater.

Tracing the early production of oxygen and other elements requires studying the oldest stars still in existence. J0815+4729 is one such star; it resides over 5,000 light years away toward the constellation Lynx.

"Stars like J0815+4729 are referred to as halo stars," explained UC San Diego astrophysicist Adam Burgasser, a co-author of the study. "This is due to their roughly spherical distribution around the Milky Way, as opposed to the more familiar flat disk of younger stars that include the Sun."

Halo stars like J0815+4729 are truly ancient stars, allowing astronomers a peek into element production early in the history of the universe.

The research team observed J0815+4729 using Keck Observatory's High-Resolution Echelle Spectrometer (HIRES) on the 10m Keck I telescope. The data, which required more than five hours of staring at the star over a single night, were used to measure the abundances of 16 chemical species in the star's atmosphere, including oxygen.

"The primitive composition of the star indicates that it was formed during the first hundreds of millions of years after the Big Bang, possibly from the material expelled from the first supernovae of the Milky Way," said Jonay González Hernández, Ramón y Cajal postdoctoral researcher and lead author of the study.

Keck Observatory's HIRES data of the star revealed a very unusual chemical composition. While it has relatively large amounts of carbon, nitrogen, and oxygen - approximately 10, 8, and 3 percent of the abundances measured in the Sun - other elements like calcium and iron have abundances around one millionth that of the Sun.
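
For context, astronomers usually quote such ratios on a logarithmic scale relative to the Sun, written with bracket notation. The snippet below is only the arithmetic implied by the article's "around one millionth"; the star's published value may differ slightly.

```python
import math

# Standard bracket notation: [Fe/H] = log10(N_Fe/N_H)_star - log10(N_Fe/N_H)_Sun.
# "Around one millionth of the solar iron abundance" then corresponds to:
iron_ratio_vs_sun = 1e-6   # assumption taken directly from the article's wording
print(f"[Fe/H] = {math.log10(iron_ratio_vs_sun):.0f}")   # prints -6
```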

"Only a few such stars are known in the halo of our galaxy, but none have such an enormous amount of carbon, nitrogen, and oxygen compared to their iron content," said David Aguado, a postdoctoral researcher at the University of Cambridge and co-author of the study.

The search for stars of this type involves dedicated projects that sift through hundreds of thousands of stellar spectra to uncover a few rare sources like J0815+4729, then follow-up observations to measure their chemical composition. This star was first identified in data obtained with the Sloan Digital Sky Survey (SDSS), then characterized by the IAC team in 2017 using the Gran Telescopio Canarias in La Palma, Spain.

"Thirty years ago, we started at the IAC to study the presence of oxygen in the oldest stars of the Galaxy; those results had already indicated that this element was produced enormously in the first generations of supernovae. However, we could not imagine that we would find a case of enrichment as spectacular as that of this star," noted Rafael Rebolo, IAC director and co-author of the study.

Credit: 
W. M. Keck Observatory

Technique reveals whether models of patient risk are accurate

CAMBRIDGE, MA -- After a patient has a heart attack or stroke, doctors often use risk models to help guide their treatment. These models can calculate a patient's risk of dying based on factors such as the patient's age, symptoms, and other characteristics.

While these models are useful in most cases, they do not make accurate predictions for many patients, which can lead doctors to choose ineffective or unnecessarily risky treatments for some patients.

"Every risk model is evaluated on some dataset of patients, and even if it has high accuracy, it is never 100 percent accurate in practice," says Collin Stultz, a professor of electrical engineering and computer science at MIT and a cardiologist at Massachusetts General Hospital. "There are going to be some patients for which the model will get the wrong answer, and that can be disastrous."

Stultz and his colleagues from MIT, the MIT-IBM AI Lab, and the University of Massachusetts Medical School have now developed a method that allows them to determine whether a particular model's results can be trusted for a given patient. This could help guide doctors to choose better treatments for those patients, the researchers say.

Stultz, who is also a professor of health sciences and technology, a member of MIT's Institute for Medical Engineering and Sciences and Research Laboratory of Electronics, and an associate member of the Computer Science and Artificial Intelligence Laboratory, is the senior author of the new study. MIT graduate student Paul Myers is the lead author of the paper, which appears today in Digital Medicine.

Modeling risk

Computer models that can predict a patient's risk of harmful events, including death, are used widely in medicine. These models are often created by training machine-learning algorithms to analyze patient datasets that include a variety of information about the patients, including their health outcomes.

While these models have high overall accuracy, "very little thought has gone into identifying when a model is likely to fail," Stultz says. "We are trying to create a shift in the way that people think about these machine-learning models. Thinking about when to apply a model is really important because the consequence of being wrong can be fatal."

For instance, a patient at high risk who is misclassified would not receive sufficiently aggressive treatment, while a low-risk patient inaccurately determined to be at high risk could receive unnecessary, potentially harmful interventions.

To illustrate how the method works, the researchers chose to focus on a widely used risk model called the GRACE risk score, but the technique can be applied to nearly any type of risk model. GRACE, which stands for Global Registry of Acute Coronary Events, is a large dataset that was used to develop a risk model that evaluates a patient's risk of death within six months after suffering an acute coronary syndrome (a condition caused by decreased blood flow to the heart). The resulting risk assessment is based on age, blood pressure, heart rate, and other readily available clinical features.

The researchers' new technique generates an "unreliability score" that ranges from 0 to 1. For a given risk-model prediction, the higher the score, the more unreliable that prediction. The unreliability score is based on a comparison of the risk prediction generated by a particular model, such as the GRACE risk score, with the prediction produced by a different model that was trained on the same dataset. If the models produce different results, then it is likely that the risk-model prediction for that patient is not reliable, Stultz says.
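
To make the disagreement idea concrete, here is a minimal sketch: train two different models on the same (here synthetic) data and score a patient by how far their risk predictions diverge. The models, data and 0-to-1 measure are illustrative assumptions, not the formula derived in the paper, which avoids retraining altogether.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for a clinical dataset: 4 features and a 6-month mortality label.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 1).astype(int)

# Two different models trained on the same dataset (cf. GRACE vs. an alternative model).
model_a = LogisticRegression().fit(X, y)
model_b = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def unreliability(x):
    """0-1 score: absolute disagreement between the two models' risk predictions."""
    p_a = model_a.predict_proba(x.reshape(1, -1))[0, 1]
    p_b = model_b.predict_proba(x.reshape(1, -1))[0, 1]
    return abs(p_a - p_b)

patient = X[0]
print(f"Unreliability score for this patient: {unreliability(patient):.2f}")
```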

"What we show in this paper is, if you look at patients who have the highest unreliability scores -- in the top 1 percent -- the risk prediction for that patient yields the same information as flipping a coin," Stultz says. "For those patients, the GRACE score cannot discriminate between those who die and those who don't. It's completely useless for those patients."

The researchers' findings also suggested that the patients for whom the models don't work well tend to be older and to have a higher incidence of cardiac risk factors.

One significant advantage of the method is that the researchers derived a formula that tells how much two predictions would disagree, without having to build a completely new model based on the original dataset.

"You don't need access to the training dataset itself in order to compute this unreliability measurement, and that's important because there are privacy issues that prevent these clinical datasets from being widely accessible to different people," Stultz says.

Retraining the model

The researchers are now designing a user interface that doctors could use to evaluate whether a given patient's GRACE score is reliable. In the longer term, they also hope to improve the reliability of risk models by making it easier to retrain models on data that include more patients who are similar to the patient being diagnosed.

"If the model is simple enough, then retraining a model can be fast. You could imagine a whole suite of software integrated into the electronic health record that would automatically tell you whether a particular risk score is appropriate for a given patient, and then try to do things on the fly, like retrain new models that might be more appropriate," Stultz says.

Credit: 
Massachusetts Institute of Technology

Preventing metastasis by stopping cancer cells from making fat

image: Olivier Feron, a UCLouvain researcher, studies how cancer spreads through the body via metastasis.
His major discovery was that cancer cells multiply by using lipids as food. His latest discovery, published in the scientific journal Nature Communications, is that lipid storage promotes cancer invasiveness.
A new drug currently being tested to treat obesity may also help fight metastasis.

Image: 
© University of Louvain

Olivier Feron, a researcher at the University of Louvain (UCLouvain) Institute of Experimental and Clinical Research, seeks to understand how metastases form from a tumour. He already demonstrated that the most aggressive cancer cells use significant amounts of lipids as energy sources. Now Prof. Feron has discovered that cancer cells store lipids in small intracellular vesicles called 'lipid droplets'. Cancer cells loaded with lipids are more invasive and therefore more likely to form metastases. Prof. Feron and his team sought to identify the link between lipid storage and metastatisation.

They identified a factor called TGF-beta2 as the switch responsible for both lipid storage and the aggressive nature of cancer cells. Moreover, it appeared that the two processes were mutually reinforcing. In fact, by accumulating lipids, more precisely fatty acids, cancer cells build up energy reserves, which they can then use as needed throughout their metastatic course.

It was already known that the acidity found in tumours promotes cancer cells' invasion of healthy tissue. The process requires the detachment of the cancer cell from its original anchor site and the ability to survive under such conditions (which are fatal to healthy cells).

The new finding: UCLouvain researchers demonstrated that this acidity promotes, via the same TGF-beta2 'switch', the invasive potential and formation of lipid droplets. These provide the invasive cells with the energy they need to move around and withstand the harsh conditions encountered during the metastatisation process. It's like a mountaineer who takes the food and equipment necessary to reach the summit in spite of complex weather conditions.

Concretely, this UCLouvain research opens up new therapeutic avenues thanks to the discovery of the different actors involved in metastasis. These actors can thus be targeted and combated. Prof. Feron and his team show that it is possible to reduce tumour invasiveness and prevent metastases using specific inhibitors of TGF-beta2 expression but also compounds capable of blocking the transport of fatty acids or the formation of triglycerides. Among the latter are new drugs that are being evaluated to treat obesity. Their indications could therefore be rapidly extended to counter the development of metastases, which is the major cause of death among cancer patients.

The findings are published in the prestigious scientific journal Nature Communications. The research was carried out with funding from the Belgian Cancer Foundation, the Belgian National Fund for Scientific Research, the Télévie telethon, and a Wallonia Brussels Federation joint research grant (ARC).

Credit: 
Université catholique de Louvain

Bending with the wind, coral spawning linked to ocean environment

image: Mass spawning of the reef-building coral Acropora tenuis.

Image: 
Masayuki Hatta

During the early summer, corals simultaneously release tiny balls composed of sperm and eggs, known as bundles, that float to the ocean surface. Here the bundles open, allowing the sperm to fertilize the eggs, which eventually settle on the seafloor and become new coral on the reef.

This spectacular annual event is known as "mass-spawning," and usually occurs at night. Although the occurrence of mass-spawning happens around the time of a full moon, it is difficult to predict precisely when. Now, a research team from Tohoku University, Ochanomizu University, and the National Institute for Basic Biology have utilized modeling analysis to indicate that environmental factors act as a determinant in the timing of mass spawning.

"Coral spawning is a complex phenomenon," says Shinchiro Maruyama, an assistant professor at Tohoku University. "It is too complicated to model all the factors involved in a spawning event, so we decided to focus on which day they spawn. Although we know that they spawn a few hours after the dusk, the days can differ according to regions, and even within the same reef."

Maruyama and his team utilized a multidisciplinary approach to address the role of environmental factors, such as temperature, wind speed and sunlight, in determining the night of spawning, teaming up with specialists in ecology, statistics, physiology, developmental biology, and evolutionary biology. Drawing upon field research, satellite data and literature reviews, they discovered that corals changed the nights of spawning according to the environmental conditions for a period of time before 'the big night.'

Maruyama adds that, "Such fine-tuning for the night of spawning might be advantageous for corals to maximize their chances of meeting future partners in the vast expanses of the ocean."

Coral reefs are a natural treasure of biodiversity in the ocean, and understanding mass-spawning gives us further insight into their behavior. Identifying that environmental changes play a role in the timing of mass spawning provides a building block for scientists to address coral behavior going forward.

Credit: 
Tohoku University

First treatment for pain using human stem cells a success

Researchers at the University of Sydney have used human stem cells to make pain-killing neurons that provide lasting relief in mice, without side effects, in a single treatment.

The next step is to perform extensive safety tests in rodents and pigs, and then move to human patients suffering chronic pain within the next five years.

If the tests are successful in humans, it could be a major breakthrough in the development of new non-opioid, non-addictive pain management strategies for patients, the researchers said.

"We are already moving towards testing in humans," said Associate Professor Greg Neely, a leader in pain research at the Charles Perkins Centre and the School of Life and Environmental Sciences.

"Nerve injury can lead to devastating neuropathic pain and for the majority of patients there are no effective therapies. This breakthrough means for some of these patients, we could make pain-killing transplants from their own cells, and the cells can then reverse the underlying cause of pain."

In the study, published today in the peer-reviewed journal Pain, the team used human induced pluripotent stem cells (iPSC) derived from bone marrow to make pain-killing cells in the lab, then put them into the spinal cord of mice with serious neuropathic pain. The development of iPSC won a Nobel Prize in 2012.

"Remarkably, the stem-cell neurons promoted lasting pain relief without side effects," co-senior author Dr Leslie Caron said. "It means transplant therapy could be an effective and long-lasting treatment for neuropathic pain. It is very exciting."

John Manion, a PhD student and lead author of the paper said: "Because we can pick where we put our pain-killing neurons, we can target only the parts of the body that are in pain. This means our approach can have fewer side effects."

The stem cells used were derived from adult blood samples.

The total cost of chronic pain in Australia in 2018 was estimated to be $139.3 billion.

Credit: 
University of Sydney

Turtle tracking reveals key feeding grounds

video: Julia Haywood explains the study

Image: 
Muddy Duck Productions

Loggerhead turtles feed in the same places year after year - meaning key locations should be protected, researchers say.

University of Exeter scientists used satellite tracking and "stable isotope ratios" - a chemical signature also used by forensic scientists - to track female loggerheads from two rookeries (nesting beaches) in the Mediterranean.

The study identified three main feeding areas - the Adriatic region, the Tunisian Plateau and the eastern Mediterranean.

"We show where the majority of nesting female turtles spend the most of their life, meaning that in addition to their nesting beaches we can also protect important marine habitats where they feed," said lead author Julia Haywood, of the University of Exeter.

"Nearly half of the Cyprus nesting population feeds on the Tunisian Plateau, an area known to have some of the highest turtle bycatch (accidental catch by humans fishing) in the world.

"Therefore, we support recommendations that this area should be conserved."

The study tracked turtles from rookeries in Greece and Cyprus using data from 1993 to 2018.

"By studying these turtles for so long we show these females stay in the same feeding area over decades, which means if these habitats are damaged or have high fishing activities then the turtles will unfortunately not move," Haywood said.

"This work shows the importance of combining satellite tracking and stable isotopes to help understand these elusive animals."

The work was carried out in collaboration with the local conservation groups: the Society for the Protection of Turtles in North Cyprus (SPOT) and ARCHELON, the Sea Turtle Protection Society of Greece.

Robin Snape, of SPOT, said: "Mediterranean loggerheads lay their nests in the European Union, just at the time of year when hundreds of thousands of European tourists arrive on holiday.

"For the rest of the year, many female loggerheads are growing and foraging in the waters off Africa, where mortality in industrialised fisheries and even direct consumption of turtles are still big concerns.

"Each year at least 10,000 turtles die as accidental bycatch off North Africa, while illegal trade in turtle meat persists.

"This research allows prioritisation of conservation resources to specific threats in specific areas."

Credit: 
University of Exeter

Skin-to-skin contact does not improve interaction between mother and preterm infant

Following a premature birth it is important that the parents and the infant quickly establish a good relationship. Researchers at Linköping University have studied the relationship between mothers and infants who have continuous skin-to-skin contact during the entire period from birth to discharge from the hospital. The results show that continuous skin-to-skin contact does not lead to better interaction between the mother and the infant. The study is published in the scientific journal Advances in Neonatal Care.

Every year some 15 million infants worldwide are born prematurely. Because the infants often require intensive care, it is common that they are separated from their parents, which can negatively affect the attachment between mother and infant.

For the parents, this separation can result in guilt and a sense of emptiness at not being able to be close to their newborn child. For the infant, losing closeness to the parents is one of the largest stress factors in early life. But skin-to-skin care against the parent's chest, instead of care in an incubator, can reduce stress.

"Skin-to-skin contact between parent and infant has proved to have positive effects for the infant's development - but there are no clear results regarding the effect on the interaction between mother and infant. Which is why we wanted to study this", says Charlotte Sahlén Helmer, doctoral student at Linköping University, Sweden.

In the study, the researchers investigated the interaction between mothers and infants born prematurely - between weeks 32 and 36. The study was carried out at two Swedish hospitals, where the parents are able to be with their infant around the clock. Thirty-one families took part. The families were split into two groups: one where the mother was to give the infant continuous skin-to-skin contact from birth until discharge, and one where the mother was to give the infant as much or as little skin-to-skin contact as she wanted to, or was able to.

After four months, the researchers followed up how the mothers interacted with their preterm infants. They found no significant differences in interaction between the continuous and the intermittent skin-to-skin contact groups. As regards the mother's attachment to the infant, the researchers could not see that skin-to-skin contact had any effect in terms of e.g. the mother's acceptance of or sensitivity to the infant. Nor was there a correlation between the number of hours of skin-to-skin contact and the quality of the interaction.

"Some people say that skin-to-skin contact automatically results in good attachment between mother and infant. Our study shows that this may not be the case. It may be a relief for the parents who are not able to keep their infant against their skin around the clock, to know that they can still have good interaction. But these results must be followed up with further studies", says Charlotte Sahlén Helmer.

The study is part of a larger project investigating the effects of skin-to-skin contact in preterm infants.

Credit: 
Linköping University

Global warming could have a negative impact on biodiversity generation processes

image: Carex.

Image: 
©pablodeolavide

In the current climate change scenario, an international team led by researchers from Pablo de Olavide University (UPO) and the Autonomous University of Madrid (UAM) has carried out research that suggests global warming could have a negative impact on the processes that generate biodiversity. This is one of the conclusions of a study, recently published in the Journal of Systematics and Evolution, that focuses on the causes of the evolutionary success of Carex, one of the world's three largest genera of flowering plants. The results suggest that this success is linked to the relatively cold climate of the planet during the last 10 million years, which favoured the colonization of new territories and ecosystems.

Carex is a group of herbs belonging to the sedge family (Cyperaceae), a family that includes well-known species such as papyrus and tiger nut. More than 2000 Carex species are known throughout the world and inhabit a wide variety of ecosystems, from the poles to the tropics and from the coasts to the highest mountains, although always linked to areas with temperate and cold climates. In many regions, especially in the northern hemisphere, their species are part of certain types of dominant vegetation and play a fundamental ecological role in habitats as varied as tundra, grasslands, wetlands, peat bogs, river and lake borders, or forest understories. In addition, these plants are an important food source for numerous waterfowl and herbivorous mammals, and some of them exhibit medicinal or nutritional properties exploited by humans.

The study was focused on the analysis of the causes for the enormous diversity of Carex species, concluding that climate cooling was a key factor behind their speciation. "The study is the first to deal with global distribution patterns and diversification of a megadiverse genus of plants and suggests that not only is climate warming causing the extinction of species, but also could negatively affect the processes that generate them," says Santiago Martín-Bravo, researcher at UPO's Botany Area and one of the study's main co-authors.

In this study, genetic and fossil information was combined to unravel the causes of Carex's global diversification. The work shows that this genus originated in Asia, from where it has been able to colonize regions around the world and remarkably different ecological niches. During this process, Carex has been clearly favoured by the cold global climate sustained for the past 10 million years. This is evidenced by the concurrence of regional cooling events, such as the freezing of Antarctica or the Pleistocene glaciations, with the massive appearance of Carex species in regions affected by these climatic changes, e.g. North America or New Zealand.

The conclusions of this work are of broad general interest to understand how, when and why species are generated, as well as the causes of their uneven distribution, and especially the role of the global climate as a driver of the genesis of biodiversity. "These questions are particularly significant in the current context of climate crisis and mass extinction of species, which emphasizes the need to know and understand how nature responds to the climate if we are to preserve and manage it in a sustainable way," argues Pedro Jiménez-Mejías, researcher of UAM's Botany Area and another of the main co-authors of the work.

The study featured on the cover of the November issue of the Journal of Systematics and Evolution. It represents the culmination of more than a decade's work initiated with Jiménez-Mejías' postdoctoral project, developed in the United States, and has enabled international collaboration between a group of evolutionary biologists and botanists from institutions in ten countries, among which Spain (with researchers from the universities Pablo de Olavide, Autonomous of Madrid and Seville, as well as the Royal Botanic Gardens in Madrid) and the United States stand out.

Credit: 
Universidad Pablo de Olavide UPO

Multimorbidity leads to general practitioners suffering burnout

There is a difference between seeing a patient with a catalogue of two or more serious chronic diseases and a healthy patient who just needs a prescription to treat a case of cystitis.

A new Danish study from the Research Unit for General Practice at Aarhus University, Denmark, shows that having many patients with multiple chronic diseases - known as multimorbidity - places general practitioners under mental strain. And to such a degree that they risk burnout. What's more, other studies have shown that the patients with multimorbidity also receive worse treatment in the healthcare system.

The study builds on previous research showing that the number of patients with multimorbidity is increasing - and with it, the prevalence of GPs with symptoms of poor well-being and a risk of burnout.

This is the conclusion of Anette Fischer Pedersen, who is senior researcher at the Research Unit for General Practice and an associate professor at the Department of Clinical Medicine at Aarhus University. She is heading the current study, which has just been published in the scientific journal British Journal of General Practice.

"One of our findings in the study is that among the quarter of general practitioners who had the fewest number of patients with multimorbidity in 2016, seven per cent had what we call full burnout syndrome. This contrasts with the figure of twelve per cent among the quarter who had the highest number of patients with multimorbidity.

"She believes that the result shows the importance of looking closely at the working conditions of general practitioners.

"As things are today in the context of general practitioners' time and remuneration, there is often no difference between treating a patient with a long and complex history of illness and a normally healthy patient who is there to get treatment for an uncomplicated illness. This puts general practitioners under a lot of pressure," says Anette Fischer Pedersen.

One of the focal points of the study is that it documents the correlation between the number of patients with multimorbidity and the well-being of GPs - or lack of well-being, which was pronounced when the proportion of patients with multimorbidity was high.

According to Anette Fischer Pedersen, the challenge cannot only be solved by general practitioners allocating more time to complex patients within the framework that general practice works under today.

"It's no secret that there are areas in Denmark where there's a lower level of public health than in others. This may mean it will be difficult to get GPs to work in areas where the need for competent medical treatment is highest, simply because working there is an unattractive proposition. If we're determined to work towards reducing the risk of burnout among general practitioners, we will also help to prevent inequality in health," she says.

Since 2004, the Research Unit for General Practice at the Department of Public Health at Aarhus University has carried out a number of research studies into burnout among general practitioners. In 2016, the researchers were responsible for a major study in collaboration with the Organization of General Practitioners in Denmark, in which the degree of burnout among general practitioners was measured using the Maslach Burnout Inventory method. At the time, 50.7 per cent of Denmark's 3,350 general practitioners participated in the study.

Credit: 
Aarhus University