
Fishery length, angler effort: How they relate

image: Estimates of harvest of red snapper (Lutjanus campechanus) by private recreational anglers who used six public access boat launches on the coast of Alabama during the federal red snapper seasons from 2012 through 2017. Standard errors of the mean are given in parentheses after mean values. In 2017, anglers could fish two seasons, one long and one short.

Image: 
Fisheries Ecology Lab

A new study suggests reducing the number of fishing days in a season doesn't reduce catch as much as some would predict. The publication, "Compression and relaxation of fishing effort in response to changes in length of fishing season for red snapper (Lutjanus campechanus) in the northern Gulf of Mexico," appeared in the November 2018 issue of NOAA's Fishery Bulletin.

In the publication, Dr. Sean Powers and Kevin Anson compared fishery season lengths and angler response from 2012 to 2017.

Dr. Powers is a Senior Marine Scientist at the Dauphin Island Sea Lab and chair of the Department of Marine Sciences at the University of South Alabama. Anson works with the Alabama Department of Conservation and Natural Resources as the Chief of Fisheries in the Marine Resources Division.

Recreational fishing of red snapper is a hot topic along coastal Alabama. In an effort to rebuild stocks, agency managers reduced the number of days in a red snapper fishing season during the last decade. This is a common practice in the fisheries industry when a stock is labeled as overfished.

The belief is the shortened fishery window will decrease the harvest which will allow the stock to rebuild over time. Until now, data referencing the recreational angler response to this method was lacking.

Powers and Anson compiled data assessing the effect of variable season length on the daily fishing effort between 2012 and 2017. The data produced from reviewing video observations measures the number of boat launches per day, the number of anglers per boat, and the number of anglers per day.

Since 2012, law enforcement has monitored six public boat launches in coastal Alabama which have the highest offshore fishing trip activity. During the federal recreational fishery for red snapper, video recordings are archived and available for analysis of angler effort. These recordings were used to create the dataset presented in this publication.

Fishing for Red Snapper

NOAA Fisheries began imposing restrictive regulations on the private recreational red snapper fishery nearly a decade ago, after this group routinely exceeded its share of the annual catch limit.

Each year, the annual catch limit is first split between the commercial and recreational sectors. The recreational sector is then divided into two sub-sectors: federal for-hire vessels and private recreational anglers, a group that includes for-hire vessels licensed only by the states.

The recreational red snapper season shrank from 194 days in 2007 to 11 days in 2016. Public frustration peaked in 2017 when the National Marine Fisheries Service initially set the season at just three days.

An appeal by state management agencies from all five Gulf of Mexico states to the Secretary of the U.S. Department of Commerce resulted in a second 2017 season of 39 days, with a few restrictions. The extended federal season was open on Fridays, Saturdays, Sundays, and federal holidays from June 16 to September 4. In exchange for the extension, most of the Gulf states agreed to close state waters outside the federal season for the remainder of 2017.

The two seasons of 2017 provided a unique opportunity for researchers to assess angler effort during two seasons of different lengths. Anglers had no expectation of a second season until days before the announcement, eliminating any anticipation of more days to fish for red snapper beyond the three originally set for the season.

Analyzing the Data

With video footage in hand and trained analysts in place, observation protocols were established based on season length and random sampling. Five-minute intervals were randomly chosen from footage recorded between 5 a.m. and 10:59 p.m. The number of intervals analyzed depended on the length of the season.

When watching the video, analysts counted boat launches and anglers per boat at each public boat ramp and categorized the fishing vessels based on size, design, and type of fishing gear. For a boat launch to be counted, the analyst had to observe the boat coming off the trailer during the 5-minute interval. Analysts could observe the boat outside the 5-minute interval to count the number of potential anglers.

Each video was viewed by two analysts; their observations were averaged and then converted into hourly estimates for each day.
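The averaging-and-scaling step can be sketched in a few lines of Python. This is an illustrative reconstruction only; the function name, numbers, and exact estimator below are assumptions, not taken from the publication:

```python
def hourly_estimate(counts_a, counts_b, interval_minutes=5):
    """Average two analysts' counts for randomly chosen 5-minute
    intervals, then scale the mean count to an hourly rate.
    Illustrative only; not the study's published estimator."""
    per_interval = [(a + b) / 2 for a, b in zip(counts_a, counts_b)]
    mean_count = sum(per_interval) / len(per_interval)
    return mean_count * (60 / interval_minutes)

# e.g., two analysts counting boat launches in two sampled intervals:
# 3 launches per interval on average -> 36 launches per hour
rate = hourly_estimate([2, 4], [4, 2])
```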

Angler Effort: In Conclusion

The analyses recorded in this publication indicate that rebuilding a fishery can become more challenging when the seasons are shortened. During a compressed season, anglers will increase daily effort. The team of analysts observed the highest daily effort during the shortest season on record of three days in 2017.

The last-minute addition of the long season, 39 days, in 2017 offered a chance to see if angler effort would relax with more days available. Daily effort did decrease in both the number of boat launches and the number of anglers per day, as the table above shows. Analysts also found that higher wind speeds could decrease daily effort.

In short, the results of this study indicate that as annual catch limits increase and longer seasons are warranted, effort compression may relax, justifying even longer seasons.

A longer season would also improve angler safety compared with a short season, since anglers would feel less pressure to fish on bad-weather days.

Credit: 
Dauphin Island Sea Lab

Hypoxic dead zones found in urban streams, not just at the coast

image: A new study finds that hypoxic dead zones, where dissolved oxygen levels in water drop so low that fish and other animals suffocate, occur in urban streams such as this one in Raleigh, N.C.

Image: 
Duke Univ.

DURHAM, N.C. -- Hypoxic dead zones, which occur when dissolved oxygen levels in water drop so low that fish and other aquatic animals living there suffocate, are well-documented problems in many coastal waters.

Now, a new Duke University-led study reveals they also occur in freshwater urban streams.

"We were surprised to find these dead zones are happening in our own backyards, not just in rivers and coastal waters downstream of major point sources of nutrient pollution," said study leader Joanna Blaszczak, a 2018 doctoral graduate of Duke's Nicholas School of the Environment.

Blaszczak and her colleagues published their peer-reviewed study Dec. 3 in the journal Limnology & Oceanography.

To conduct the study, they measured dissolved oxygen concentrations, light levels, water chemistry and stream flow in six streams draining urban watersheds in Durham and Raleigh, N.C., from 2015 to 2017.

They used the data to model the growth of algae and oxygen-consuming bacteria in the streams and examine the frequency at which dissolved oxygen concentrations dropped below two milligrams per liter -- the danger point for fish and other aquatic organisms.
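The threshold check at the heart of that frequency analysis is simple to express. Here is a minimal sketch of estimating how often readings fall below the 2 mg/L danger point; the function name and sample values are hypothetical, not the study's actual model:

```python
HYPOXIA_THRESHOLD_MG_L = 2.0  # dissolved oxygen danger point for aquatic life

def hypoxia_frequency(do_readings, threshold=HYPOXIA_THRESHOLD_MG_L):
    """Fraction of dissolved-oxygen readings below the hypoxia threshold."""
    below = sum(1 for r in do_readings if r < threshold)
    return below / len(do_readings)

# e.g., four invented readings in mg/L, two of them hypoxic -> 0.5
freq = hypoxia_frequency([1.5, 3.0, 1.0, 5.0])
```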

"Streams draining developed areas are subject to intense, erosive storm flows when roads and stormwater pipes rapidly route runoff into streams during storms, without allowing the water to infiltrate into the soil," Blaszczak said.

"We found that erosion caused by these intense flows changed the shape of some stream channels to such an extent that water essentially stopped flowing in them during late summer. They became a series of pools containing high levels of nutrient runoff and organic matter, including nitrogen from leaking sewer pipes, fertilizer and pet waste."

The elevated nutrient levels spurred greater consumption of dissolved oxygen by bacteria in the water, causing the pools to become hypoxic until the next storm flushed them out.

Some streams were found to be more vulnerable than others, depending on their underlying geology.

"Channels that are more susceptible to erosion can become impounded by newly exposed bedrock outcroppings and culverts, leading to the formation of the between-storm pools that are so prone to hypoxia," Blaszczak explained.

"We found that growth rates of algae that support stream food webs were slower in streams with more frequent intense storm flows. Together with the occurrence of hypoxia, this paints a bleak and stressful picture for freshwater organisms that are trying to survive in these urban streams," she said.

While the study was conducted only in small streams draining urban watersheds, its findings are broadly applicable, Blaszczak noted, because pools are ubiquitous features of rivers, made even more so by the long-term legacies of dam building and dam removal.

"Hypoxia is not commonly assumed to occur in streams and rivers because of stream flow, which typically moves water fast enough to prevent the drawdown of dissolved oxygen by bacteria to hypoxic levels," she said. "However, dam building and other human alterations that stop the flow of water make these freshwater ecosystems particularly vulnerable to hypoxia with negative implications for biodiversity, especially in rivers already burdened with high nutrient pollution."

Credit: 
Duke University

Racial inequity among adolescents receiving flu vaccine

image: This is co-author and FSU doctoral student Benjamin Dowd-Arrow.

Image: 
(FSU Photo/Bruce Palmer)

Black adolescents living in the United States tend to receive the influenza vaccine at significantly lower rates than their white and Hispanic counterparts, according to Florida State University researchers.

A new study, led by former FSU graduate student Noah Webb, along with current graduate student Benjamin Dowd-Arrow and Associate Professors of Sociology Miles Taylor and Amy Burdette, was recently published in Public Health Reports.

"Our findings are important because black adolescents and young adults consistently have worse health profiles than white and Hispanic adolescents and young adults," Dowd-Arrow said. "The black population is also more likely to reside in multigenerational homes, where there is a very real threat of unvaccinated teenagers spreading the flu to unvaccinated children and grandparents."

Although disparities exist among the three racial/ethnic groups examined, the team also identified low influenza vaccination rates in adolescents across the board when compared with other age groups.

"Our research highlights that we're not doing enough for any group," he said. "We should still be trying to address all adolescents out there because they consistently have the lowest vaccination rates among children 18 and under in the United States."

In the paper, the scholars note that recent research suggests achieving an 80 percent increase in influenza vaccination among children and adolescents would likely result in a 91 percent reduction in the total number of influenza illness cases on a population-wide basis.

"Vaccinating more adolescents could strengthen herd immunity, which could ultimately protect vulnerable populations," he said.

Researchers used a study sample of 117,273 adolescents, ages 13 to 17, analyzing provider-reported vaccination histories from 2010 to 2016 in the teen portion of the National Immunization Survey.

"Since the passing of the Affordable Care Act in 2010, we wondered if increased access to health care and preventive health services would increase, reduce or even eliminate flu vaccination disparities by race/ethnicity," Dowd-Arrow said. "We found that disparities between white and Hispanic adolescents have waned over time, but disparities between white and black adolescents have emerged in recent years."

Compared with white adolescents, Hispanic adolescents had higher odds of vaccination, while black adolescents had lower odds.
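"Odds" here carries its statistical meaning, p / (1 - p). A small sketch of how an odds ratio compares one group's vaccination odds with a reference group's; the rates in the example are invented for illustration and are not the study's figures:

```python
def odds(p):
    """Convert a probability to odds, p / (1 - p)."""
    return p / (1 - p)

def odds_ratio(p_group, p_reference):
    """Odds ratio > 1 means higher odds of vaccination than the
    reference group; < 1 means lower odds."""
    return odds(p_group) / odds(p_reference)

# e.g., invented rates: 60% vaccinated vs. a 50% reference group
example_or = odds_ratio(0.6, 0.5)  # roughly 1.5
```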

"We found that, after controlling for key demographic characteristics, Hispanics had higher influenza vaccination rates than white adolescents for much of the study period," Dowd-Arrow said. "However, that advantage tapered roughly midway through the study period and flu vaccine rates among white and Hispanic adolescents became similar."

Researchers also found that although the rates varied slightly during the initial study period, a disparity in the rates began emerging in 2014.

The 2014-2015 flu season marked one of the most severe on record, with about 710,000 hospitalizations and 80,000 flu-related deaths.

"It was also a time when Congress began to cut access and states refrained from expanding Medicaid," he said.

By 2016, black adolescents received influenza vaccinations at significantly lower rates than their white counterparts.

"The people most affected by the highest rates of illness and death are also the people who are most likely to be affected by poverty," Dowd-Arrow said. "The consequences of not getting vaccinated further marginalize and place a burden on people who really can't afford that."

Researchers said targeted interventions are needed to improve influenza vaccination rates and reduce racial/ethnic disparities in adolescent vaccination coverage.

"Parents are hesitant to vaccinate their children and adolescents because of a lack of information, concerns about side effects, or lack of access to health care due to cost or inadequate transportation," Dowd-Arrow said. "These are areas that, if addressed by public health officials, could ultimately have great public health as well as economic impacts."

Scholars suggest future research should examine variations among Hispanic adolescents, such as Cuban or Puerto Rican youth. Another avenue of study could specifically focus on parental hesitations or concerns about vaccinations that lead to vaccination noncompliance.

Credit: 
Florida State University

Women having a heart attack wait longer than men to get help

Sophia Antipolis, 11 December 2018: Women are being urged to call an ambulance immediately if they have heart attack symptoms, following research showing they wait longer than men to get help. The study is published today in European Heart Journal: Acute Cardiovascular Care, a publication of the European Society of Cardiology (ESC).

Ischaemic heart disease is the leading cause of death in women and men. There is a misconception that heart attacks are a 'man's problem' but they are just as common in women. On average, women are about 8-10 years older than men when they have a heart attack, and they tend to experience different symptoms. But women benefit equally from fast treatment.

Study author Dr Matthias Meyer, a cardiologist at Triemli Hospital, Zurich, Switzerland, said women may wait longer due to the myth that heart attacks usually occur in men and because pain in the chest and left arm are the best known symptoms. "Women and men have a similar amount of pain during a heart attack, but the location may be different," he said. "People with pain in the chest and left arm are more likely to think it's a heart attack, and these are usual symptoms for men. Women often have back, shoulder, or stomach pain."

In heart attacks caused by acute blockage of an artery supplying blood to the heart, rapid reopening of the vessel by inserting a stent is critical. Faster restoration of blood flow translates into more salvaged heart muscle and less dead tissue, less subsequent heart failure, and a lower risk of death. During the last 10-15 years, multiple strategies have been employed within heart attack treatment networks to reduce the time delay between symptoms and treatment. This study investigated whether delays have reduced in women and men.

The study was a retrospective analysis of all 4,360 patients (967 women and 3,393 men) with acute ST-segment elevation myocardial infarction (STEMI) treated at Triemli Hospital, the second largest percutaneous coronary intervention (PCI) centre in Switzerland, between 2000 and 2016.

The primary outcomes of interest were changes in patient delay (the time from symptom onset to contact with a hospital, emergency medical service, or general practitioner), and system delay (the subsequent time until reopening of the vessel). The secondary outcome of interest was in-hospital mortality.

During the 16-year period, women and men had equal reductions in system delays. Dr Meyer said: "We found no gender difference in the timely delivery of care by health professionals, with both men and women receiving a stent more quickly after contacting the medical services than they did in the past."

However, patient delay decreased slightly in men over the 16-year period but did not change in women. Women waited approximately 37 minutes longer than men before contacting medical services. Clinical signs of persistent chest discomfort were associated with shorter patient delays in men but not in women. "Women having a heart attack seem to be less likely than men to attribute their symptoms to a condition that requires urgent treatment," said Dr Meyer.

In-hospital mortality was significantly higher in women (5.9%) than men (4.5%) during the study period. Delays were not associated with in-hospital mortality after correcting for multiple factors. Dr Meyer said: "As expected, the acute complications of a heart attack drive in-hospital mortality rather than delays. But we do know from previous studies that delays predict long-term mortality."

He concluded: "Every minute counts when you have a heart attack. Look out for moderate to severe discomfort including pain in the chest, throat, neck, back, stomach or shoulders that lasts for more than 15 minutes. It is often accompanied by nausea, cold sweat, weakness, shortness of breath, or fear."

Credit: 
European Society of Cardiology

NASA-NOAA satellite sees Tropical Cyclone Owen's remnants reorganizing

image: On Dec. 10, 2018 the VIIRS instrument aboard NASA-NOAA's Suomi NPP satellite captured a visible image of the remnants of Tropical Cyclone Owen in the Gulf of Carpentaria, just west of Queensland.

Image: 
NASA/NRL

The remnants of Tropical Cyclone Owen have been lingering in the Southern Pacific Ocean for days. On Dec. 10, the storm finally appeared more organized on satellite imagery, giving forecasters a strong indication that it may be reborn as a tropical cyclone. NASA-NOAA's Suomi NPP satellite passed over the Gulf of Carpentaria and saw the storm.

On Dec. 10 at 0100 UTC (Dec. 9 at 8 p.m. EST), Owen's remnants were located near 16.1 degrees south latitude and 144.6 degrees east longitude, approximately 282 nautical miles east-northeast of Mornington Island, Australia, and just west of the northern tip of Queensland.

On Dec. 10 at 0410 UTC (Dec. 9 at 11:10 p.m. EST), the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument aboard NASA-NOAA's Suomi NPP satellite analyzed the remnants. VIIRS revealed a consolidating low-level circulation center with deep convection building over the center.

The Joint Typhoon Warning Center noted: "Sea surface temperatures in the Gulf of Carpentaria are conducive for future tropical cyclone development. Multiple [computer forecast] models indicate development over the next 24-36 hours with a westward trajectory."

Credit: 
NASA/Goddard Space Flight Center

UNH researchers find unexpected impact of hurricanes on Puerto Rico's watershed

image: Aquatic sensors are used in streams like this one, Quebrada Sonadora, which is one of the study sites in the Luquillo Mountains of Puerto Rico where researchers monitored nitrate levels before and after Hurricanes Irma and Maria.

Image: 
William McDowell/UNH

DURHAM, N.H. - Researchers at the University of New Hampshire have found unprecedentedly high levels of nitrate, an essential plant nutrient, in streams and watersheds of Puerto Rico for a year after two consecutive major hurricanes in 2017. This high amount of nitrate may have important climate change implications that could harm forest recovery and threaten ecosystems along Puerto Rico's coastline by escalating algal blooms and dead zones.

"Nitrate is important for plant growth but this is a case where you can have too much of a good thing," said William McDowell, professor of environmental science at UNH. "The levels of nitrate we were seeing were unusually high. Over the last three decades, we've noticed elevated levels of nitrate right after a hurricane, but after these back-to-back major storms, the wheels came off the bus. We saw an increase in the nitrate levels that still has not fully recovered."

Researchers used aquatic sensors in streams in the tropical Luquillo Mountains of Puerto Rico to obtain readings every 15 minutes to follow weekly stream chemistry after both Hurricane Irma (August 2017) and Hurricane Maria (September 2017). They compared this new data to weekly stream chemistry results compiled over the last 35 years - the longest record of tropical stream chemistry in the world. As expected from past hurricanes, nitrate concentrations increased for a few months after each storm, peaking at around four months. The findings, reported at the 2018 fall meeting of the American Geophysical Union (AGU) in Washington, D.C., revealed that unlike past hurricanes, the increase was still evident nine months after Hurricane Maria and did not return to previous base levels. The base readings remained higher and each time it rained the nitrate levels spiked, even after small rainstorms, likely reflecting major biotic processes (leaf and tree decomposition and vegetation regrowth) that control nitrate fluxes.
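One simple way to express "has not returned to baseline" is the ratio of post-storm to pre-storm median concentrations. This sketch is illustrative only and is not the researchers' actual analysis; the values shown are invented:

```python
import statistics

def baseline_elevation(pre_storm, post_storm):
    """Ratio of post-storm to pre-storm median nitrate concentration.
    A ratio well above 1 suggests the baseline has shifted upward
    rather than recovering after the storm."""
    return statistics.median(post_storm) / statistics.median(pre_storm)

# e.g., invented weekly nitrate values (same units before and after):
# post-storm median is twice the pre-storm median
ratio = baseline_elevation([1.0, 1.2, 1.1], [2.0, 2.4, 2.2])
```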

"After Hurricanes Irma and Maria, there seems to be a "new normal" for the base level of nitrate," said McDowell. "If this continues and the mountain streams transport these higher levels of nitrate to the ocean it could disrupt the coastal ecosystem, possibly endangering coral and other sea life."

Also of concern is forest productivity. Based on previous studies and observations at the Luquillo study site, the historical frequency of major hurricane direct hits on the island was estimated to be every 50-60 years. But recent records show that it is now happening once every 10 years. With this increase in frequency and storm strength, much greater export of nitrate to nitrogen-limited coastal waters can be expected than previously estimated, which could deplete the standing stocks of nitrate in the forest and have uncertain effects on forest productivity and regrowth.

Credit: 
University of New Hampshire

Lifespan extension at low temperatures is genetically controlled

image: This is a female rotifer (Brachionus), a model system for aging research.

Image: 
Michael Shribak and Kristin Gribble

WOODS HOLE, Mass. -- Why do we age? Despite more than a century of research (and a vast industry of youth-promising products), what causes our cells and organs to deteriorate with age is still unknown.

One known factor is temperature: Many animal species live longer at lower temperature than they do at higher temperatures. As a result, "there are people out there who believe, strongly, that if you take a cold shower every day it will extend your lifespan," says Kristin Gribble, a scientist at the Marine Biological Laboratory (MBL).

But a new study from Gribble's lab indicates that it's not just a matter of turning down the thermostat. Rather, the extent to which temperature affects lifespan depends on an individual's genes.

Gribble's study, published in Experimental Gerontology, was conducted in the rotifer, a tiny animal that has been used in aging research for more than 100 years. Gribble's team exposed 11 genetically distinct strains of rotifers (Brachionus) to low temperature, with the hypothesis that if the mechanism of lifespan extension is purely a thermodynamic response, all strains should have a similar lifespan increase.

However, the median lifespan increase ranged from 6 percent to 100 percent across the strains, they found. They also observed differences in mortality rate.
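The percent-increase figure is straightforward arithmetic over median lifespans. A hypothetical example follows; the lifespan values are made up and are not the study's data:

```python
def pct_lifespan_increase(median_control_days, median_cold_days):
    """Percent increase in median lifespan at low temperature
    relative to the control temperature."""
    return 100.0 * (median_cold_days - median_control_days) / median_control_days

# e.g., an invented strain whose median lifespan doubles from
# 10 to 20 days at low temperature -> a 100 percent increase
increase = pct_lifespan_increase(10.0, 20.0)
```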

Gribble's study clarifies the role of temperature in the free-radical theory of aging, which has dominated the field since the 1950s. This theory proposes that animals age due to the accumulation of cellular damage from reactive oxygen species (ROS), a form of oxygen generated by normal metabolic processes.

"Generally, it was thought that if an organism is exposed to lower temperature, it passively lowers its metabolic rate, which slows the release of ROS and thereby slows cellular damage. That, in turn, delays aging and extends lifespan," Gribble says.

Her results, however, indicate that the change in lifespan under low temperature is likely actively controlled by specific genes. "This means we really need to pay more attention to genetic variability in thinking about responses to aging therapies," she says. "That is going to be really important when we try to move some of these therapies into humans."

Credit: 
Marine Biological Laboratory

Potential seen for tailoring treatment for acute myeloid leukemia

image: An artist's conception of chemotherapy in a hospital infusion room. The image combines photography and photo enhancement.

Image: 
Alice C. Gray

Advances in rapid screening of leukemia cells for drug susceptibility and resistance are bringing scientists closer to patient-tailored treatment for acute myeloid leukemia (AML).

Research on the drug responses of leukemia stem cells may reveal why some attempts to treat are not successful or why initially promising treatment results are not sustained.

AML is a serious disorder of certain blood-forming cells. In this disease, certain early precursor cells in the bone marrow that usually develop into white blood cells don't mature properly. They remain frozen as primitive cells called blasts, unable to differentiate and mature further. These blasts can accumulate, causing low blood counts that reduce the ability to fight infections and low platelet counts that create a risk of life-threatening hemorrhage.

Leukemia stem cells - the progenitors for the immature, cancerous blood cells - propagate AML, and also play a role in the cancer returning after treatment. Cancer researchers are interested in how genes are expressed in this cell population, because this data may hold clues to resistance to standard therapies and answers to why some patients relapse.

A study presented at the 60th Annual Meeting of the American Society of Hematology in San Diego looked at the drug response patterns of stem cells and blast cells taken from individual patients diagnosed with acute myeloid leukemia. The information was gathered through high-throughput screening, a state-of-the-art method for quickly evaluating and testing many samples.

The researchers found that leukemia stem cells and blast cells diverged in their drug susceptibility patterns, and also that these patterns differed from patient to patient.

For example, blast cells responded in the test to the drugs most commonly used to treat patients, but none of those drugs were effective against leukemia stem cells. The researchers did find 12 drugs from eight classes that seemed to preferentially target leukemia stem cells compared with blast cells. Many of these are not often used in patients with this type of cancer.

The multidisciplinary team on the project included stem cell biologists, hematologists, medical oncologists, pathologists, computer scientists, drug developers and others.

The senior researcher was hematologist Dr. Pamela Becker, professor of medicine at the UW School of Medicine. She is also a scientist at the Fred Hutchinson Cancer Research Center and the UW Medicine Institute for Stem Cells and Regenerative Medicine, and sees patients with blood disorders at the Seattle Cancer Care Alliance.

In the laboratory study, the researchers compared the drug sensitivity of blast cell and stem cell populations taken from the same six patients. In doing so, they tested a custom panel of drugs, targeted agents and drug combinations on the cells, and did genetic analyses for 194 mutations. The panel included both FDA approved and investigational drugs.

The unique drug susceptibility patterns observed in leukemia stem cells and blast cells are leading the scientists to hope that patient-specific approaches could be developed against acute myeloid leukemia, with the goal of improving the outcomes for people with this form of blood cancer.

Credit: 
University of Washington School of Medicine/UW Medicine

Disability among India's elderly much higher than census estimates

New estimates of disability among India's elderly population, based on the ability to carry out three basic living activities - walking, dressing, and toileting - show that the scale of the problem is much larger than suggested by the Indian national census.

A new paper coauthored by IIASA researcher Nandita Saikia found that 17.91% of males and 26.21% of females aged 60 and above experience disability in these areas, equating to 9 million elderly men and 14 million elderly women. The most recent census, from 2011, suggests that just 5% of the elderly population suffer from a disability. The prevalence of disability is much higher among widowed women, and among the poor and illiterate.

Saikia and Mukesh Parmar from Jawaharlal Nehru University also found a statistically significant connection between chronic morbidity, or long-term health conditions, and disability. They studied three such conditions - diabetes, high blood pressure, and heart disease. Diabetes had the highest correlation to disability, followed by high blood pressure and heart disease.

"We found that the likelihood of disability is always the highest among diabetes patients, whereas the disability rate is the lowest among elderly persons with heart disease. This may be due to high mortality among heart patients," says Saikia. "Diabetes patients, on the other hand, may live for longer periods with disability. These results are helpful for both patients and healthcare providers in terms of taking preventive measures at the onset of morbidities."

Previous studies of morbidity and disability in India were carried out using primary sample surveys, limiting them to small areas of India with small sample sizes. They therefore cannot be used to gain a generalized picture across the whole nation, as India is so large and varied. In addition, they tend to focus on the association between a specific type of morbidity and a specific disability, so they cannot give a broader picture.

Saikia and Parmar took a different approach to cover the whole country and give a broad picture for the first time. They used data from the second round of the Indian Human Development Survey which was carried out by the University of Maryland, US, and the National Council of Applied Economic Research, India. This was a survey covering more than 42,000 households across India, selected using a stratified random sampling technique, and covered various topics including health, employment, economy, and education. The second round of the survey also included questions about chronic morbidity and disability.

The researchers defined disability as difficulty or inability to perform three specific activities of daily living: walking 1 km, going to the toilet without help, and dressing without help, and looked at the data for people aged over 60. In the survey, respondents could answer "no difficulty", "can do with difficulty", or "unable to do it". Each answer was assigned a score, which allowed Saikia and Parmar to calculate what is known as the Katz Index of Independence, which takes into account multiple disabilities. As the survey also asked questions about long-term health conditions, the researchers were able to connect the disabilities to specific conditions.
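The scoring step might look like the following sketch. The numeric scores assigned to each answer are assumptions for illustration; the paper's actual coding and its mapping onto the Katz Index are not reproduced here:

```python
# Hypothetical scores per survey answer (higher = more dependent);
# the real study's coding scheme may differ.
ANSWER_SCORES = {
    "no difficulty": 0,
    "can do with difficulty": 1,
    "unable to do it": 2,
}

def disability_score(answers):
    """Sum scores across the three activities of daily living
    (walking 1 km, toileting without help, dressing without help)."""
    return sum(ANSWER_SCORES[a] for a in answers)

# e.g., one activity unimpaired, one difficult, one impossible -> 3
score = disability_score(
    ["no difficulty", "can do with difficulty", "unable to do it"]
)
```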

The researchers say that acting in a timely way to address chronic morbidity will help to minimize the huge associated burden of disability.

"Due to improved socioeconomic conditions, there is a steady increase in life expectancy and consequently aging among Indians. However, this may not translate into healthy aging, particularly when they suffer chronic diseases like diabetes. In order to prepare for a healthy old age, a social environment should be created for early detection and postponing the onset of morbidity as the later stages of life approach, by focusing on a healthy lifestyle from the beginning of adulthood," says Saikia.

She adds that policymakers should look at ways to promote healthy lifestyles among India's adult population, such as providing sufficient transport and infrastructure, increasing taxes on tobacco and alcohol, and raising awareness of the benefits of healthy diets and physical activity. The government should also consider offering more assistance to families with elderly members, particularly as family structures and societal values change. All stakeholders, including the government, community health workers, and society as a whole, must be involved.

Credit: 
International Institute for Applied Systems Analysis

Focusing on the negative is good when it comes to batteries

Imagine not having to charge your phone or laptop for weeks. That is the dream of researchers looking into alternative batteries that go beyond the current lithium-ion versions popular today. Now, in a new study appearing in the journal Science, chemists at several institutions, including Caltech and the Jet Propulsion Laboratory, which is managed by Caltech for NASA, as well as the Honda Research Institute and Lawrence Berkeley National Laboratory, have hit on a new way of making rechargeable batteries based on fluoride, the negatively charged form, or anion, of the element fluorine.

"Fluoride batteries can have a higher energy density, which means that they may last longer--up to eight times longer than batteries in use today," says study co-author Robert Grubbs, Caltech's Victor and Elizabeth Atkins Professor of Chemistry and a winner of the 2005 Nobel Prize in Chemistry. "But fluoride can be challenging to work with, in particular because it's so corrosive and reactive."

In the 1970s, researchers attempted to create rechargeable fluoride batteries using solid components, but solid-state batteries work only at high temperatures, making them impractical for everyday use. In the new study, the authors report at last figuring out how to make the fluoride batteries work using liquid components--and liquid batteries easily work at room temperature.

"We are still in the early stages of development, but this is the first rechargeable fluoride battery that works at room temperature," says Simon Jones, a chemist at JPL and corresponding author of the new study.

Batteries drive electrical currents by shuttling charged atoms--or ions--between a positive and negative electrode. This shuttling process proceeds more easily at room temperature when liquids are involved. In the case of lithium-ion batteries, lithium is shuttled between the electrodes with the help of a liquid solution, or electrolyte.

"Recharging a battery is like pushing a ball up a hill and then letting it roll back again, over and over," says co-author Thomas Miller, professor of chemistry at Caltech. "You go back and forth between storing the energy and using it."

While lithium ions are positive (called cations), the fluoride ions used in the new study bear a negative charge (and are called anions). There are both challenges and advantages to working with anions in batteries.

"For a battery that lasts longer, you need to move a greater number of charges. Moving multiply charged metal cations is difficult, but a similar result can be achieved by moving several singly charged anions, which travel with comparative ease," says Jones, who does research at JPL on power sources needed for spacecraft. "The challenges with this scheme are making the system work at useable voltages. In this new study, we demonstrate that anions are indeed worthy of attention in battery science since we show that fluoride can work at high enough voltages."
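The charge bookkeeping behind Jones's point can be made concrete with a toy calculation: moving three singly charged fluoride anions transfers the same total charge as moving one triply charged metal cation. The numbers below are illustrative only, not from the study.

```python
# Toy charge-transfer arithmetic: several singly charged anions can move
# as much charge as one multiply charged cation.

ELEMENTARY_CHARGE = 1.602176634e-19  # coulombs

def charge_transferred(n_ions, charge_per_ion):
    """Total charge (in coulombs) moved by n_ions of the given valence."""
    return n_ions * charge_per_ion * ELEMENTARY_CHARGE

one_trivalent_cation = charge_transferred(1, 3)   # e.g. a 3+ metal cation
three_fluoride_anions = charge_transferred(3, 1)  # three F- ions

assert one_trivalent_cation == three_fluoride_anions
```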

The key to making the fluoride batteries work in a liquid rather than a solid state turned out to be an electrolyte liquid called bis(2,2,2-trifluoroethyl)ether, or BTFE. This solvent is what helps keep the fluoride ion stable so that it can shuttle charge back and forth in the battery. Jones says his intern at the time, Victoria Davis, who now studies at the University of North Carolina, Chapel Hill, was the first to think of trying BTFE. While Jones did not have much hope it would succeed, the team decided to try it anyway and were surprised it worked so well.

At that point, Jones turned to Miller for help in understanding why the solution worked. Miller and his group ran computer simulations of the reaction and figured out which aspects of BTFE were stabilizing the fluoride. From there, the team was able to tweak the BTFE solution, modifying it with additives to improve its performance and stability.

"We're unlocking a new way of making longer-lasting batteries," says Jones. "Fluoride is making a comeback in batteries."

Credit: 
California Institute of Technology

Yin and yang: Opposites in nature, fluoride and lithium, compete for higher energy batteries

image: Researchers discovered a new molecule, BTFE, that helps fluoride dissolve at room temperature for building higher energy batteries.

Image: 
Purdue University image/Brett Savoie

WEST LAFAYETTE, Ind. -- The chemical element that makes up most of today's batteries, lithium, may soon be challenged by its polar opposite on the Periodic Table: fluoride. Yes, the same stuff in toothpaste.

The two elements would be in competition for helping electronics last longer on a charge, such as electric cars that need to travel more miles than is possible with lithium-ion batteries on the market.

Researchers are one step closer to equipping fluoride-based batteries for battle with improvements that allow the technology to operate at room temperature. Up until this point, fluoride had been limited to building high-temperature batteries that are impractical for our electronic devices.

A team of researchers at the Jet Propulsion Laboratory, the California Institute of Technology (Caltech), the Honda Research Institute, Inc. and Lawrence Berkeley National Laboratory - including a Caltech postdoctoral researcher who is now an assistant professor at Purdue University - has secured two U.S. patents for the improvements and published findings in the Dec. 6 issue of the journal Science.

Fluoride has long been in the running to trump lithium because of its potential for better energy storage in electrodes, which ions move between to charge a battery.

"Fluoride-based battery electrodes can store more ions per site than typical lithium-ion electrodes, which means that this technology has the capability to be much more energy dense," said Brett Savoie, a Purdue assistant professor of chemical engineering.

Lithium and fluoride share a yin-and-yang relationship: lithium is the most electropositive element on the Periodic Table, meaning that it likes to lose electrons, while fluorine is the most electronegative element, wanting only to acquire electrons. Giving lithium electrons it doesn't want stores energy, while taking electrons away from fluoride also stores energy.

To build a battery, the ions of elements like fluorine and lithium must dissolve into the battery's electrolyte, a solution that helps them to travel between electrodes. The problem is that fluoride ions have only been able to dissolve well into solid electrolytes, limiting their use to high-temperature batteries.

For fluoride-based batteries to operate at room temperature, fluoride ions would need to dissolve better into a liquid electrolyte, like lithium ions do.

The technology could then move toward unseating lithium-ion, a cation-based battery chemistry, with the first high-performing, anion-based rechargeable battery.

Researchers at the Jet Propulsion Laboratory discovered a liquid electrolyte, a synthesized molecule called BTFE, which allows fluoride to dissolve at room temperature. Savoie helped to make this discovery by simulating how BTFE and other related solvents successfully dissolve fluoride.

BTFE is made up of several chemical groups that are arranged to give the molecule two positively charged regions that strongly interact with fluoride, since opposites attract. Simulations showed how these charged regions lead BTFE molecules to surround fluoride and dissolve it at room temperature.

Savoie's simulations also provided a mechanism for testing other solvents on fluoride, such as "glyme" molecules that expand the voltage and stability window of BTFE. This means that the battery would be less likely to fail at higher voltages.

The next step in beefing up fluoride-based batteries is extending the lifetimes of the positive and negative electrodes, called the cathode and anode. The team has already made some headway with this by stabilizing the copper cathode so that it doesn't dissolve into the electrolyte.

Battery testing is underway. The work was supported by the Resnick Sustainability Institute and the Molecular Materials Research Center, both at Caltech, the National Science Foundation, the Department of Energy Office of Science and the Honda Research Institute.

This research also aligns with Purdue's Giant Leaps celebration, acknowledging the university's global advancements made toward a sustainable economy and planet as part of Purdue's 150th anniversary. This is one of the four themes of the yearlong celebration's Ideas Festival, designed to showcase Purdue as an intellectual center solving real-world issues.

Credit: 
Purdue University

Learning from mistakes

video: Researchers in the Adolphs laboratory at Caltech have discovered that certain types of neurons called error neurons are more active when we make a mistake. Take the Stroop test and see how you fare.

Image: 
Caltech

Everyone makes little everyday mistakes out of habit--a waiter says, "Enjoy your meal," and you respond with, "You, too!" before realizing that the person is not, in fact, going to be enjoying your meal. Luckily, there are parts of our brains that monitor our behavior, catching errors and correcting them quickly.

A Caltech-led team of researchers has now identified the individual neurons that may underlie this ability. The work provides rare recordings of individual neurons located deep within the human brain and has implications for psychiatric diseases like obsessive-compulsive disorder.

The work was a collaboration between the laboratories of Ralph Adolphs (PhD '93), Bren Professor of Psychology, Neuroscience, and Biology, and the Allen V. C. Davis and Lenabelle Davis Leadership Chair and director of the Caltech Brain Imaging Center of the Tianqiao and Chrissy Chen Institute for Neuroscience; and Ueli Rutishauser (PhD '08), associate professor of neurosurgery, neurology, and biomedical sciences, and Board of Governors Chair in Neurosciences at Cedars-Sinai Medical Center.

"Many people know the feeling of making a mistake and quickly catching oneself--for example, when you are typing and press the wrong key, you can realize you made a mistake without even needing to see the error on the screen," says Rutishauser, who is also a visiting associate in Caltech's Division of Biology and Biological Engineering. "This is an example of how we self-monitor our own split-second mistakes. Now, with this research, we know which neurons are involved in this, and we are starting to learn more about how the activity of these neurons helps us change our behavior to correct errors."

In this work, led by Caltech graduate student Zhongzheng (Brooks) Fu, the researchers aimed to get a precise picture of what happens on the level of individual neurons when a person catches themselves after making an error. To do this, they studied people who have had thin electrodes temporarily implanted into their brains (originally to help localize epileptic seizures). The work was done in collaboration with neurosurgeon Adam Mamelak, professor of neurosurgery at Cedars-Sinai, who has conducted such electrode implantations for clinical monitoring of epilepsy for over a decade and closely collaborated on the research studies.

While neural activity was measured in their medial frontal cortex (MFC), a brain region known to be involved in error monitoring, the epilepsy patients were given a so-called Stroop task to complete. In this task, a word is displayed on a computer screen, and the patients are asked to identify the color of the text. Sometimes, the text and the color are the same (the word "green" for example, is shown in green). In other cases, the word and the color are different ("green" is shown in red text). In the latter case, the correct answer would be "red," but many people make the error of saying "green." These are the errors the researchers studied.
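The task logic described above is simple enough to sketch: the correct response is always the ink color, and errors arise on incongruent trials when a participant reads the word instead. The trial construction below is illustrative, not the study's actual stimulus code.

```python
# Minimal sketch of a Stroop trial: a color word rendered in an ink color
# that may or may not match it. Illustrative only.

import random

COLORS = ["red", "green", "blue"]

def make_trial(rng):
    """Build one trial: a color word drawn in a (possibly different) ink color."""
    word, color = rng.choice(COLORS), rng.choice(COLORS)
    return {"word": word, "color": color, "congruent": word == color}

def is_error(trial, response):
    """An error is naming the word instead of the ink color."""
    return response != trial["color"]

rng = random.Random(0)
trials = [make_trial(rng) for _ in range(5)]

# The classic Stroop error on an incongruent trial:
trial = {"word": "green", "color": "red", "congruent": False}
print(is_error(trial, "green"))  # True: read the word, not the ink color
print(is_error(trial, "red"))    # False: the ink color is the correct answer
```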

The measurements allowed the team to identify specific neurons in the MFC, called self-monitoring error neurons, that would fire immediately after a person made an error, well before they were given feedback about their answer.

For decades, scientists have studied how people self-detect errors using electrodes placed on the surface of the skull that measure the aggregate electrical activity of thousands of neurons. These so-called electroencephalograms reveal that one particular brainwave signature, called the error-related negativity (ERN), is commonly seen on the skull over the MFC right after a person makes an error. In their experiments, Fu and his colleagues simultaneously measured the ERN as well as the firing of individual error neurons.

They discovered two fundamental new aspects of the ERN. First, an error neuron's activity level was positively correlated with the amplitude of the ERN: the larger the ERN for a particular error, the more active were the error neurons. This finding reveals that an observation of the ERN--a noninvasive measurement--provides information about the level of activity of error neurons found deep within the brain. Second, they found that this ERN-single-neuron correlation, in turn, predicted whether the person would change their behavior--that is, if they would slow down and focus more to avoid making an error on their next answer. If the error neurons fired but the brain-wide ERN signature was not seen or was weak, the person might still recognize that they made an error, but they would not modify their behavior for the next task. This suggests that the error neurons need to communicate their error detection to a large brain network in order to influence behavior.
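The first finding above is a trial-by-trial correlation between scalp-recorded ERN amplitude and error-neuron firing rate. A minimal sketch of that kind of analysis is shown below; the data are synthetic, and the study's actual pipeline is far richer than a single Pearson correlation.

```python
# Sketch: correlate per-trial ERN amplitude with error-neuron firing rate.
# All numbers are made up for illustration.

import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Per-error-trial measurements (synthetic):
ern_amplitude = [2.1, 3.5, 1.2, 4.0, 2.8]      # microvolts (magnitude)
firing_rate   = [11.0, 15.2, 8.5, 17.1, 13.0]  # spikes per second

r = pearson_r(ern_amplitude, firing_rate)
print(round(r, 3))  # strongly positive: larger ERN, more active error neurons
```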

The researchers found further specific evidence for parts of the circuit involved.

"We found error neurons in two different parts of the MFC: the dorsal anterior cingulate cortex (dACC) and the pre-supplementary motor area (pre-SMA)," says Fu. "The error signal appeared in the pre-SMA 50 milliseconds earlier than in the dACC. But only in the dACC was the correlation between the ERN and error neurons predictive of whether a person would modify their behavior. This reveals a hierarchy of processing--an organizational structure of the circuit at the single-neuron level that is important for executive control of behavior."

The research could also have implications for understanding obsessive-compulsive disorder, a condition in which a person continuously attempts to correct perceived "errors." For example, some individuals with this condition will feel a need to repeatedly check, in a short time period, if they have locked their door. Some people with obsessive-compulsive disorder have been shown to have an abnormally large ERN potential, indicating that their error-monitoring circuitry is overactive. The discovery of error neurons might facilitate new treatments to suppress this overactivity.

The researchers next hope to identify how the information from error neurons flows through the brain in order to produce behavioral changes like slowing down and focusing. "So far, we have identified two brain regions in the frontal cortex that appear to be part of a sequence of processing steps, but, of course, the entire circuit is going to be much more complex than that," says Adolphs. "One important future avenue will be to combine studies that have very fine resolution, such as this one, with studies using fMRI [functional magnetic resonance imaging] that give us a whole-brain field of view."

Credit: 
California Institute of Technology

New PET tracer identified for imaging Tau in Alzheimer's disease patients

video: Dr. Dean F. Wong from Johns Hopkins reports on the identification of a promising second-generation positron emission tomography (PET) tracer for imaging and measuring tau pathology, contributing to understanding of Alzheimer's and related dementias. The research is featured in the December 2018 issue of The Journal of Nuclear Medicine (http://jnm.snmjournals.org).

Image: 
The Journal of Nuclear Medicine

In the diagnosis of Alzheimer's disease and the search for effective treatments, tau tangles in the brain have joined amyloid build-up as markers of the disease and potential therapy targets. In the December issue of The Journal of Nuclear Medicine, the featured article of the month reports on the identification of a promising second-generation positron emission tomography (PET) tracer for imaging and measuring tau pathology.

"We compared three novel tau-specific radiopharmaceuticals--11C-RO-963, 11C-RO-643, and 18F-RO-948--that showed pre-clinical in vitro and in vivo promise for use in imaging human tau (Honer et al., JNM, April 2018)," explains Dean F. Wong, MD, PhD, Johns Hopkins University professor of radiology, neurology, psychiatry and neurosciences and director of the Division of Nuclear Medicine's Section of High Resolution Brain PET Imaging.

In this first human evaluation of these novel radiotracers, healthy humans and patients with Alzheimer's disease (AD) were studied using an innovative study design to perform head-to-head comparisons of the three compounds in a pairwise fashion. Wong states, "This design allowed us to select one radioligand, 18F-RO-948, as the most promising second-generation tau radiopharmaceutical for larger scale use in human PET tau imaging."

Over all brain regions and subjects, the trend was for 18F-RO-948 to have the highest standardized uptake value (SUVpeak), followed by 11C-RO-963 and then 11C-RO-643. Regional analysis of SUV ratio and total distribution volume for 11C-RO-643 and 18F-RO-948 clearly discriminated the AD group from the healthy control groups. Compartmental modeling confirmed that 11C-RO-643 had lower brain entry than either 11C-RO-963 or 18F-RO-948 and that 18F-RO-948 showed better contrast between areas of high versus low tau accumulation.
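The uptake measures compared above can be illustrated with the standard body-weight-normalized SUV formula and a target-to-reference ratio (SUVR). The formula is the conventional one; the uptake values and the choice of reference region are assumptions for illustration, not values from the study.

```python
# Sketch of the standardized-uptake-value arithmetic used to compare tracers.
# Numbers are hypothetical.

def suv(tissue_kbq_per_ml, dose_kbq, weight_g):
    """SUV = tissue activity concentration / (injected dose / body weight)."""
    return tissue_kbq_per_ml / (dose_kbq / weight_g)

def suv_ratio(target_suv, reference_suv):
    """SUVR: target-region uptake relative to a reference region
    (cerebellar gray matter is a common choice in tau PET)."""
    return target_suv / reference_suv

# A 70 kg subject injected with 370 MBq (370,000 kBq):
target = suv(5.0, 370_000, 70_000)     # ~0.95 in a tau-rich region
reference = suv(3.0, 370_000, 70_000)  # ~0.57 in the reference region
print(round(suv_ratio(target, reference), 2))  # 1.67
```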

Subsequent analysis therefore focused on 18F-RO-948. Both voxelwise and region-based analysis of 18F-RO-948 binding in healthy controls versus AD subjects revealed multiple areas where AD subjects significantly differed from healthy controls. Voxelwise analysis also revealed a set of symmetric clusters where AD subjects had higher binding than healthy controls.

"Importantly, this new tracer appears to have much less off-target binding than was reported for existing tau tracers," notes Wong. "In particular, it has less binding to the choroid plexus adjacent to the hippocampus, which has confounded interpretation of mesial temporal tau measured by first-generation PET tau tracers."

He points out, "The significance of this research and the companion research reported in Kuwabara et al. in this same issue is that they describe in detail the selection and quantification of a second-generation tau PET imaging agent as a complement to amyloid imaging, allowing us to accurately measure tau pathology in living people and contributing to our understanding of the pathophysiology of Alzheimer's and related dementias. Better tau PET radiopharmaceuticals also provide the promise of improved target engagement and monitoring of anti-tau treatments in future Alzheimer's clinical trials."

Collaboration has been key to this research process. Wong emphasizes, "These findings demonstrate the impact of the complementary strengths of preclinical, translational and clinical research with university PET and memory experts, NIH aging experts and dedicated imaging neuroscientists in the pharmaceutical industry to approach one of the greatest global public health challenges--i.e., Alzheimer's disease, where there is still no definitive cure. Improved biomarkers such as PET imaging of tau and, in the future, other dementia-implicated proteins are vital to reducing the enormous costs of drug development (typically $1B-$2B a year per drug) and eventually understanding and treating Alzheimer's."

Credit: 
Society of Nuclear Medicine and Molecular Imaging

On the trail of the Higgs Boson

For the physics community, the discovery of new particles like the Higgs Boson has paved the way for a host of exciting potential experiments. Yet, when it comes to such an elusive particle as the Higgs Boson, it's not easy to unlock the secrets of the mechanism that led to its creation. The experiments designed to detect the Higgs Boson involve colliding particles with sufficiently high energy head-on after accelerating them in the Large Hadron Collider (LHC) at CERN in Geneva, Switzerland. In a quest to understand the production mechanisms for the Higgs Boson, Silvia Biondi from the National Institute of Nuclear Physics, Bologna, Italy investigated the traces of a rare process, called ttH, in which the Higgs Boson is produced in association with a pair of elementary particles referred to as top quarks. Her findings can be found in a recent study published in EPJ Plus. Future LHC experiments are expected to yield even more precise measurements of the Higgs Boson's ability to couple with particles that physicists are already familiar with.

Biondi first looked at data from the initial experiments performed in 2010, 2011 and 2012. Unfortunately, that data lacked the statistical significance needed to yield a suitable measurement of the processes leading to the Higgs Boson's creation. However, more recent LHC data, such as that taken by the ATLAS experiment in 2015 and 2016, attained the requisite level of precision to study the ttH creation mechanisms.

She then devised a method for reconstructing the signals that could stem from Higgs particles in each set of collision data. In this way, she enhanced the ability to discriminate between an actual Higgs Boson, background noise, and particles that are in the same energy state but do not have the characteristics of the Higgs Boson. Finally, she performed a procedure to compare the expected theoretical probability that a Higgs Boson will appear with the measured probability of the ttH process taking place.

Credit: 
Springer

Sensors to detect and measure cancer's ability to spread developed

image: A tumor cell that has acquired high metastatic potential during chemotherapy lights up with a high FRET biosensor readout, whereas cells that are sensitive to chemotherapy (and hence have low metastatic potential) stay dark.

Image: 
UC San Diego Health

The spread of invasive cancer cells from a tumor's original site to distant parts of the body is known as metastasis. It is the leading cause of death in people with cancer. In a paper published online in iScience, University of California San Diego School of Medicine researchers reported engineering sensors that can detect and measure the metastatic potential of single cancer cells.

"Cancer would not be so devastating if it did not metastasize," said Pradipta Ghosh, MD, professor in the UC San Diego School of Medicine departments of Medicine and Cellular and Molecular Medicine, director of the Center for Network Medicine and senior study author.

"Although there are many ways to detect metastasis once it has occurred, there has been nothing available to 'see' or 'measure' the potential of a tumor cell to metastasize in the future. So at the Center for Network Medicine, we tackled this challenge by engineering biosensors designed to monitor not one, not two, but multiple signaling programs that drive tumor metastasis; upon sensing those signals a fluorescent signal would be turned on only when tumor cells acquired high potential to metastasize, and therefore turn deadly."

Cancer cells alter normal cell communications by hijacking one of many signaling pathways to permit metastasis to occur. As the tumor cells adapt to the environment or cancer treatment, predicting which pathway will be used becomes difficult. By comparing proteins and protein modifications in normal versus all cancer tissues, Ghosh and colleagues identified a particular protein and its unique modification called tyrosine-phosphorylated CCDC88A (GIV/Girdin) that are only present in solid tumor cells. Comparative analyses indicated that this modification could represent a point of convergence of multiple signaling pathways commonly hijacked by tumor cells during metastasis.

The team used novel engineered biosensors and sophisticated microscopes to monitor the modification on GIV and found that, indeed, fluorescent signals reflected a tumor cell's metastatic tendency. They were then able to measure the metastatic potential of single cancer cells and account for the unknowns of an evolving tumor biology through this activity. The result was the development of Fluorescence Resonance Energy Transfer (FRET) biosensors.
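The readout described above amounts to a ratiometric classification: cells whose FRET signal exceeds a cutoff "light up" as high metastatic potential, while the rest stay dark. The sketch below is purely illustrative; the threshold, intensity values, and ratio convention are assumptions, not parameters from the study.

```python
# Illustrative thresholding of a FRET biosensor readout per cell.
# The cutoff and intensities are hypothetical.

FRET_RATIO_THRESHOLD = 1.5  # assumed cutoff, not from the study

def fret_ratio(acceptor_intensity, donor_intensity):
    """Ratio of acceptor to donor emission for one cell."""
    return acceptor_intensity / donor_intensity

def high_metastatic_potential(acceptor, donor):
    """Cells 'light up' (high FRET readout) above the cutoff; others stay dark."""
    return fret_ratio(acceptor, donor) > FRET_RATIO_THRESHOLD

print(high_metastatic_potential(920.0, 400.0))  # True  (ratio 2.3)
print(high_metastatic_potential(300.0, 400.0))  # False (ratio 0.75)
```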

Although cancer cells are highly aggressive and adaptive, very few of them metastasize, and their metastatic potential comes and goes, said Ghosh. If metastasis can be predicted, this data could be used to personalize treatment for individual patients. For example, patients whose cancer is not predicted to metastasize or whose disease could be excised surgically might be spared from highly toxic therapies, said Ghosh. Patients whose cancer is predicted to spread aggressively might be treated with precision medicine to target the metastatic cells.

"It's like looking at a Magic 8 Ball, but with a proper yardstick to measure the immeasurable and predict outcomes," said Ghosh. "We have the potential not only to obtain information on single cell level, but also to see the plasticity of the process occurring in a single cell. This kind of imaging can be used when we are delivering treatment to see how individual cells are responding."

The sensors need further refinement, wrote the authors, but have the potential to be a transformative advance for cancer cell biology.

Credit: 
University of California - San Diego