Research shows that older patients with untreated sleep apnea need greater medical care

BALTIMORE, MD - Obstructive sleep apnea (OSA) is a common and costly medical condition leading to a wide range of health risks such as cardiovascular disease, stroke, depression, diabetes and even premature death. Researchers at the University of Maryland School of Medicine (UMSOM) found that the medical costs are substantially higher among older adults who go untreated for the disorder.

The research, which was published in the Journal of Clinical Sleep Medicine, involved a review of a national sample of Medicare claims data. The researchers measured health care costs over one year among Medicare beneficiaries aged 65 and older who were ultimately diagnosed with OSA. They found that patients who had not yet been diagnosed with OSA had more doctor's appointments, emergency room visits, and hospital stays during the 12 months before they were treated for the disorder. On average, these patients incurred nearly $20,000 more in costs per year than those who were diagnosed and treated for OSA, the research found.

"Sleep disorders represent a massive economic burden on the U.S. health care system," said Emerson Wickwire, PhD, Associate Professor of Psychiatry and Medicine at UMSOM and Director of the Insomnia Program at the University of Maryland Medical Center, Midtown Campus. Dr. Wickwire, who was the Principal Investigator for this research, explained that economic aspects of diseases are increasingly recognized as important drivers of health decisions by patients, those paying for services, policymakers and ultimately the taxpayers.

Medical costs among those untreated for OSA will continue to rise, Dr. Wickwire cautioned, highlighting the importance of early detection and treatment among older adults.

"We conducted the largest economic analysis of sleep apnea among older adults to date," said Dr. Wickwire. "Medicare beneficiaries with obstructive sleep apnea cost taxpayers an additional $19,566 per year and utilized more outpatient, emergency, inpatient, prescription, and overall health care services. It's important to realize that costs associated with untreated sleep disorders are likely to continue to accrue year after year, which is why our group focuses on early recognition and treatment."

Researchers also observed that Medicare patients with OSA were more likely than individuals without the sleep disorder to suffer from other ailments. For example, OSA is linked to an increased risk for high blood pressure, diabetes, heart disease, stroke and depression. The study authors suggest that insurers, legislators, and health systems leaders consider routine screening for OSA in older patients, especially those with medical and psychiatric comorbidities, to better contain treatment costs.

"The good news is that highly effective diagnostic and treatment strategies are available. Our team is currently using big data approaches as well as highly personalized sleep disorders treatments to improve outcomes and reduce costs associated with sleep disorders, "said Dr. Wickwire.

The research is critical as OSA affects up to 70% of elderly nursing home residents, and these individuals are at higher risk of death.

A 2016 report by the American Academy of Sleep Medicine estimated that undiagnosed OSA among U.S. adults costs $149.6 billion annually. While the report projected it would cost the health care system nearly $50 billion to diagnose and treat every American adult with OSA, treatment would produce savings of $100 billion. The current study in JCSM is the largest analysis to date of the economic burden of untreated OSA among older adult Medicare beneficiaries. Dr. Wickwire's research was funded by ResMed as an investigator-initiated grant.

"Early detection and treatment for disorders such as obstructive sleep apnea is critical, particularly among older adults who face the risk of the most serious illnesses such as cardiovascular disease, hypertension, stroke, and diabetes," said UMSOM Dean E. Albert Reece, MD, PhD, MBA, who is also Executive Vice President for Medical Affairs, UM Baltimore, and the John Z. and Akiko K. Bowers Distinguished Professor, University of Maryland School of Medicine.

Credit: 
University of Maryland School of Medicine

January Alzheimer's & Dementia highlights: Sleep, race/ethnicity, artificial intelligence

image: January 2020 Journal Cover for Alzheimer's & Dementia: The Journal of the Alzheimer's Association

Image: 
Credit Alzheimer's Association

CHICAGO, January 16, 2020 - A sleep medication already approved to treat insomnia in the general population may also be beneficial for individuals with Alzheimer's, according to a new study published online today by Alzheimer's & Dementia: The Journal of the Alzheimer's Association.

The drug, suvorexant, was found to improve total sleep time for individuals with probable Alzheimer's disease dementia and insomnia as compared to a placebo. Sleep disturbances are common among individuals with Alzheimer's and other dementias. The researchers note that treating sleep problems in individuals with Alzheimer's, who may also be taking other medications for symptoms or behavioral problems related to their dementia, carries a potential for making cognitive declines worse. Suvorexant did not worsen the underlying cognitive impairment in the study participants.

W. Joseph Herring, M.D., Ph.D., from Merck & Co., Inc. and colleagues conducted a four-week, randomized phase 3 clinical trial with 277 participants. The researchers found that total sleep time improved from baseline by 73 minutes for the suvorexant group and 45 minutes for the placebo group. The study authors note that the suvorexant-mediated sleep effects are maintained over the course of the night, unlike other currently available sleep drugs.

Link: "Polysomnographic assessment of suvorexant in patients with probable Alzheimer's disease dementia and insomnia: A randomized trial"

The December print issue of Alzheimer's & Dementia includes articles that add to the growing body of research suggesting that the risk and progression of Alzheimer's vary by race. Differences were also found within the broader categories of Hispanic or Latino, depending on a person's family background. These studies illustrate the need for more diversity in research studies to better understand the racial and ethnic disparities in the risk and prevalence of Alzheimer's. By understanding how race may alter Alzheimer's risk and genetics, researchers may be able to create better diagnostic tools and treatment strategies that are effective in all communities. Key findings include:

Researchers found nearly 10% of middle-age and older Latinos have mild cognitive impairment (MCI), a decline in memory and thinking skills. Prevalence ranges from 12.9% for individuals with Puerto Rican backgrounds to 8.0% among individuals with Cuban backgrounds.

Older age, high cardiovascular disease risk and depression symptoms were significantly associated with MCI diagnosis.

The studies include:

Link: "Sex/gender differences in cognitive trajectories vary as a function of race/ethnicity"

Link: "Local ancestry at APOE modifies Alzheimer's disease risk in Caribbean Hispanics"

Link: "Prevalence and correlates of mild cognitive impairments among diverse Hispanics/Latinos: Study of Latinos-Investigation of Neurocognitive Aging results"

Artificial intelligence may be useful to identify individuals at high risk for developing dementia by sorting through hospital discharge summaries, according to a study available online by Alzheimer's & Dementia: The Journal of the Alzheimer's Association.

Thomas H. McCoy, Jr., M.D., of Massachusetts General Hospital, Boston, created computer software to analyze the cognitive and behavioral symptoms documented in discharge notes in electronic health records of more than 500,000 individuals from two academic medical centers. McCoy and colleagues found that their machine learning system may help predict the risk of a healthy person developing dementia up to eight years in advance of a diagnosis.

According to the Alzheimer's Association, early detection may identify individuals at high risk so they can get further clinical evaluation. An earlier and more accurate diagnosis enables access to approved drug treatments, and allows individuals to participate in planning for their future.

Link: "Stratifying risk for dementia onset using large-scale electronic health record data: a retrospective cohort study"

These articles and the rest of the January and December issues of Alzheimer's & Dementia: The Journal of the Alzheimer's Association are available online.

Credit: 
Alzheimer's Association

Head/neck cancer diagnosis, time to treatment after ACA Medicaid expansions

Bottom Line: Researchers for this observational study examined the association between the expansion of Medicaid coverage in some states after the Patient Protection and Affordable Care Act (ACA) was passed and the diagnosis and treatment of patients with head and neck squamous cell carcinoma (HNSCC). The analysis included nearly 91,000 adults with newly diagnosed HNSCC who were identified from the National Cancer Database. Researchers report that between the pre-ACA (2010-2013) and post-ACA (2014-2016) periods, the percentage of uninsured patients with HNSCC decreased, but those decreases didn't differ significantly between expansion and nonexpansion states. The percentage of patients diagnosed with localized (stage I or II) HNSCC decreased from the pre-ACA to post-ACA periods in both expansion and nonexpansion states, but the decreases were smaller in Medicaid expansion states, resulting in a small relative increase in patients diagnosed with localized disease in those states. The average time to beginning treatment didn't differ overall between expansion and nonexpansion states but improved for patients with nonoropharyngeal HNSCC in expansion states relative to nonexpansion states. Limitations of the study include possible misclassification of insurance coverage.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

Authors: Evan M. Graboyes, M.D., Medical University of South Carolina, Charleston, and coauthors.

(doi:10.1001/jamaoto.2019.4310)

Editor's Note: The article includes conflict of interest and funding/support disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.

#  #  #

Media advisory: To contact corresponding author Evan M. Graboyes, M.D., email Montez Seabrook at seabromo@musc.edu. The full study is linked to this news release.

Embed this link to provide your readers free access to the full-text article This link will be live at the embargo time https://jamanetwork.com/journals/jamaotolaryngology/fullarticle/10.1001/jamaoto.2019.4310?guestAccessKey=7d558789-063d-4e77-a908-e043b73f939b&utm_source=For_The_Media&utm_medium=referral&utm_campaign=ftm_links&utm_content=tfl&utm_term=011620

Credit: 
JAMA Network

Bartonella bacteria found in hemangiosarcoma tumors from dogs

Researchers from North Carolina State University have found a very high prevalence of Bartonella bacteria in tumors and tissues - but not blood samples - taken from dogs with hemangiosarcoma, a cancer of the blood vessels. The work further supports the connection between persistent infection and some types of cancer and adds to the evidence that Bartonella can remain and thrive, undetected, within tissue.

Hemangiosarcoma (HSA) is an aggressive, deadly cancer that arises from cells lining the blood vessels. It is responsible for two-thirds of all heart or splenic tumors in dogs, and is most common in medium-sized and middle-aged dogs. Since HSA usually cannot be diagnosed without major abdominal surgery, most HSA remains undetected until it has reached an advanced stage, resulting in a one-year survival rate of only 12 to 20%.

"There are clear precedents for the involvement of bacterial infections in tumor development," says Ed Breitschwerdt, Melanie S. Steele Distinguished Professor of Medicine at NC State's College of Veterinary Medicine and corresponding author of a paper describing the work. "Given the established links between chronic inflammation and cancer, we wanted to determine whether chronic infection of blood vessels due to bacteria could be a contributing cause of this cancer."

Breitschwerdt and colleagues from NC State looked at tumor tissue, non-tumor tissue and blood samples from 110 dogs with HSA from across the U.S. They screened both the tissues and the blood for Bartonella, Babesia, and Mycoplasma, three vector-borne pathogens associated specifically with blood infections.

Bartonella DNA was amplified and sequenced from 80 of the dogs with HSA: it was present in 34% of tumor tissue and 63% of non-tumor tissue, but appeared in none of the blood samples. Mycoplasma DNA was only amplified from 5 of the dogs and Babesia wasn't detected in any dog.

"Research in recent years has confirmed that persistent infection with or inflammation caused by stealth pathogens is a risk factor for developing cancer later in life," Breitschwerdt says. "With the exception of Helicobacter pylori, the emphasis on evaluating the relationship between infection and cancer has focused on viruses. But intracellular bacterial pathogens such as Bartonella may also play an important and previously uninvestigated role.

"Bartonella is a stealth pathogen - it can 'hide' in the cells that line blood vessel walls, which is part of what makes it so difficult to detect," Breitschwerdt says. "This work adds more evidence to the connection between infection and cancer risk, and demonstrates that molecular testing of whole blood samples does not rule out the tissue presence of this pathogen. Future studies are needed to investigate whether Bartonella infection can be a cause of HSA. Our team will be focusing on creating more sensitive diagnostic testing as part of this effort."

Credit: 
North Carolina State University

'Living fossil' may upend basic tenet of evolutionary theory

The field of evolutionary biology has seen its share of spirited debates. But if there's one principle that virtually every expert in the field agrees on, it's that natural selection occurs at the level of the genome.

But now, a UC San Francisco-led research team has discovered the first conclusive evidence that selection may also occur at the level of the epigenome -- a term that refers to an assortment of chemical "annotations" to the genome that determine whether, when and to what extent genes are activated -- and has done so for tens of millions of years. This unprecedented finding subverts the widely accepted notion that over geologic timescales, natural selection acts exclusively on variation in the genome sequence itself.

In a study published Jan. 16, 2020 in the journal Cell, the researchers show that Cryptococcus neoformans -- a pathogenic yeast that infects people with weakened immune systems and is responsible for about 20 percent of all HIV/AIDS-related deaths -- contains a particular epigenetic "mark" on its DNA sequence, which, based on their lab experiments and statistical models, should have disappeared from the species sometime during the age of the dinosaurs.

But the study shows that this methylation mark -- so named because it's created through a process that attaches a molecular tag called a methyl group to the genome -- has managed to stick around for at least 50 million years -- maybe as long as 150 million years -- past its predicted expiration date. This amazing feat of evolutionary tenacity is made possible by an unusual enzyme and a hefty dose of natural selection.

"What we've seen is that methylation can undergo natural variation and can be selected for over million-year time scales to drive evolution," explained Hiten Madhani, MD, PhD, professor of biochemistry and biophysics at UCSF and senior author of the new study. "This is a previously unappreciated mode of evolution that's not based on changes in the organism's DNA sequence."

Though not seen in all life forms, DNA methylation isn't uncommon either. It's found in all vertebrates and plants, as well as many fungi and insects. In some species, however, methylation is nowhere to be found.

"Methylation has a patchy evolutionary presence," said Madhani, who is also a member of the UCSF Helen Diller Family Comprehensive Cancer Center and a Chan-Zuckerberg Biohub investigator. "Depending on what branch of the evolutionary tree you look at, different epigenetic mechanisms have been maintained or not maintained."

Many model organisms that are staples of the modern molecular biology lab -- including the baker's yeast S. cerevisiae, the roundworm C. elegans, and the fruit fly D. melanogaster -- lack DNA methylation entirely. These species are descended from ancient ancestors that lost enzymes that were, until this study was published, thought to be essential for propagating methylation for generation upon generation. How C. neoformans managed to avoid the same fate was a mystery up to now.

In the new study, Madhani and his collaborators show that hundreds of millions of years ago, the ancestor of C. neoformans had two enzymes that controlled DNA methylation. One was what's known as a "de novo methyltransferase," which was responsible for adding methylation marks to "naked" DNA that had none. The other was a "maintenance methyltransferase" that functioned a bit like a molecular Xerox. This enzyme copied existing methylation marks, which had been put in place by the de novo methyltransferase, onto unmethylated DNA during DNA replication. And like every other species with an epigenome that includes methylation, the ancestor of C. neoformans had both types of methyltransferase.

But then, sometime during the age of the dinosaurs, the ancestor of C. neoformans lost its de novo enzyme. Its descendants have been living without one since then, making C. neoformans and its closest relatives the only species alive today known to have DNA methylation without a de novo methyltransferase. "We didn't understand how methylation could still be in place since the Cretaceous period without a de novo enzyme," said Madhani.

Though the maintenance methyltransferase was still available to copy any existing methylation marks -- and the new study clearly demonstrates that this enzyme is unique among such enzymes for a number of reasons, including its ability to propagate existing methylation marks with exceptionally high fidelity -- the study also shows that unless natural selection were acting to preserve methylation, the ancient loss of the de novo methyltransferase should have resulted in the rapid demise and eventual disappearance of DNA methylation in C. neoformans.

That's because methylation marks can be randomly lost, which means that no matter how exquisitely a maintenance methyltransferase copies existing marks onto new strands of DNA, the accumulated loss of methylation would eventually leave the maintenance enzyme with no template to work from. Though it's conceivable that these loss events might occur at a sluggish pace, experimental observations allowed the researchers to determine that each methylation mark in C. neoformans was likely to disappear from half of the population after just 7500 generations. Even assuming that for some reason C. neoformans might reproduce 100 times more slowly in the wild than in the lab, this would still be the equivalent of only 130 years.
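The decay argument above can be checked with quick arithmetic. In this minimal sketch, the 7500-generation half-life, the 100x wild slowdown, and the 20:1 loss-to-gain ratio come from the article; the ~1.5-hour lab doubling time is an assumed illustrative figure used only to convert generations into years:

```python
# Back-of-envelope model of methylation-mark decay in C. neoformans.

# A mark is lost from half the population after ~7500 generations (per the study),
# which implies a per-generation loss probability p_loss with (1 - p_loss)^7500 = 0.5.
half_life_generations = 7500
p_loss = 1 - 0.5 ** (1 / half_life_generations)   # roughly 9.2e-5 per generation

# Convert generations to years, assuming wild reproduction is 100x slower
# than a hypothetical ~1.5-hour lab doubling time.
gen_time_hours = 1.5 * 100
half_life_years = half_life_generations * gen_time_hours / (24 * 365)

# New marks arise ~20x more slowly than they are lost, so without selection
# the neutral steady-state fraction of methylated sites would be small:
p_gain = p_loss / 20
neutral_fraction = p_gain / (p_gain + p_loss)     # = 1/21, under 5 percent

print(f"per-generation loss rate ~ {p_loss:.2e}")
print(f"half-life ~ {half_life_years:.0f} years")
print(f"neutral steady-state methylation ~ {neutral_fraction:.1%}")
```

Under these assumptions the half-life works out to roughly 130 years, matching the article's figure, and the neutral steady state sits near 5 percent methylation, which is why the observed persistence points to selection rather than drift.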

The rare and random acquisition of new methylation marks can't account for the persistence of methylation in C. neoformans either. The researchers' lab experiments demonstrated that new methylation marks arise by chance at a rate 20 times slower than methylation losses. Over evolutionary timescales, the losses would clearly predominate, and without a de novo enzyme to compensate, methylation would have vanished from C. neoformans around the time when dinosaurs disappeared had it not been for selection pressures favoring the marks.

In fact, when the researchers compared a variety of C. neoformans strains that were known to have diverged from one another nearly 5 million years ago, they found that not only did all the strains still have DNA methylation, but the methylation marks were coating analogous regions of the genome, a finding which suggests that methylation marks at specific genomic sites confer some sort of survival advantage that's being selected for.

"Natural selection is maintaining methylation at much higher levels than would be expected from a neutral process of random gains and losses. This is the epigenetic equivalent of Darwinian evolution," said Madhani.

Asked why evolution would select for these particular marks, Madhani explained that "one of methylation's major functions is genome defense. In this case we think it's for silencing transposons."

Transposons, also known as jumping genes, are stretches of DNA that are able to extract themselves from one part of the genome and insert themselves into another. If a transposon were to insert itself into the middle of a gene needed for survival, that gene may no longer function and the cell would die. Therefore, transposon-silencing methylation provides an obvious survival advantage, which is exactly what's needed to drive evolution.

However, it remains to be seen how common this unappreciated form of natural selection is in other species.

"Previously, there was no evidence of this kind of selection happening over these time scales. This is an entirely novel concept," Madhani said. "But now the big question is 'Is this happening outside of this exceptional circumstance, and if so, how do we find it?'"

Credit: 
University of California - San Francisco

Mortality rate is cut in half by a lung rescue team at Massachusetts General

BOSTON - A specialized Lung Rescue Team established by clinicians at Massachusetts General Hospital (MGH) to evaluate and treat patients with obesity receiving mechanical ventilation due to acute respiratory failure (ARF) has significantly reduced the risk of mortality compared to standard treatment. In a paper published in the journal Critical Care, MGH investigators reported that through individualized treatment of patients in the intensive care unit, the Lung Rescue Team halved the risk of death for up to a year in patients with acute respiratory failure.

"Our extensive research over the past 10 years has shown that standard protocols for treating patients with obesity and acute respiratory failure requiring ventilator support were inadequate to provide oxygenation because excessive tissue increased pressure on the lungs, resulting in their failure to expand," says Lorenzo Berra, MD, investigator in the Department of Anesthesia, Critical Care and Pain Medicine at MGH, and corresponding author of the study. "The Lung Rescue Team carefully assesses the respiratory, pulmonary and cardiac physiology of each patient. And based on those findings, it's able to implement a ventilation titration strategy that counteracts the detrimental effects of increased pleural pressure, resulting in lung re-expansion."

The Lung Rescue Team was created in 2014 as a joint effort between MGH Respiratory Care Services and critical care physicians. The dedicated team consists of a critical care physician and two critical care fellows trained in cardio-pulmonary physiology who are asked to consult on cases involving patients with obesity and ARF within 24 hours of ICU admission. The intervention tools they employ include esophageal manometry to determine the intrapleural pressure inside the chest; trans-thoracic echocardiography to determine cardiac function during mechanical ventilation manipulation; and electrical impedance tomography (EIT) to measure the regional distribution of ventilation and assess the degree of lung collapse and overdistension.

Despite the rapidly growing prevalence of obesity in the U.S., the MGH study is the first to evaluate personalized treatment of ARF in this population. Over the five-year trial, ventilator settings in the ICU for 50 patients with severe obesity (known as class III, with a body mass index, or BMI, greater than 40 kg/m2) were determined by the Lung Rescue Team, while ventilator settings for 70 other patients with class III obesity were based on standard protocols emanating from the ARDSnet trial in 2000, which excluded patients with obesity from its population. Despite this omission, the trial results have been applied to patients of all weight groups. The MGH study found that the ARDSnet protocol-based cohort had a 31 percent death rate at 28 days compared to 16 percent for patients treated by the Lung Rescue Team. At three months, the mortality rate was 41 percent for the ARDSnet standard protocol cohort compared to 22 percent for the Lung Rescue Team group. The mortality rates for the two groups did not change at one year.

The success of the Lung Rescue Team has prompted interest from other institutions and health systems around the country, and a multicenter trial is being considered. "Intervention by the Lung Rescue Team is responsible for a remarkable improvement in respiratory mechanics and oxygenation for individuals with obesity and acute respiratory failure," emphasizes Berra. "By responding to the unique needs of this population, the Lung Rescue Team is helping to save lives."

Credit: 
Massachusetts General Hospital

Quantum physics: Controlled experiment observes self-organized criticality

Writing in Nature, researchers describe the first-time observation of 'self-organized criticality' in a controlled laboratory experiment. Complex systems exist in mathematics and physics, but also occur in nature and society. The concept of self-organized criticality claims that without external input, complex systems in non-equilibrium tend to develop into a critical state far away from a stable equilibrium. That way, they reinforce their own non-equilibrium.

Systems that are at first glance quite different, like the dissemination of information in social networks or the spread of fire or disease, may have similar characteristics. One example is an avalanche-like behavior that reinforces itself instead of coming to a standstill. However, these complex systems are very difficult to study under controlled lab conditions.

For the first time, researchers from the European Centre for Quantum Sciences (CESQ) in Strasbourg, in collaboration with researchers from the universities of Cologne and Heidelberg and the California Institute of Technology, have succeeded in observing the most important features of self-organized criticality in a controlled experiment - including universal avalanche behavior.

The team worked with a gas consisting of potassium atoms, which they prepared at very low temperatures, close to absolute zero. 'In this state, the gas is easier to control, which makes it more suitable for studying the fundamental quantum properties of atoms,' said Professor Shannon Whitlock at the Institute of Supramolecular Science and Technology at the University of Strasbourg.

By stimulating gas atoms with lasers, the team was able to influence the interactions between these atoms. 'When stimulated, the atoms can either generate new secondary stimulations or discharge spontaneously', explained Tobias Wintermantel, a doctoral researcher in Whitlock's team.

When the laser was switched on, many atoms initially escaped very quickly, but the number of atoms remaining in the gas then stabilized at the same value regardless of starting conditions. That number also depended on the intensity of the laser. 'Comparing our lab results with a theoretical model, we saw that these two effects have the same origin,' said the theoretical physicist Professor Sebastian Diehl from the University of Cologne. This was a first indication of the phenomenon of self-organized criticality.

'The experiments showed that some systems develop by themselves up to their critical point of phase transition', Diehl added. This is surprising: in a typical phase transition, like boiling water turning from liquid to gas, there is only one critical point. In boiling water, self-organized criticality would mean that the system would automatically remain in a state of suspension between liquid and gas at the critical transition point - even if the temperature was changed. So far, this concept has never been verified and tested in such a highly controllable physical system.

After the experiment, the team returned to the lab to confirm another striking feature of self-organized criticality: a self-sustaining behavior of atomic decay, similar to that of continuously replenished avalanches. Similar characteristics have already been qualitatively observed in the past in other cases - such as earthquakes or solar eruptions. 'For the first time, we observed the key elements of self-organized criticality quantitatively in the lab. We were able to establish a highly controllable atomic experimental system', said Shannon Whitlock.

In further steps, the scientists now want to investigate how the quantum nature of atoms influences the self-organization mechanism. 'In the long term, this might contribute to creating new quantum technologies or to solving some computation problems that are difficult for normal computers', Diehl concluded.

The phenomenon of self-organized criticality was first developed for avalanches in 1987 by physicists Per Bak, Chao Tang and Kurt Wiesenfeld. Further models by other researchers for evolution, forest fires and earthquakes followed. So far, no general conditions that trigger self-organized criticality have been identified.

Credit: 
University of Cologne

Organized cybercrime -- not your average mafia

image: Organized cybercrime differs from other types of criminal networks, making these groups more challenging to track.

Image: 
Photo by Mika Baumeister on Unsplash

EAST LANSING, Mich. - Does the common stereotype for "organized crime" hold up for organizations of hackers? Research from Michigan State University is one of the first to identify common attributes of cybercrime networks, revealing how these groups function and work together to cause an estimated $445-600 billion of harm globally per year.

"It's not the 'Tony Soprano mob boss type' who's ordering cybercrime against financial institutions," said Thomas Holt, MSU professor of criminal justice and co-author of the study. "Certainly, there are different nation states and groups engaging in cybercrime, but the ones causing the most damage are loose groups of individuals who come together to do one thing, do it really well - and even for a period of time - then disappear."

In cases like New York City's "Five Families," organized crime networks have historic validity, and are documented and traceable. In the online space, however, it's a very difficult trail to follow, Holt said.

"We found that these cybercriminals work in organizations, but those organizations differ depending on the offense," Holt said. "They may have relationships with each other, but they're not multi-year, multi-generation, sophisticated groups that you associate with other organized crime networks."

Holt explained that organized cybercrime networks are made up of hackers coming together because of functional skills that allow them to collaborate to commit the specific crime. So, if someone has specific expertise in password encryption and another can code in a specific programming language, they work together because they can be more effective - and cause greater disruption - together than alone.

"Many of these criminals connected online, at least initially, in order to communicate to find one another," Holt said. "In some of the bigger cases that we had, there's a core group of actors who know one another really well, who then develop an ancillary network of people who they can use for money muling or for converting the information that they obtained into actual cash."

Holt and lead author E. R. Leukfeldt, researcher at the Netherlands Institute for the Study of Crime and Law Enforcement, reviewed 18 cases from the Netherlands in which individuals were prosecuted for cases related to phishing. Data came directly from police files and was gathered through wire and IP taps, undercover policing, observation and house searches.

Beyond accessing credit cards and banking information, Holt and Leukfeldt found that cybercriminals also worked together to create fake documents so they could obtain money from banks under fraudulent identities.

The research, published in the International Journal of Offender Therapy and Comparative Criminology, also debunks the common misconception that sophisticated organized criminal networks - such as the Russian mafia - are the ones behind most cybercrime.

Looking ahead as law enforcement around the world takes steps to crack down on these hackers, Holt hopes his findings will help guide them in the right direction.

"As things move to the dark web and use cryptocurrencies and other avenues for payment, hacker behaviors change and become harder to fully identify, it's going to become harder to understand some of these relational networks," Holt said. "We hope to see better relationships between law enforcement and academia, better information sharing, and sourcing so we can better understand actor behaviors."

Credit: 
Michigan State University

New optical technique captures real-time dynamics of cement setting

image: For real-time characterization of cement setting, the researchers combined diffuse reflection measurements with an optical model.

Image: 
José Ortiz-Lozano

WASHINGTON -- Researchers have developed a nondestructive and noninvasive optical technique that can determine the setting times for various types of cement paste, which is used to bind new and old concrete surfaces. The new method could aid in the development of optimized types of cement with less impact on the environment.

"Our noninvasive optical method characterizes and determines the setting time of cement, which is a very important parameter for the construction industry," said José Ortiz-Lozano, a member of the research team from Universidad Autónoma de Aguascalientes, Tecnológico Nacional de México and Centro de Investigaciones en Óptica, in Mexico. "It can also precisely assess the cement hydration process in real-time. This information is crucial for both the study of physical chemistry and the quantitative characterization of the nanomechanical properties of cement-based materials."

In the Optical Society's (OSA) journal Applied Optics, the researchers describe the new method, which combines laser-based technology with an optical model to calculate the dynamic behavior of the cement paste. The researchers show that their approach can accurately calculate both the initial setting time -- the time available for mixing the cement and placing it in position -- and the final setting time, when the cement paste has fully hardened.

"Our group is trying to enhance the performance of cement-based materials, such as cement pastes, mortars and concrete," said Ortiz-Lozano. "New material characterization methods, such as the one we report here, can be used to improve the behavior and performance of cement by optimizing its constituents. This could lead to new types of cement that use less water and raw materials like limestone and clay, which would make them more environmentally friendly."

Studying cement with light

Although a variety of techniques exist to study the dynamics of setting cement, they come with various drawbacks such as being destructive, invasive or influenced by human factors. The new method uses the optical properties of cement paste to directly calculate the initial and final cement setting times by measuring the diffuse light that reflects off the cement.

As the cement reacts with water and sets, the spaces between the cement particles change, altering the diffuse light reflection. The amount of water present and the protective surface layer at each setting stage also influence the diffuse reflection properties. The researchers combined the diffuse reflection measurements with the Kubelka-Munk model, which is used to describe diffuse reflection from opaque samples.
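The Kubelka-Munk model mentioned above has a standard closed form, the remission function, which maps a diffuse reflectance reading to an absorption-to-scattering ratio. The sketch below is illustrative only: the paper's full optical model is more involved, and the reflectance readings are made-up numbers.

```python
def kubelka_munk(reflectance):
    """Kubelka-Munk remission function F(R) = (1 - R)^2 / (2R).

    Relates the diffuse reflectance R (0 < R <= 1) of an opaque sample
    to the ratio of its absorption (K) and scattering (S) coefficients.
    """
    if not 0 < reflectance <= 1:
        raise ValueError("reflectance must be in (0, 1]")
    return (1 - reflectance) ** 2 / (2 * reflectance)

# Hypothetical reflectance readings taken as a cement paste sets;
# the change in F(R) over time would track the hydration process.
readings = [0.80, 0.72, 0.65, 0.61]
ks_ratios = [kubelka_munk(r) for r in readings]
```

In this toy series the falling reflectance yields a rising K/S ratio, the kind of monotonic trend one could calibrate against conventional setting-time tests.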

"This new optical method was developed using tools, components and materials common among the optical industry," said Ortiz-Lozano. "It would be, therefore, quite simple and economic to implement in cement quality control laboratories. It can be applied to any type of cement once the appropriate calibration is performed with the Kubelka-Munk model."

The researchers applied the new technique to six cement samples and found that the results for all the samples were repeatable and agreed well with measurement techniques commonly used today.

"This laser-based technique gives continuous and accurate assessment of cement hydration process with high repeatability and reproducibility, showing its potential for studying the physical chemistry properties of cement," said Ortiz-Lozano.

Next, the researchers plan to acquire more data using more types of cement, mortars and concretes, as well as additional water-to-cement ratios and cement pastes that contain chemical and/or mineral admixtures. They are also planning to perform the work required to normalize the method as a standard.

Credit: 
Optica

How anti-sprawl policies may be harming water quality

UNIVERSITY PARK, Pa. -- Urban growth boundaries are created by governments in an effort to concentrate urban development -- buildings, roads and the utilities that support them -- within a defined area. These boundaries are intended to decrease negative impacts on people and the environment. However, according to a Penn State researcher, policies that aim to reduce urban sprawl may be increasing water pollution.

"What we were interested in was whether the combination of sprawl -- or lack of sprawl -- along with simultaneous agriculture development in suburban and rural areas could lead to increased water-quality damages," said Douglas Wrenn, a co-funded faculty member in the Institutes of Energy and the Environment.

These water-quality damages were due to pollution from nitrogen, phosphorus and sediment, three pollutants that in high quantities can cause numerous environmental problems in streams, rivers and bays. Under the EPA's Clean Water Act (CWA), total maximum daily loads (TMDLs) govern how much of these pollutants is allowed in a body of water while still meeting water-quality standards.
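A TMDL is conventionally partitioned into wasteload allocations for regulated point sources, load allocations for nonpoint sources such as agriculture, and a margin of safety. A minimal sketch of that budget check follows; the nitrogen numbers are hypothetical.

```python
def tmdl_compliant(tmdl, wasteload_allocs, load_allocs, margin_of_safety):
    """Check a pollutant budget against its TMDL.

    A TMDL is conventionally partitioned as
        TMDL >= sum(WLA) + sum(LA) + MOS
    where WLA covers regulated point sources, LA covers nonpoint
    sources such as agriculture, and MOS is a margin of safety.
    """
    total = sum(wasteload_allocs) + sum(load_allocs) + margin_of_safety
    return total <= tmdl

# Hypothetical nitrogen budget (kg/day) for one stream segment:
point = [120.0, 45.0]     # e.g. wastewater plant, stormwater system
nonpoint = [210.0, 60.0]  # e.g. agriculture, low-density residential
ok = tmdl_compliant(500.0, point, nonpoint, margin_of_safety=50.0)
```

The asymmetry Wrenn describes shows up here: only the point-source column is legally bound, even when the nonpoint column dominates the budget.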

According to Wrenn, an associate professor in Penn State's College of Agricultural Sciences, one of the reasons anti-sprawl policies can lead to more water pollution is because higher-density development has more impervious surfaces, such as concrete. These surfaces don't absorb water but cause runoff. The water then flows into bodies of water, bringing sediment, nitrogen and phosphorus with it.

Second, agriculture creates considerably more water pollution than low-density residential development does. When anti-sprawl policies prevent development outside the boundary that could otherwise replace agricultural land, that potential pollution reduction is lost.

"If you concentrate development inside an urban growth boundary and allow agriculture to continue business as usual," Wrenn said, "then you could actually end with anti-sprawl policies that lead to an increase in overall water quality damages."

Wrenn said it is important for land-use planners in urban areas and especially in urbanizing and urban-fringe counties to understand this.

The EPA's water quality regulation is divided between point source and nonpoint source polluters. Point source polluters include wastewater treatment facilities, big factories, consolidated animal feeding operations and stormwater management systems. Nonpoint sources are essentially everything else. And the CWA does not regulate nonpoint sources, which includes agriculture.

"When it comes to meeting TMDL regulations, point source polluters will always end up being responsible," he said. "They are legally bound to basically do it all."

Wrenn said point source polluters are very interested in getting nonpoint source polluters, specifically agriculture, involved in reducing pollution, because reductions from nonpoint sources are usually far less expensive and oftentimes more achievable.

"What our research has shown is that land-use regulation where land-use planners have some ability to manage where and when land-use development takes place, this gives some indication that land-use policy can be a helper or a hinderance to meeting these TMDL regulations," Wrenn said.

This research was published in the November 2019 issue of Resource and Energy Economics. In addition to Wrenn, the project included H. Allen Klaiber of The Ohio State University and David Newburn of the University of Maryland.

Credit: 
Penn State

A secreted signature of aging cells

image: SASP Atlas: A Comprehensive Resource for Senescence-Associated Secretory Phenotypes. SASP Atlas is a curated and freely available database of the secretomes of senescent cells, including both the soluble and exosome SASP, that can be used to identify SASP components or biomarker candidates for senescence burden, aging and related diseases.

Image: 
Birgit Schilling

Senescent cells undergo an irreversible arrest of cell division and are a hallmark of both the aging process and multiple chronic diseases. Senescent cells - and more importantly the factors they secrete, known collectively as the senescence-associated secretory phenotype (SASP) - are widely accepted as drivers of aging and multiple age-related diseases.

A new study publishing on January 16 in the open-access journal PLOS Biology from Drs. Nathan Basisty, Judith Campisi, Birgit Schilling (Buck Institute for Research on Aging) and colleagues extensively profiles the SASP in human cells. They show that a core secreted protein "signature" of senescent cells is enriched with aging biomarkers found in human plasma.

The study utilizes a comprehensive and unbiased technique called mass spectrometry combined with bioinformatics to develop secreted protein signatures of senescent cells. The researchers' results show that the SASP is about ten-fold more complex than is currently appreciated, allowing them to propose new signatures of senescent cells - both 'core' signatures shared across all senescent cells and signatures that identify specific subsets of senescent cells.
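The distinction between a 'core' signature shared across all senescent cells and subset-specific signatures can be sketched as simple set operations. The protein lists below are hypothetical placeholders; the actual SASP Atlas analysis works from quantitative mass-spectrometry data, not presence/absence lists.

```python
# Hypothetical secretome hit lists per senescence inducer (illustrative only).
secretomes = {
    "irradiation":  {"GDF15", "STC1", "MMP1", "SERPINE1", "CXCL1"},
    "oncogene_RAS": {"GDF15", "STC1", "MMP1", "IL6", "CXCL1"},
    "drug_induced": {"GDF15", "STC1", "MMP1", "TIMP1"},
}

# 'Core' signature: proteins secreted by every senescent cell type.
core = set.intersection(*secretomes.values())

# Subset-specific signatures: proteins unique to one inducer.
specific = {
    inducer: proteins
    - set.union(*(p for k, p in secretomes.items() if k != inducer))
    for inducer, proteins in secretomes.items()
}
```

The core set is a candidate pan-senescence biomarker panel, while the subset-specific sets would distinguish how the senescence arose.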

Mouse studies have demonstrated that the targeted removal of senescent cells has beneficial effects on cardiac, vascular, metabolic, neurological, renal, pulmonary and musculoskeletal functions. Therefore, the selective elimination of senescent cells or inhibition of the SASP that they secrete are promising therapeutic approaches to treat age-related diseases in humans. Development of drugs that eliminate senescent cells, known as senolytics, or drugs that inhibit the SASP, known as senomorphics, requires molecular markers to assess the abundance of senescent cells. However, there are currently no simple reliable secreted biomarkers to measure the senescent cell burden in humans.

"We hope that these biomarker signatures will help us measure the burden of senescent cells in human biofluids, such as plasma, to aid the translation of senescence-targeted therapies into the clinic," says Dr Basisty. "We believe that the proteins secreted by senescent cells will also be important biomarkers for aging, neurodegenerative diseases, and other diseases marked by the presence of senescent cells."

Along with this study, the researchers launched the SASP Atlas, a curated database of proteins secreted by senescent cells. This resource can be used by others in the research community to identify proteins originating from senescent cells in their own research.

Credit: 
PLOS

Study finds billions of quantum entangled electrons in 'strange metal'

image: Junichiro Kono (left) and Qimiao Si in Kono's Rice University laboratory in December 2019.

Image: 
Photo by Jeff Fitlow/Rice University

HOUSTON -- (Jan. 16, 2020) -- In a new study, U.S. and Austrian physicists have observed quantum entanglement among "billions of billions" of flowing electrons in a quantum critical material.

The research, which appears this week in Science, examined the electronic and magnetic behavior of a "strange metal" compound of ytterbium, rhodium and silicon as it both neared and passed through a critical transition at the boundary between two well-studied quantum phases.

The study at Rice University and Vienna University of Technology (TU Wien) provides the strongest direct evidence to date of entanglement's role in bringing about quantum criticality, said study co-author Qimiao Si of Rice.

"When we think about quantum entanglement, we think about small things," Si said. "We don't associate it with macroscopic objects. But at a quantum critical point, things are so collective that we have this chance to see the effects of entanglement, even in a metallic film that contains billions of billions of quantum mechanical objects."

Si, a theoretical physicist and director of the Rice Center for Quantum Materials (RCQM), has spent more than two decades studying what happens when materials like strange metals and high-temperature superconductors change quantum phases. Better understanding such materials could open the door to new technologies in computing, communications and more.

The international team overcame several challenges to get the result. TU Wien researchers developed a highly complex materials synthesis technique to produce ultrapure films containing one part ytterbium for every two parts rhodium and silicon (YbRh2Si2). At absolute zero temperature, the material undergoes a transition from one quantum phase that forms a magnetic order to another that does not.

At Rice, study co-lead author Xinwei Li, then a graduate student in the lab of co-author and RCQM member Junichiro Kono, performed terahertz spectroscopy experiments on the films at temperatures as low as 1.4 Kelvin. The terahertz measurements revealed the optical conductivity of the YbRh2Si2 films as they were cooled to a quantum critical point that marked the transition from one quantum phase to another.

"With strange metals, there is an unusual connection between electrical resistance and temperature," said corresponding author Silke Bühler-Paschen of TU Wien's Institute for Solid State Physics. "In contrast to simple metals such as copper or gold, this does not seem to be due to the thermal movement of the atoms, but to quantum fluctuations at the absolute zero temperature."

To measure optical conductivity, Li shined coherent electromagnetic radiation in the terahertz frequency range on top of the films and analyzed the amount of terahertz rays that passed through as a function of frequency and temperature. The experiments revealed "frequency over temperature scaling," a telltale sign of quantum criticality, the authors said.
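"Frequency over temperature scaling" means that data taken at different temperatures collapse onto a single curve when frequency is divided by temperature. The toy sketch below illustrates the collapse; the functional form is invented for illustration and is not the measured YbRh2Si2 response.

```python
# Toy model obeying omega/T scaling: the conductivity depends on
# frequency and temperature only through their ratio x = omega / T.
def toy_conductivity(omega, temperature):
    x = omega / temperature
    return 1.0 / (1.0 + x * x)

temps = [1.4, 3.0, 6.0]          # kelvin
scaled_points = [0.5, 1.0, 2.0]  # positions on the collapsed omega/T axis

# For each omega/T value, every temperature gives the same conductivity,
# which is exactly the collapse experimenters look for as evidence of
# quantum criticality.
collapse = [
    {T: toy_conductivity(x * T, T) for T in temps}
    for x in scaled_points
]
```

In a real analysis the test is the converse: if measured curves at many temperatures collapse when plotted against omega/T, the response has no intrinsic energy scale other than temperature itself.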

Kono, an engineer and physicist in Rice's Brown School of Engineering, said the measurements were painstaking for Li, who's now a postdoctoral researcher at the California Institute of Technology. For example, only a fraction of the terahertz radiation shined onto the sample passed through to the detector, and the important measurement was how much that fraction rose or fell at different temperatures.

"Less than 0.1% of the total terahertz radiation was transmitted, and the signal, which was the variation of conductivity as a function of frequency, was a further few percent of that," Kono said. "It took many hours to take reliable data at each temperature to average over many, many measurements, and it was necessary to take data at many, many temperatures to prove the existence of scaling.

"Xinwei was very, very patient and persistent," Kono said. "In addition, he carefully processed the huge amounts of data he collected to unfold the scaling law, which was really fascinating to me."

Making the films was even more challenging. To grow them thin enough to pass terahertz rays, the TU Wien team developed a unique molecular beam epitaxy system and an elaborate growth procedure. Ytterbium, rhodium and silicon were simultaneously evaporated from separate sources in the exact 1-2-2 ratio. Because of the high energy needed to evaporate rhodium and silicon, the system required a custom-made ultrahigh vacuum chamber with two electron-beam evaporators.

"Our wild card was finding the perfect substrate: germanium," said TU Wien graduate student Lukas Prochaska, a study co-lead author. The germanium was transparent to terahertz, and had "certain atomic distances (that were) practically identical to those between the ytterbium atoms in YbRh2Si2, which explains the excellent quality of the films," he said.

Si recalled discussing the experiment with Bühler-Paschen more than 15 years ago, when they were exploring ways to test a new class of quantum critical point. A hallmark of the quantum critical point that they were advancing with co-workers is that the quantum entanglement between spins and charges is itself critical.

"At a magnetic quantum critical point, conventional wisdom dictates that only the spin sector will be critical," he said. "But if the charge and spin sectors are quantum-entangled, the charge sector will end up being critical as well."

At the time, the technology was not available to test the hypothesis, but by 2016, the situation had changed. TU Wien could grow the films, Rice had recently installed a powerful microscope that could scan them for defects, and Kono had the terahertz spectrometer to measure optical conductivity. During Bühler-Paschen's sabbatical visit to Rice that year, she, Si, Kono and Rice microscopy expert Emilie Ringe received support to pursue the project via an Interdisciplinary Excellence Award from Rice's newly established Creative Ventures program.

"Conceptually, it was really a dream experiment," Si said. "Probe the charge sector at the magnetic quantum critical point to see whether it's critical, whether it has dynamical scaling. If you don't see anything that's collective, that's scaling, the critical point has to belong to some textbook type of description. But, if you see something singular, which in fact we did, then it is very direct and new evidence for the quantum entanglement nature of quantum criticality."

Si said all the efforts that went into the study were well worth it, because the findings have far-reaching implications.

"Quantum entanglement is the basis for storage and processing of quantum information," Si said. "At the same time, quantum criticality is believed to drive high-temperature superconductivity. So our findings suggest that the same underlying physics -- quantum criticality -- can lead to a platform for both quantum information and high-temperature superconductivity. When one contemplates that possibility, one cannot help but marvel at the wonder of nature."

Si is the Harry C. and Olga K. Wiess Professor in Rice's Department of Physics and Astronomy. Kono is a professor in Rice's departments of Electrical and Computer Engineering, Physics and Astronomy, and Materials Science and NanoEngineering and the director of Rice's Applied Physics Graduate Program. Ringe is now at the University of Cambridge.

Additional co-authors include Maxwell Andrews, Maximilian Bonta, Werner Schrenk, Andreas Limbeck and Gottfried Strasser, all of the TU Wien; Hermann Detz, formerly of TU Wien and currently at Brno University; Elisabeth Bianco, formerly of Rice and currently at Cornell University; Sadegh Yazdi, formerly of Rice and currently at the University of Colorado Boulder; and co-lead author Donald MacFarland, formerly of TU Wien and currently at the University at Buffalo.

The research was supported by the European Research Council (ERC-227378), the Army Research Office (W911NF-14-1-0496, W911NF-17-1-0259, W911NF-14-1-0525), the Austrian Science Fund (FWF-W1243, P29279-N27, P29296-N27), the European Union's Horizon 2020 program (824109-EMP), the National Science Foundation (DMR-1720595, DMR-1920740, PHY-1607611), the Robert A. Welch Foundation (C-1411), Los Alamos National Laboratory and Rice University.

RCQM leverages global partnerships and the strengths of more than 20 Rice research groups to address questions related to quantum materials. RCQM is supported by Rice's offices of the provost and the vice provost for research, the Wiess School of Natural Sciences, the Brown School of Engineering, the Smalley-Curl Institute and the departments of Physics and Astronomy, Electrical and Computer Engineering, and Materials Science and NanoEngineering.

Credit: 
Rice University

Be wary of online probiotic health-benefit claims

The public should be wary of searching for probiotic information online as most webpages originate from unreliable sources and the health-benefit claims are often not supported by robust scientific evidence.

A new study, published in Frontiers in Medicine, cautions that while Google is adept at sorting the most reliable websites to the top of the list, the majority of websites providing information on probiotics are from commercial sources.

"Most webpages with information on probiotics are from commercial sources or news outlets but these provide the least complete information, in terms of not discussing potential side effects or regulatory issues," reports author Professor Pietro Ghezzi, from the Brighton and Sussex Medical School, UK.

"We also find many websites allude to benefits of probiotics in diseases for which there is not much high-level scientific evidence, other than in mice."

Probiotics are live organisms that, if research holds its promise, could be beneficial to health. The market for probiotics is large in the US but smaller in the EU, likely due to stricter regulation of health claims. Nevertheless, the market for probiotics continues to expand with the globalization of online sales.

Should we believe the hype?

Concerned that the public has unrealistic expectations about the beneficial effects of probiotics (bolstered by online claims and hype in the news), Ghezzi and his colleagues decided to assess the information that the public were exposed to when searching online.

"We assessed the first 150 webpages brought up by a Google search for "probiotics", recorded where they originated from and the diseases they mentioned. The scientific evidence for health benefits of probiotics against these diseases were then examined for scientific rigor," explains co-author Michel Goldman, a Professor at the Institute for Interdisciplinary Innovation in healthcare, Université libre de Bruxelles, Belgium.

The researchers used the Cochrane library - a database of clinical trials and meta-analyses of evidence-based medicine - to assess the strength of scientific evidence found online.

Goldman adds, "We also looked at how Google ranked these websites, as often the public will not go past the first ten results - these will therefore have a higher visibility and impact."
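The kind of tally the researchers describe, classifying each ranked result by source type and comparing the high-visibility top ten against the full list, can be sketched as follows. The counts here are invented for illustration, not the study's actual data.

```python
from collections import Counter

# Hypothetical classification of 150 ranked search results by source type;
# the study hand-coded real Google results in the same spirit.
ranked_sources = (
    ["health portal"] * 4 + ["commercial"] * 3 + ["news"] * 3       # ranks 1-10
    + ["commercial"] * 80 + ["news"] * 40 + ["health portal"] * 20  # ranks 11-150
)

top10 = Counter(ranked_sources[:10])
overall = Counter(ranked_sources)

# Share of commercial pages in the top ten vs. the whole result list.
top10_commercial = top10["commercial"] / 10
overall_commercial = overall["commercial"] / len(ranked_sources)
```

In this made-up example the commercial share is much lower in the top ten than overall, mirroring the study's finding that Google's ranking pushes health portals up even though commercial pages dominate the full list.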

Beware of unreliable sources

News outlets and commercial sources made up the majority of the 150 webpages, and the analysis showed these were the least reliable, often mentioning neither potential side effects for immunocompromised individuals nor any regulatory issues. In addition, findings from experiments on mice were used to make claims about probiotic benefits against disease in humans.

But it's not all bad news. Ghezzi explains that Google has developed very stringent criteria for ranking health-related websites; however, we should always question where the information originates.

"Google prioritizes webpages containing more complete and scientifically robust information about probiotics, particularly health portals, and these are given a higher ranking than commercial websites. However, the fact that there is such a large amount of commercially-oriented information is problematic for consumers who are searching for honest answers."

Credit: 
Frontiers

American cancer survivors face substantial financial hardship and financial sacrifices

Bottom Line: American cancer survivors, particularly those 64 years or younger, faced substantial medical financial hardship and sacrifices in spending, savings, or living situation, according to data from a survey.

Journal in Which the Study was Published: Cancer Epidemiology, Biomarkers & Prevention, a journal of the American Association for Cancer Research

Author: Xuesong Han, PhD, senior principal scientist in Health Services Research at the American Cancer Society

Background: "As the number of cancer survivors grows, the costs of cancer treatments rise, and patient cost-sharing increases, there is a growing need for financial intervention at multiple levels to help cancer survivors minimize their risk of financial hardship," said Han. "We hope our findings will inform the development of future health policies and interventions in care delivery."

In the United States, the number of cancer survivors increased by 1.4 million people in the past three years, reaching more than 16.9 million as of January 1, 2019. The economic burden of cancer is significant for American cancer survivors: Previous studies have reported that as many as two-thirds of cancer survivors face medical financial hardship. However, few studies have examined the intensity of financial hardship across multiple domains, or sacrifices made as a result of cancer treatment and its longer-term effects.

How the Study Was Conducted: Han and colleagues identified cancer survivors from the 2016 Medical Expenditure Panel Survey (MEPS), a nationally representative survey that collected information on health insurance coverage, health care utilization and expenditures, and health conditions.

Participants detailed the effects of their cancer, cancer treatment, and how their cancer experience has affected their finances, health insurance coverage, and employment status. Financial hardships included problems paying medical bills, financial distress, or delaying or forgoing medical care due to cost concerns. Financial sacrifices due to cancer included changes in spending and use of savings as a result of cancer treatment and its lasting effects.

Because people over the age of 65 years are generally eligible for Medicare insurance coverage, Han and colleagues examined results for adult survivors under and over the age of 65.

Results: Of the 401 cancer survivors aged 18 to 64 years, 54 percent reported they had faced medical financial hardship as a result of cancer diagnosis and treatment, and 54 percent said they had made financial sacrifices in spending, savings, or their living situation. Nearly a quarter reported trouble paying medical bills, needing to borrow money, or filing for bankruptcy due to cancer diagnosis and treatment. More than 40 percent worried about finances, and almost 30 percent reported forgoing or delaying care because of cost concerns.

Of the 562 cancer survivors aged 65 years or older, medical financial hardship and sacrifices were less prevalent; 42 percent reported ever facing medical financial hardship, and 38 percent said they had made financial sacrifices.

Factors that were significantly associated with more intense financial hardship included low income and educational attainment, minority racial/ethnic status, comorbidity, lack of private insurance coverage, extended employment change, and recent cancer treatment.

Financial hardship has been linked to higher symptom burden and worse quality of life, and in extreme cases, such as bankruptcy, it is associated with an increased risk of death, Han explained.

Author's Comments: "Overall, health insurance coverage is critically important for cancer patients and survivors," said Han. "Even those who had private insurance coverage reported financial hardship, suggesting that the types of coverage and extent of patient cost-sharing are important too."

"Provisions of the Affordable Care Act that have expanded insurance coverage options, such as the Medicaid expansion, have been associated with reductions in financial hardship among cancer survivors in other studies," Han explained. "Employers can play a large role in mitigating hardship through flexible workplace accommodations such as availability of paid and unpaid sick leave, and supportive programs for both survivors and family members."

Study Limitations: The main limitations of the study were the potential for recall bias in the self-reported surveys and lack of data on clinical features of cancer stage and treatment. The researchers defined measures of insurance coverage, family income, and number of comorbidities as current estimates at the time of the survey; however, they defined measures of financial hardship and sacrifices as ever occurring.

Funding & Disclosures: The authors declare no conflict of interest.

Credit: 
American Association for Cancer Research

It's 2020: Time to teach teens 'safe' sexting

image: Sameer Hinduja, Ph.D., co-author and a professor in the School of Criminology and Criminal Justice within FAU's College for Design and Social Inquiry, and co-director of the Cyberbullying Research Center.

Image: 
Florida Atlantic University

Preaching sexual abstinence to youth was popular for a number of decades, but research repeatedly found that such educational messages fell short in their intended goals. Simply telling youth not to have sex failed to delay the initiation of sex, prevent pregnancies, or stop the spread of sexually transmitted diseases. Since the advent of photo- and video-sharing via phones, children have received similar fear-based messages to discourage sexting - the sending or receiving of sexually explicit or sexually suggestive images (photos or video), usually via mobile devices. Unfortunately, messages of sexting abstinence don't seem to be reducing the prevalence of adolescents sharing nudes.

Consequently, in a new paper published in the Journal of Adolescent Health, researchers from Florida Atlantic University and the University of Wisconsin-Eau Claire, say that it is time to teach youth "safe" sexting.

"The truth is that adolescents have always experimented with their sexuality, and some are now doing so via sexting," said Sameer Hinduja, Ph.D., co-author and a professor in the School of Criminology and Criminal Justice within FAU's College for Design and Social Inquiry, and co-director of the Cyberbullying Research Center. "We need to move beyond abstinence-only, fear-based sexting education or, worse yet, no education at all. Instead, we should give students the knowledge they need to make informed decisions when being intimate with others, something even they acknowledge is needed."

Hinduja and co-author Justin Patchin, Ph.D., a professor of criminal justice at the University of Wisconsin-Eau Claire and co-director of the Cyberbullying Research Center, acknowledge that although participating in sexting is never 100 percent "safe" (just like engaging in sex), empowering youth with strategies to reduce possible resultant harm seems prudent.

Hinduja and Patchin collected (unpublished) data in April 2019 from a national sample of nearly 5,000 youth between the ages of 12 and 17, and found that 14 percent had sent and 23 percent had received sexually explicit images, up from the 13 percent (sending) and 22 percent (receiving) they found in 2016.
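Those 2016-to-2019 changes (13 to 14 percent for sending, 22 to 23 percent for receiving) are rises of one percentage point each, which is worth distinguishing from the relative change. A quick check using the figures above:

```python
def pct_point_and_relative(old_pct, new_pct):
    """Return (percentage-point change, relative change) between two rates."""
    points = new_pct - old_pct
    relative = points / old_pct  # fractional change relative to the old rate
    return points, relative

# Sending: 13% (2016) -> 14% (2019); receiving: 22% -> 23%.
send_pts, send_rel = pct_point_and_relative(13, 14)
recv_pts, recv_rel = pct_point_and_relative(22, 23)
```

The same one-point rise is a roughly 7.7 percent relative increase in sending but only about 4.5 percent in receiving, because the baselines differ.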

The authors do want youth to understand that those who sext open themselves up to possible significant and long-term consequences, such as humiliation, extortion, victimization, school sanction, reputational damage, and even criminal charges. But they also want youth who are going to do it anyway to exercise wisdom and discretion to prevent avoidable fallout.

"This is not about encouraging sexting behaviors, any more than sex education is about encouraging teens to have sex," said Hinduja. "It simply recognizes the reality that young people are sexually curious, and some will experiment with various behaviors with or without informed guidance, and sexting is no exception."

Hinduja and Patchin distill their suggestions into 10 specific, actionable messages that adults can share with adolescents, in formal or informal contexts, after weighing their developmental and sexual maturity.

1. If someone sends you a sext, do not send it to -- or show -- anyone else. This could be considered nonconsensual sharing of pornography, and there are laws prohibiting it that carry serious penalties (especially if the image portrays a minor).

2. If you send someone a sext, make sure you know and fully trust them. "Catfishing"-- where someone sets up a fictitious profile or pretends to be someone else to lure you into a fraudulent romantic relationship (and, often, to send sexts) -- happens more often than you think. You can, of course, never really know if they will share it with others or post it online, but do not send photos or video to people you do not know well.

3. Do not send images to someone you are not certain wants to see them (make sure you receive textual consent that they are interested). Sending unsolicited explicit images to others could also lead to criminal charges.

4. Consider boudoir pictures. Boudoir is a genre of photography that involves suggestion rather than explicitness. Instead of nudes, send photos that strategically cover the most private of private parts. They can still be intimate and flirty but lack the obvious nudity that could get you in trouble.

5. Never include your face. This keeps images from being immediately identifiable as yours, and it also matters because certain social media sites have sophisticated facial recognition algorithms that can automatically tag you in pictures you would want to stay private.

6. Make sure the images do not include tattoos, birthmarks, scars, or other features that could connect them to you. In addition, remove all jewelry before sharing. Also, consider your surroundings. Bedroom pictures could, for example, include wall art or furniture that others recognize.

7. Turn off your device's location services for all of your social media apps, make sure your photos are not automatically tagged with your location or username, and delete any metadata digitally attached to the image.

8. If you are being pressured or threatened into sending nude photos, collect evidence when possible. Having digital evidence (such as screenshots of text messages) of any maliciousness or threats of sextortion will help law enforcement in their investigation and prosecution (if necessary) and social media sites in their flagging and deletion of accounts.

9. Use apps that provide the capability for sent images to be automatically and securely deleted after a certain amount of time. You can never guarantee that a screenshot was not taken, nor that another device was not used to capture the image without you being notified, but using specialized apps can decrease the chance of distribution.

10. Be sure to promptly delete any explicit photos or videos from your device. This applies to images you take of yourself and images received from someone else. Having images stored on your device increases the likelihood that someone -- a parent, the police, a hacker -- will find them. Possessing nude images of minors may have criminal implications. In 2015, for example, a North Carolina teen was charged with possessing child pornography, although the image on his phone was of himself.

Credit: 
Florida Atlantic University