
More homes built near wild lands leading to greater wildfire risk

image: Areas of change in the wildland-urban interface (WUI).

Image: 
UW-Madison

MADISON - More than 10 million acres burned across the country during the 2017 U.S. wildfire season at a cost of more than $2 billion -- the largest bill ever.

And while many factors affect the risk for wildfires, new research out of the University of Wisconsin-Madison shows that a flurry of homebuilding near wild areas since 1990 has greatly increased the number of homes at risk from wildfires while increasing the costs associated with fighting those fires in increasingly dense developments.

The so-called wildland-urban interface, or WUI, where homes and wild vegetation meet, increased rapidly from 1990 to 2010, adding a collective area larger than the state of Washington across the contiguous United States. While the regrowth of vegetation into previously developed or agricultural land accounted for a portion of this increase, 97 percent of the growth in the WUI was attributable to homebuilding in areas that were once sparsely settled. This is the first study to establish the primary cause behind increases in the WUI in the United States.

The increase in the WUI also affects the spread of invasive species and pollution, as well as the transmission of disease between pets and wildlife, scientists say. Denser housing is also associated with more human-caused wildfire ignitions.

The study is published March 12 in the Proceedings of the National Academy of Sciences. Volker Radeloff, a professor of forest and wildlife ecology at UW-Madison, led the work, along with colleagues in the U.S. Department of Agriculture, the U.S. Geological Survey, the University of California, Berkeley, the University of Haifa-Oranim in Israel and the Conservation Biology Institute in Oregon.

"We've seen that many wildfires are caused by people living in close proximity to forests and wildlands. And that when these fires are spreading, they are much harder to fight when people are living there, because lives are at risk, because properties have to be protected," says Radeloff, who wanted to use the current study to understand how that risk developed over time.

Overlaying information from the USGS' National Land Cover Database on modified Census data, Radeloff's group tracked the change in the WUI over the 20-year study period. They found that 9.5 percent, or 190 million acres, of the continental United States fell into the WUI in 2010, up from 7.2 percent 20 years prior. The number of houses within the WUI increased from 31 million to more than 43 million in the same time period.
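The reported percentages, areas and housing counts are mutually consistent. A minimal back-of-the-envelope sketch in Python, assuming a contiguous-U.S. land area of roughly 1.9 billion acres (an outside approximation, not a figure from the study):

```python
# Back-of-the-envelope check of the WUI figures reported above.
CONUS_ACRES = 1.9e9  # approximate land area of the contiguous U.S. (assumption)

wui_1990, wui_2010 = 0.072, 0.095  # WUI share of the contiguous U.S.
growth = (wui_2010 - wui_1990) * CONUS_ACRES
print(f"WUI growth, 1990-2010: ~{growth / 1e6:.0f} million acres")
# ~44 million acres; Washington State covers roughly 43 million acres,
# consistent with the "larger than the state of Washington" comparison.

homes_1990, homes_2010 = 31e6, 43e6
print(f"Homes in the WUI grew ~{homes_2010 / homes_1990 - 1:.0%}")  # ~39%
```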

Much of the growth in the WUI occurred east of the Mississippi River in the densely populated eastern region of the country, as well as in Southern states like Oklahoma and Texas. Substantial increases in the WUI also occurred out West, where wildfires have attracted widespread attention. But Radeloff notes that the West's larger share of public lands unavailable for development, its mountainous terrain and its more limited access to water cap the potential growth of the WUI in that region. Wildfires occur across the country, but not all receive national media attention.

Combined with the increase in conditions favorable for wildfires to spread linked to climate change, the increase in the WUI -- reflecting the growing number of homes susceptible to burning and their role in fire ignition -- is expected to lead to more severe wildfire seasons. The researchers recommend a suite of land management practices to limit the negative effects of expanding WUI, such as vegetation management, use of appropriate building materials and zoning regulations informed by wildfire risk.

"So there's a lot that can be done. And I think what our data shows on the development side, and others have shown on the climate change side, we better start doing it or otherwise we will have news like what we had last fall again and again," says Radaloff, referencing the devastating wild fires that swept through densely populated regions of northern California.

Admitting that it might seem counterintuitive, Radeloff says he is glad that people want to live where development meets wild lands.

"I think it reflects that people love nature. That's a very good thing. They're making the biggest economic decision of their life and it reflects an affinity to be close to wild places," he says. "That is great -- it's just that when millions of people do it at the same time that the effects are what I don't think anyone wants to see. And dealing with that is what I hope our work will help do."

Credit: 
University of Wisconsin-Madison

Beta blocker shows mixed results in protecting against chemo-induced heart damage

ORLANDO (March 11, 2018) -- After six months of follow-up, women newly diagnosed with breast cancer who were given the beta blocker carvedilol to prevent heart issues while undergoing chemotherapy showed no difference in declines in heart function compared with those taking a placebo. Patients who took carvedilol, however, were significantly less likely to have an elevated marker in the blood that signals injury to the heart, according to a study being presented at the American College of Cardiology's 67th Annual Scientific Session.

The study enrolled 200 patients diagnosed with breast cancer who had normal left ventricular ejection fraction (meaning their heart's pumping ability was normal) and who, as part of their cancer treatment, were scheduled to receive anthracycline (ANT). Patients were randomly assigned to receive either carvedilol (median dose of 18.4 mg/day) or placebo when starting ANT chemotherapy (240 mg/m2) until completing it. ANT is a type of chemotherapy that at certain cumulative doses has been shown to cause heart damage by weakening the heart muscle. Carvedilol works by slowing the heart rate, opening blood vessels and improving blood flow, which also lowers blood pressure.

"This is the largest randomized clinical trial of a beta blocker to prevent the toxic effects of contemporary doses of anthracycline [ANT] on the heart," said Mônica Samuel Avila, MD, assistant doctor in the Heart Failure and Heart Transplant Department in Heart Institute, Clinical Hospital of Medical School of São Paulo, and the study's lead author. "This is only short-term follow up, but we found that carvedilol seems to prevent myocardial injury, but it did not have an impact on left ventricular ejection fraction."

Avila also said the study findings further reinforce the need for cardiologists and oncologists to work together to define the best treatment strategies for individual cancer patients that minimize other negative effects, especially to the heart.

Neither the patients nor those administering the treatment knew which patients received carvedilol versus the placebo. Researchers assessed cardiotoxicity by measuring heart function with periodic echocardiograms and blood tests to check for a protein called high-sensitivity troponin T, which is released into the bloodstream when injury to the heart occurs. These measures were tracked at baseline and after each cycle of ANT chemotherapy (three, six, nine and 12 weeks) and after the end of all chemotherapy (around 24 weeks). Patients in the carvedilol and placebo groups were an average of 51 and 53 years of age, respectively. All had finished chemotherapy treatment.

For the study's primary endpoint, prevention of greater than a 10 percent reduction in left ventricular ejection fraction at six months, researchers found no significant difference between the carvedilol and placebo groups, 14.5 vs. 13.5 percent, respectively. Ejection fraction is a measure of how much blood is pumped out of the heart with each contraction; anything over 55 percent is considered normal. Overall, declines in ejection fraction were minimal in both groups and were not statistically significant (from baseline to 24 weeks the average drop in ejection fraction was 65.2 to 63.9 percent in the placebo group and 64.8 to 63.9 percent in the carvedilol group).
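To make the endpoint concrete, here is a worked version of that arithmetic, assuming the 10 percent threshold is read as a reduction relative to each patient's baseline ejection fraction:

```python
# Worked arithmetic for the ejection-fraction averages reported above.
def percent_drop(baseline: float, followup: float) -> float:
    """Relative reduction in ejection fraction, in percent."""
    return (baseline - followup) / baseline * 100

print(f"placebo:    {percent_drop(65.2, 63.9):.1f}% average drop")  # ~2.0%
print(f"carvedilol: {percent_drop(64.8, 63.9):.1f}% average drop")  # ~1.4%
# Both group averages fall far below the >10 percent drop defining the
# endpoint, which instead counts the minority of patients who crossed it.
```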

Secondary outcomes included carvedilol's effects on two biomarkers, troponin I and B-type natriuretic peptide (BNP), and on diastolic dysfunction. Of the 65 patients (33.8 percent) with elevated plasma levels of troponin I suggestive of heart damage, significantly more were in the placebo group--41.6 percent vs. 26 percent. There was no difference in levels of BNP, a hormone produced by the heart that goes up when heart failure develops or gets worse. While not significant, patients in the placebo group tended to have enlarged hearts compared to patients in the carvedilol group. Avila said this could indicate that carvedilol may help prevent remodeling, or changes in the structure of the heart.

According to Avila, the reason why patients taking carvedilol had lower troponin levels, but no differences in changes in ejection fraction, is difficult to explain. She noted that the six-month follow-up may not have been sufficient to see changes in heart function. In addition, any heart damage may not have been severe enough to lead to heart failure. There was only one case of overt heart failure with reduced ejection fraction, which was in the placebo group.

"Previous studies have shown that higher troponin levels can predict cardiovascular events and so we could imagine that carvedilol may be able to prevent these events, but we did not see this finding in our study," she said, adding that she and her team will continue to follow these patients and report data at one and two years.

In the meantime, researchers did find the prevalence of cardiotoxicity in this study to be lower than in previously reported data from other studies--a finding that could be attributed to lower doses of anthracyclines and lower overall cardiovascular risk in the study population. It could be that the beneficial effect of carvedilol might be more pronounced among higher-risk patients, Avila said. In recent years, lower doses of anthracyclines have been used to help reduce the risk of heart damage from treatment.

Credit: 
American College of Cardiology

PET myocardial perfusion imaging more effective than SPECT scans in detecting coronary disease

image: Cardiac positron emission tomography (PET) imaging detected severe obstructive coronary artery disease more often than single photon emission computed tomography (SPECT), according to researchers at the Intermountain Medical Center Heart Institute in Salt Lake City.

Image: 
Intermountain Medical Center Heart Institute

Patients who received cardiac positron emission tomography (PET) imaging instead of a single photon emission computed tomography (SPECT) scan experienced a significant increase in the detection of severe obstructive coronary artery disease, according to researchers at the Intermountain Medical Center Heart Institute in Salt Lake City.

Both PET and SPECT scans are nuclear imaging techniques that provide metabolic and functional information about the heart. PET scans provide better image resolution and quality, but have not yet seen the widespread adoption that SPECT has. The study is one of the largest of its kind involving PET patients.

For the study, researchers examined Intermountain Healthcare's Enterprise Data Warehouse, one of the nation's largest repositories of clinical data, and identified 3,394 patients who underwent pharmacologic SPECT from 2011-2012 and 7,478 patients who underwent PET in 2014-2015 at Intermountain Medical Center. The average age of the patients was 65 years, and 47 percent of patients were female.

"The benefit of the study is that it helps us better identify a patient's risk for adverse events affecting the heart and their need for further care," said David Min, MD, a cardiologist specializing in cardiac imaging at the Intermountain Medical Center Heart Institute, and lead author of the study.

Researchers looked at pharmacologic SPECT so that the comparison with PET scans would be more accurate. Both scans involve injecting a small dose of a radioactive chemical, called a radiotracer, into a vein in the arm. The tracer travels through the body and is absorbed by the organs being examined.

Key findings of the study:

Using PET scans instead of SPECT scans resulted in increased rates of diagnosis of severe obstructive coronary artery disease from 70 percent to 79 percent.

PET scans were associated with a lower incidence of invasive catheterization without identification of severe coronary artery disease (43% vs 55%).

Overall, PET more successfully identified patients with severe obstructive CAD and need for revascularization; compared to SPECT, PET scans increased true positives and reduced false positives for severe coronary artery disease.
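To illustrate what the catheterization-yield numbers mean in practice, here is a small sketch; the no-severe-CAD rates come from the findings above, while the count of 1,000 catheterizations per era is hypothetical, chosen only to make the arithmetic concrete:

```python
# Illustrative yield calculation; rates as reported, counts hypothetical.
def cath_summary(era: str, n_caths: int, no_severe_cad_rate: float) -> None:
    no_cad = round(n_caths * no_severe_cad_rate)
    print(f"{era}: {n_caths - no_cad} of {n_caths} catheterizations "
          f"confirmed severe CAD; {no_cad} found none")

cath_summary("SPECT era", 1000, 0.55)  # 450 confirmed; 550 found none
cath_summary("PET era",   1000, 0.43)  # 570 confirmed; 430 found none
```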

Results of the study will be presented at the American College of Cardiology Scientific Session in Orlando on March 10 at 10 a.m. ET. More than 13,000 cardiologists and cardiovascular clinicians from throughout the world are attending the four-day international meeting.

"Since Intermountain Medical Center made the switch from SPECT to PET in 2013, we thought it would be valuable to look at the differences in clinical outcomes since then," said Kirk Knowlton, MD, director of cardiovascular research at the Intermountain Medical Center Heart Institute. "In order to understand the differences between the two-year period of SPECT utilization immediately before the PET program began and the two years after PET was fully implemented, we conducted a retrospective analysis of catheterization outcomes 60 days after heart patients received various treatments."

"This study involves one of the largest number of PET patients studied to date," Dr. Min added. "What we now know is that PET more successfully identifies patient who have high-grade coronary artery disease and may benefit from revascularization. Similarly, PET better identified patients who did not need an invasive procedure. This has broad implications as physicians consider what test best serves their individual patients and institutions consider the advantages and disadvantages of SPECT and PET as well as downstream resource utilization.'"

Credit: 
Intermountain Medical Center

Heart attack protocol can improve outcomes, reduce disparities between men and women

ORLANDO: Cleveland Clinic researchers have found that implementing a four-step protocol for the most severe type of heart attack not only improved outcomes and reduced mortality in both men and women, but eliminated or reduced the gender disparities in care and outcomes typically seen in this type of event.

The research was presented at the American College of Cardiology's 67th Annual Scientific Session and published in the Journal of the American College of Cardiology.

Cardiovascular disease is the leading cause of death in women, and STEMI (ST-elevation myocardial infarction), caused by an abrupt and prolonged blockage of the blood supply to the heart, impacts about one million women each year. Previous studies have shown that women with STEMI have worse clinical outcomes, including higher mortality and higher rates of in-hospital adverse events. Studies have shown women also typically have longer door-to-balloon times (the time from when they arrive at the hospital to when they receive a coronary intervention like angioplasty and stenting). They also receive lower rates of guideline-directed medical therapy: for example, they are less likely to be treated with aspirin within 24 hours. Previous studies have attributed the differences in care and outcomes in women with STEMI to their being older and higher-risk patients than men, suggesting that these disparities may be inevitable.

In this study, Cleveland Clinic researchers put in place a comprehensive four-step protocol for STEMI patients, designed to minimize variability in care. It included: (1) standardized emergency department (ED) cardiac catheterization lab activation criteria, (2) a STEMI Safe Handoff Checklist, (3) immediate transfer to an available catheterization lab, and (4) using the radial artery in the wrist as the first option for percutaneous (under the skin) coronary intervention, like stenting. This approach has been shown to have fewer bleeding complications and improved survival when compared to using the femoral artery.

The results of the study showed improvements in both genders after implementation of the protocol, and substantial reductions in care differences between men and women. Prior to the protocol, women had significantly higher 30-day mortality than men (10.7 percent vs. 4.6 percent). Providers were able to lower the overall mortality rates for both men and women, and the difference between the genders was no longer statistically significant (6.5 percent vs. 3.3 percent). In-hospital deaths of women with STEMI were reduced by about 50 percent.

In addition, there was no difference in rates of major adverse events such as in-hospital stroke, bleeding, vascular complications and transfusions after implementation. Prior to the protocol, door-to-balloon time for women averaged 20 minutes longer than for men, but afterwards the times were equal between men and women. The system also resulted in equal rates of guideline-directed medical therapy in women.

"It's long been known that the gender gap for these types of critical heart attacks is a real issue. However, there is very little data demonstrating successful strategies and no formal recommendation on how a system should be designed to provide the best possible care for women," said Umesh Khot, M.D., vice chairman of Cardiovascular Medicine at Cleveland Clinic and senior author of the study. "Our research shows that putting into place a system that minimizes care variability raises the level of care for everyone and could be the first step to resolving the long-standing gender disparities."

The prospective observational study looked at 1,272 consecutive STEMI patients at Cleveland Clinic, of which 868 were men and 404 were women. The four-step STEMI protocol was put into place July 15, 2014. Consecutive patients were studied from July 15, 2014, through December 31, 2016. Patients treated from January 1, 2011, through July 14, 2014, were studied as a control group. Patients were assessed for guideline-directed medical therapy prior to percutaneous coronary intervention, mean door-to-balloon time, in-hospital adverse events, and 30-day mortality.

Credit: 
Cleveland Clinic

3-D mapping babies' brains

image: A team of scientists at Washington University in St. Louis has developed a new way to track folding patterns in the brains of premature babies. It's hoped this new process could someday be used for diagnosing a host of diseases, including autism and schizophrenia.

Image: 
Washington University in St. Louis

During the third trimester, a baby's brain undergoes rapid development in utero. The cerebral cortex dramatically expands its surface area and begins to fold. Previous work suggests that this quick and very vital growth is an individualized process, with details varying infant to infant.

Research from a collaborative team at Washington University in St. Louis tested a new 3-D method that could lead to diagnostic tools that precisely measure the third-trimester growth and folding patterns of a baby's brain.

The findings, published online March 5 in PNAS, could help to sound an early alarm on developmental disorders in premature infants that could affect them later in life.

"One of the things that's really interesting about people's brains is that they are so different, yet so similar," said Philip Bayly, the Lilyan & E. Lisle Hughes Professor of Mechanical Engineering at the School of Engineering & Applied Science. "We all have the same components, but our brain folds are like fingerprints: Everyone has a different pattern. Understanding the mechanical process of folding -- when it occurs -- might be a way to detect problems for brain development down the road."

Working with collaborators at Washington University School of Medicine in St. Louis, engineering doctoral student Kara Garcia accessed magnetic resonance 3-D brain images from 30 pre-term infants scanned by Christopher Smyser, MD, associate professor of neurology, and his pediatric neuroimaging team. The babies were scanned two to four times each during the period of rapid brain expansion, which typically happens at 28 to 30 weeks.

Using a new computer algorithm, Bayly, Garcia and their colleagues -- including faculty at Imperial College and King's College in London -- obtained accurate point-to-point correspondence between younger and older cortical reconstructions of the same infant. From each pair of surfaces, the team calculated precise maps of cortical expansion. Then, using a minimum energy approach to compare brain surfaces at different times, researchers picked up on subtle differences in the babies' brain folding patterns.

"The minimum energy approach is the one that's most likely from a physical standpoint," Bayly said. "When we obtain surfaces from MR images, we don't know which points on the older surface correspond with which points on the younger surface. We reasoned, that since nature is efficient, the most likely correspondence is the one that produces the best match between surface landmarks, while at the same time, minimizing how much the brain would have to distort while it is growing.

"When you use this minimum energy approach, you get rid of a lot of noise in the analysis, and what emerged were these subtle patterns of growth that were previously hidden in the data. Not only do we have a better picture of these developmental processes in general, but doctors should hopefully be able to assess individual patients, take a look at their pattern of brain development, and figure out how it's tracking."

It's a measurement tool that could prove invaluable in places such as neonatal intensive care units, where preemies face a variety of challenges. Understanding an individual's precise pattern of brain development also could assist physicians trying to make a diagnosis later in a patient's life.

"You do also find folding abnormalities in populations that have cognitive issues later in life, including autism and schizophrenia," Bayly said. "It's possible, if medical researchers understand better the folding process and what goes on wrong or differently, then they can understand a little bit more about what causes these problems."

Credit: 
Washington University in St. Louis

Startup scales up CNT membranes to make carbon-zero fuels for less than fossil fuels

Mattershift, an NYC-based startup with alumni from MIT and Yale, has achieved a breakthrough in making carbon nanotube (CNT) membranes at large scale. The startup is developing the technology's ability to combine and separate individual molecules to make gasoline, diesel, and jet fuel from CO2 removed from the air.

Tests confirming that Mattershift's large-scale CNT membranes match the characteristics and performance of small prototype CNT membranes previously reported on in the scientific literature were published today in Science Advances. The paper was the result of a collaboration between Mattershift and researchers in the labs of Dr. Benny Freeman at The University of Texas at Austin and Dr. Jeffrey McCutcheon at the University of Connecticut.

For 20 years, researchers have shown that CNT membranes offer tremendous promise for a wide variety of uses including the low-cost production of ethanol fuel, precision drug delivery, low-energy desalination of seawater, purification of pharmaceutical compounds, and high-performance catalysis for the production of fuels. The difficulty and high cost of making CNT membranes has confined them to university laboratories and has been frequently cited as the limiting factor in their widespread use. Mattershift’s ability to mass-produce CNT membranes unleashes the potential of this technology.

“Achieving large scale production of carbon nanotube membranes is a breakthrough in the membrane field,” said Dr. Freeman, Professor of Chemical Engineering at UT Austin. “It’s a huge challenge to take novel materials like these and produce them at a commercial level, so we’re really excited to see what Mattershift has done here. There’s such a large, unexplored potential for carbon nanotubes in molecular separations, and this technology is just scratching the surface of what’s possible.”

The company has already booked its first sales and will ship products later this year for use in a seawater desalination process that uses the least amount of energy ever demonstrated at pilot scale.

"We're excited to work with Mattershift because its membranes are uniquely tailored to allow salts to pass through our system while retaining our draw solute," said John Webley, CEO of Trevi Systems in Petaluma, California. "We already demonstrated the world's lowest energy desal process in our pilot plant in the UAE last year, and Mattershift's membranes are going to allow us to push the energy consumption even lower."

Three significant advances made this breakthrough possible. First, the cost of carbon nanotubes has fallen 100-fold in the last 10 years, with a corresponding increase in their quality. Second, there is a growing understanding of how matter behaves in nano-confined environments like the interior of sub-nm CNTs, in which molecules move single file at high rates and act differently than they do in bulk fluids. And third, funding for tough tech startups has increased, enabling Mattershift to spend five years of intense R&D developing its technology.

"This technology gives us a level of control over the material world that we've never had before," said Mattershift Founder and CEO, Dr. Rob McGinnis. "We can choose which molecules can pass through our membranes and what happens to them when they do. For example, right now we're working to remove CO2 from the air and turn it into fuels. This has already been done using conventional technology, but it's been too expensive to be practical. Using our tech, I think we'll be able to produce carbon-zero gasoline, diesel, and jet fuels that are cheaper than fossil fuels."

Video Demo: Mattershift prototype testing for molecular extraction of fuel: vimeo.com/251516969

Using CNT membranes to produce fuels is just one example of a technology predicted by Nobel Prize-winning physicist Richard Feynman in the 1950s, known as Molecular Factories. Molecular Factories work by combining processes such as catalysis, separation, purification, and molecular-scale manipulation by nanoelectromechanical systems (NEMS) to make things from molecular building blocks. Each nanotube acts as a conveyor belt that performs functions on molecules as they pass through, single file, analogous to how factories function at the macro scale.

"It should be possible to combine different types of our CNT membranes in a machine that does what molecular factories have long been predicted to do: to make anything we need from basic molecular building blocks," said McGinnis. "I mean, we're talking about printing matter from the air. Imagine having one of these devices with you on Mars. You could print food, fuels, building materials, and medicines from the atmosphere and soil or recycled parts without having to transport them from Earth."

Credit: 
Mattershift

Unique diamond impurities indicate water deep in Earth's mantle

A UNLV scientist has discovered the first direct evidence that fluid water pockets may exist as far as 500 miles deep into the Earth's mantle.

Groundbreaking research by UNLV geoscientist Oliver Tschauner and colleagues found diamonds pushed up from the Earth's interior had traces of unique crystallized water called Ice-VII.

The study, "Ice-VII inclusions in Diamonds: Evidence for aqueous fluid in Earth's deep Mantle," was published Thursday in the journal Science.

In the jewelry business, diamonds with impurities hold less value. But for Tschauner and other scientists, those impurities, known as inclusions, have infinite value, as they may hold the key to understanding the inner workings of our planet.

For his study, Tschauner used diamonds found in China, the Republic of South Africa, and Botswana that surged up from inside Earth. "This shows that this is a global phenomenon," the professor said.

Scientists theorize the diamonds used in the study were born in the mantle under temperatures reaching more than 1,000 degrees Fahrenheit.

The mantle - which makes up more than 80 percent of the Earth's volume - is made of silicate minerals containing iron, aluminum, and calcium among others.

And now we can add water to the list.

The discovery of Ice-VII in the diamonds is the first known natural occurrence of the aqueous fluid from the deep mantle. Ice-VII had been found in prior lab testing of materials under intense pressure. Tschauner also found that, while confined within the hardened diamonds found at the surface of the planet, Ice-VII is solid; in the mantle, however, it is liquid.

"These discoveries are important in understanding that water-rich regions in the Earth's interior can play a role in the global water budget and the movement of heat-generating radioactive elements," Tschauner said.

This discovery can help scientists create new, more accurate models of what's going on inside the Earth, specifically how and where heat is generated under the Earth's crust.

In other words: "It's another piece of the puzzle in understanding how our planet works," Tschauner said.

Of course, as it often goes with discoveries, this one was found by accident, explained Tschauner.

"We were looking for carbon dioxide," he said. "We're still looking for it, actually,"

Credit: 
University of Nevada, Las Vegas

Survivors of childhood cancer are at greater risk of heart problems in adulthood

Survivors of childhood cancer are at increased risk of suffering prematurely from cardiovascular disease in adulthood, according to a study published today (Friday) in the European Heart Journal [1].

In the first study to investigate the long-term health of childhood cancer survivors by means of systematic and comprehensive clinical evaluation of their health in comparison to the general population, researchers in Germany found that as adults these people were at increased risk of having high blood pressure and dyslipidaemia (abnormal, usually high, levels of cholesterol and other fats in the blood). These conditions occurred six and eight years earlier respectively when compared with the general population.

In addition, childhood cancer survivors had a nearly two-fold increased risk of cardiovascular diseases such as congestive heart failure and venous thromboembolism. Cardiovascular disease was found in 4.5% of survivors and occurred in the majority before they reached the age of 40, nearly eight years earlier than in the general population.

Between October 2013 and February 2016, a total of 951 adult long-term survivors of childhood cancer, who were part of the "Cardiac and vascular late sequelae in long-term survivors of childhood cancer" (CVSS) study, underwent a clinical examination that included assessing factors that might put them at higher risk of cardiovascular disease, such as high blood pressure and dyslipidaemia. The researchers also checked their medical history, whether or not they smoked and whether there was any family history of cardiovascular disease. Their ages ranged from 23 to 48 at the time of this follow-up. The results were compared with over 15,000 people selected from the general population.

Professor Joerg Faber, head of the Department of Paediatric Haematology / Oncology / Haemostaseology at the University Medical Centre of the Johannes Gutenberg University Mainz, one of the three principal investigators, said: "Our results show that these survivors of childhood cancer have a substantially elevated burden of prematurely occurring traditional cardiovascular risk factors and cardiovascular diseases."

Professor Wild, who is head of the Department of Preventive Cardiology and Preventive Medicine at the University Medical Centre of the Johannes Gutenberg University Mainz and principal investigator of the German Centre for Cardiovascular Research (DZHK), added: "In particular, the premature onset of high blood pressure and blood lipid disorders may play an important role in the development of severe cardiovascular conditions, such as heart disease and stroke in the long term.

"We also found that a remarkable number attended their clinical examination for this study with previously unidentified cardiovascular risk factors and cardiovascular disease. For example, only 62 out of 269 were aware of having dyslipidaemia. Consequently, 207, approximately 80%, were only diagnosed at that point. This applies to high blood pressure in the arteries as well."

High blood pressure and dyslipidaemia were the most common cardiovascular risk factors identified in the childhood cancer survivors, 23% and 28% respectively, whereas diabetes was only found in two per cent. These conditions occurred earlier than in the general population; 17% and 25% had high blood pressure or dyslipidaemia respectively before the age of 30, and 39% and 38% by the age of 45.

At least one cardiovascular disease was identified in 4.5% of the survivors; venous thromboembolism was the most common (2%), followed by congestive heart failure (1.2%), stroke or peripheral artery disease (0.5% for each), atrial fibrillation (0.4%) and coronary heart disease (0.3%). These conditions occurred in 31 out of 43 people who were younger than 40.

The researchers say that these findings show that survivors of childhood cancer have a considerably greater risk of cardiovascular disease - a risk that continued to increase with age rather than levelling off - and this meant that, in the longer term, they may be more likely to die earlier than the general population.

However, it might be possible to prevent this, said Professor Wild. "Early systematic screening, particularly focusing on blood pressure and lipid measurements, might be suggested in all childhood cancer survivors irrespective of the type of cancer or treatment they had had. This might help to prevent long-term cardiovascular diseases by intervening early, for instance by modifying lifestyles and having treatment for high blood pressure."

Professor Faber added: "Usually survivors are followed for only five to ten years after completion of therapy, and this is focused on the risk of the cancer returning and the acute adverse effects of their treatment, rather than on other conditions. Current guidelines recommend cardiovascular assessments only for sub-groups known to be at risk, such as for patients who were treated with anthracycline therapy and / or radiation therapy. However, further investigations are needed to answer questions about the best follow-up care."

Treatments for childhood cancer include chemotherapy and radiotherapy, both of which can affect the heart, causing temporary or sometimes permanent damage to heart cells and blood vessels. The mechanisms for this are not all fully understood, particularly for long-term chronic effects, but certain genetic factors seem to increase the probability of adverse effects on the heart.

In an accompanying editorial [2], Dr John Groarke, a cardiovascular medicine specialist at Brigham and Women's Hospital Heart and Vascular Centre, Boston, USA, who was not involved in the research, writes: "The prospective and systematic phenotyping of survivors and comparison with control cohorts are welcome strengths of this study. Previous reports on the burden of CV disease in childhood cancer survivors have mostly relied on patient-reported outcomes and/or failed to consider relevant controls..."

He points out that treatments for childhood cancer have improved since the 1980s. "Refinements in radiation protocols and reductions in cumulative anthracycline exposure are succeeding in reducing late mortality among 5-year survivors of childhood cancer." He concludes: "There is a pressing need for research to inform practice guidelines for screening, prevention and management. There are now sufficient observational data that childhood cancer survivors represent a high-CV risk cohort that warrant more comprehensive care systems to improve CV outcomes. It is time to progress from risk observation to risk modification."

Credit: 
European Society of Cardiology

UCLA researchers develop a new class of two-dimensional materials

image: This is an artist's concept of two kinds of monolayer atomic crystal molecular superlattices. On the left, molybdenum disulfide with layers of ammonium molecules; on the right, black phosphorus with layers of ammonium molecules.

Image: 
UCLA Samueli Engineering

A research team led by UCLA scientists and engineers has developed a method to make new kinds of artificial "superlattices" -- materials composed of alternating layers of ultra-thin "two-dimensional" sheets, which are only one or a few atoms thick. Unlike current state-of-the-art superlattices, in which alternating layers have similar atomic structures and thus similar electronic properties, these alternating layers can have radically different structures, properties and functions, something not previously available.

For example, while one layer of this new kind of superlattice can allow a fast flow of electrons through it, the other type of layer can act as an insulator. This design confines the electronic and optical properties to single active layers, and prevents them from interfering with other insulating layers.

Such superlattices can form the basis for improved and new classes of electronic and optoelectronic devices. Applications include superfast and ultra-efficient semiconductors for transistors in computers and smart devices, and advanced LEDs and lasers.

Compared with the conventional layer-by-layer assembly or growth approach currently used to create 2D superlattices, the new UCLA-led process to manufacture superlattices from 2D materials is much faster and more efficient. Most importantly, the new method easily yields superlattices with tens, hundreds or even thousands of alternating layers, which is not yet possible with other approaches.

This new class of superlattices alternates 2D atomic crystal sheets that are interspaced with molecules of varying shapes and sizes. In effect, this molecular layer becomes the second "sheet" because it is held in place by "van der Waals" forces, weak electrostatic forces that keep otherwise neutral molecules "attached" to each other. These new superlattices are called "monolayer atomic crystal molecular superlattices."

The study, published in Nature, was led by Xiangfeng Duan, UCLA professor of chemistry and biochemistry, and Yu Huang, UCLA professor of materials science and engineering at the UCLA Samueli School of Engineering.

"Traditional semiconductor superlattices can usually only be made from materials with highly similar lattice symmetry, normally with rather similar electronic structures," Huang said. "For the first time, we have created stable superlattice structures with radically different layers, yet nearly perfect atomic-molecular arrangements within each layer. This new class of superlattice structures has tailorable electronic properties for potential technological applications and further scientific studies."

One current method to build a superlattice is to manually stack the ultrathin layers one on top of the other. But this is labor-intensive. In addition, since the flake-like sheets are fragile, it takes a long time to build because many sheets will break during the placement process. The other method is to grow one new layer on top of the other, using a process called "chemical vapor deposition." But since that means different conditions, such as heat, pressure or chemical environments, are needed to grow each layer, the process could result in altering or breaking the layer underneath. This method is also labor-intensive with low yield rates.

The new method to create monolayer atomic crystal molecular superlattices uses a process called "electrochemical intercalation," in which a negative voltage is applied to inject negatively charged electrons into the 2D material. These electrons attract positively charged ammonium molecules into the spaces between the atomic layers, where the molecules automatically assemble into new layers in the ordered crystal structure, creating a superlattice.

"Think of a two-dimensional material as a stack of playing cards," Duan said. "Then imagine that we can cause a large pile of nearby plastic beads to insert themselves, in perfect order, sandwiching between each card. That's the analogous idea, but with a crystal of 2D material and ammonium molecules."

The researchers first demonstrated the new technique using black phosphorus as a base 2D atomic crystal material. Using the negative voltage, positively charged ammonium ions were attracted into the base material and inserted themselves between the layered atomic phosphorus sheets.

Following that success, the team inserted different types of ammonium molecules with various sizes and symmetries into a series of 2D materials to create a broad class of superlattices. They found that they could tailor the structures of the resulting monolayer atomic crystal molecular superlattices, which had a diverse range of desirable electronic and optical properties.

"The resulting materials could be useful for making faster transistors that consume less power, or for creating efficient light-emitting devices," Duan said.

Credit: 
UCLA Samueli School of Engineering

Engineered cartilage template to heal broken bones

image: Cartilage template formation via an engineered extracellular matrix.

Image: 
Syam Nukavarapu/UConn Photo

A team of UConn Health researchers has designed a novel, hybrid hydrogel system to help address some of the challenges in repairing bone in the event of injury. The UConn Health team, led by associate professor of orthopedic surgery Syam Nukavarapu, described their findings in a recent issue of Journal of Biomedical Materials Research-Part B, where the work is featured on the journal cover.

There are over 200 bones in an adult human skeleton, ranging in size from a couple of millimeters in length to well over a foot. How these bones form and how they are repaired if injured varies, and has posed a challenge for many researchers in the field of regenerative medicine.

Two processes involved with human skeletal development help all the bones in our body form and grow. These processes are called intramembranous and endochondral ossification, IO and EO respectively. While they are both critical, IO is the process responsible for the formation of flat bones, and EO is the process that forms long bones like femurs and humeri.

For both processes, generic mesenchymal stem cells (MSCs) are needed to trigger the growth of new bone. Despite this similarity, IO is significantly easier to recreate in the lab since MSCs can directly differentiate, or become specialized, into bone-forming cells without taking any additional steps.

However, this relative simplicity comes with limitations. To circumvent the issues associated with IO, the UConn Health team set out to develop an engineered extracellular matrix that uses hydrogels to guide and support the formation of bone through EO.

"Thus far, very few studies have been focused on matrix designs for endochondral ossification to regenerate and repair long bone," says Nukavarapu, who holds joint appointments in the departments of biomedical engineering and materials science and engineering. "By developing a hybrid hydrogel combination, we were able to form an engineered extracellular matrix that could support cartilage-template formation."

Nukavarapu notes that vascularization is the key in segmental bone defect repair and regeneration. The main problem with IO-formed bone is caused by a lack of blood vessels, also called vascularization. This means that IO isn't capable of regenerating enough bone tissue to be applied to large bone defects that result from trauma or degenerative diseases like osteoporosis. Although many researchers have tried various strategies, successfully vascularizing bone regenerated with IO remains a significant challenge.

On the other hand, vascularization is a natural outcome of EO due to the development of a cartilage template, chondrocyte hypertrophy, and eventual bone tissue formation.

While IO's simplicity caused limitations, EO's benefits result in an intricate balancing act. EO requires precise spatial and temporal coordination of different elements, like cells, growth factors, and an extracellular matrix, or scaffold, onto which the MSCs attach, proliferate, and differentiate.

To achieve this delicate balance in the lab, Nukavarapu and his colleagues combined two materials known to encourage tissue regeneration - fibrin and hyaluronan - to create an effective extracellular matrix for long bone formation. Fibrin gel mimics human bone mesenchymal stem cells and facilitates their condensation, which is required for MSC differentiation into chondrogenic cells. Hyaluronan, a naturally occurring biopolymer, mimics the later stages of the process by which differentiated chondrogenic cells grow and proliferate, also known as hypertrophic-chondrogenic differentiation.

The researchers anticipate that cartilage templates with hypertrophic chondrocytes will release bone and vessel forming factors and will also initiate vascularized bone formation. Nukavarapu says that the "use of cartilage-template matrices would lead to the development of novel bone repair strategies that do not involve harmful growth factors."

While still in the early research phase, these developments hold promise for future innovations.

"Dr. Nukavarapu's work speaks not only to the preeminence of UConn's faculty, but also to the potential real-world applications of their research," says Radenka Maric, vice president for research at UConn and UConn Health. "UConn labs are buzzing with these types of innovations that contribute to scientific breakthroughs in healthcare, engineering, materials science, and many other fields."

The researchers next plan to integrate the hybrid extracellular matrix with a load-bearing scaffold to develop cartilage templates suitable for long-bone defect repair. According to Nukavarapu, the UConn research team is hopeful that this is the first step towards forming a hypertrophic cartilage template with all the right ingredients to initiate bone tissue formation, vascularization, remodeling, and ultimately the establishment of functional bone marrow to repair long bone defects through EO.

Credit: 
University of Connecticut

Living in a sunnier climate as a child and young adult may reduce risk of MS

MINNEAPOLIS - People who live in areas where they are exposed to more of the sun's rays, specifically UV-B rays, may be less likely to develop multiple sclerosis (MS) later in life, according to a study published in the March 7, 2018, online issue of Neurology®, the medical journal of the American Academy of Neurology. Exposure in childhood and young adulthood may also reduce risk.

While UV-B rays can cause sunburn and play a role in the development of skin cancer, they also help the body produce vitamin D. Lower levels of vitamin D have been linked to an increased risk of MS.

"While previous studies have shown that more sun exposure may contribute to a lower risk of MS, our study went further, looking at exposure over a person's life span," said study author Helen Tremlett, PhD, of the University of British Columbia in Vancouver, Canada. "We found that where a person lives and the ages at which they are exposed to the sun's UV-B rays may play important roles in reducing the risk of MS."

For the study, researchers identified participants from the larger Nurses' Health Study, including 151 women with MS and 235 women of similar age without MS. Nearly all of the women, 98 percent, were white and 94 percent said they had fair to medium skin. Participants lived across the United States in a variety of climates and locations. Of those with MS, the average age at onset was 40. All of the women had completed questionnaires about summer, winter and lifetime sun exposure.

Researchers divided the women into three groups representing low, moderate and high UV-B ray exposure based on where they lived, specifically looking at latitude, altitude and average cloud cover for each location. In addition, seasonal sun exposure was explored, with high sun exposure defined as more than 10 hours per week in summer and more than four hours per week in winter.

They found that women who lived in sunnier climates with the highest exposure to UV-B rays had a 45 percent reduced risk of developing MS across all pre-MS onset age groups when compared to those living in areas with the lowest UV-B ray exposure. When looking at specific age groups, those who lived in areas with the highest levels of UV-B rays between ages 5 to 15 had a 51 percent reduced risk of MS compared to the lowest group. A total of 33 of 147 people with MS, or 22 percent, had high exposure at ages 5 to 15, while 61 people, or 41 percent, had low exposure. In addition, those who spent more time outdoors in the summer at ages 5 to 15 in locations where exposure to UV-B rays was the highest had a 55 percent reduced risk of developing the disease compared to those in the lowest-exposure group.
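For readers curious how such risk-reduction figures arise from counts like these, here is a sketch of the underlying case-control arithmetic. The case counts (33 high-exposure, 61 low-exposure) are from the paragraph above; the control counts are hypothetical, since they are not reported here, and the published analysis also adjusted for covariates:

```python
# Case-control odds-ratio sketch; control counts are hypothetical.
def odds_ratio(cases_exp: int, cases_unexp: int,
               ctrls_exp: int, ctrls_unexp: int) -> float:
    """Odds of disease among the exposed relative to the unexposed."""
    return (cases_exp / cases_unexp) / (ctrls_exp / ctrls_unexp)

or_high_vs_low = odds_ratio(33, 61, ctrls_exp=87, ctrls_unexp=79)
print(f"OR ~ {or_high_vs_low:.2f} -> ~{1 - or_high_vs_low:.0%} lower odds")
```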

"Our findings suggest that a higher exposure to the sun's UV-B rays, higher summer outdoor exposure and lower risk of MS can occur not just in childhood, but into early adulthood as well," said Tremlett. "The methods we applied to measure sun exposure could also be used in future studies."

Tremlett continued, "In addition, our research showed that those who did develop MS also had reduced sun or outdoor exposure later in life, in both summer and winter, which may have health consequences."

A limitation of the study is that sun exposure was self-reported and memories of how much time was spent in the sun, particularly in youth, may differ from actual exposure time. However, the information related to UV-B exposure was captured using place of residence, which is less likely to be influenced by such factors. Another limitation was that almost all of the study participants were female and white, meaning the results may not apply to other populations.

Credit: 
American Academy of Neurology

Higher vitamin D levels may be linked to lower risk of cancer

High levels of vitamin D may be linked to a lower risk of developing cancer, including liver cancer, concludes a large study of Japanese adults published by The BMJ today.

The researchers say their findings support the theory that vitamin D might help protect against some cancers.

Vitamin D is made by the skin in response to sunlight. It helps to maintain calcium levels in the body to keep bones, teeth and muscles healthy. While the benefits of vitamin D for bone diseases are well known, there is growing evidence that vitamin D may benefit other chronic diseases, including some cancers.

But so far, most studies have been carried out in European or American populations, and evidence from Asian populations is limited.

As vitamin D concentrations and metabolism can vary by ethnicity, it is important to find out whether similar effects would be seen in non-Caucasian populations.

So an international research team, based in Japan, set out to assess whether vitamin D was associated with the risk of total and site specific cancer.

They analysed data from the Japan Public Health Center-based Prospective (JPHC) Study, involving 33,736 male and female participants aged between 40 and 69 years.

At the start of the study, participants provided detailed information on their medical history, diet and lifestyle, and blood samples were taken to measure vitamin D levels.

Vitamin D levels varied depending on the time of year the sample was taken, tending to be higher during the summer and autumn months than in the winter or spring.

After accounting for this seasonal variation, samples were split into four groups, ranging from the lowest to highest levels of vitamin D.
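The grouping step just described can be sketched as follows. The values here are randomly generated stand-ins, not JPHC measurements, and the seasonal offsets are illustrative guesses:

```python
import numpy as np

# Sketch: remove seasonal variation, then split into four exposure groups.
rng = np.random.default_rng(1)
n = 33736
season = rng.integers(0, 4, size=n)                 # 0=winter ... 3=autumn
offset = np.array([-6.0, -2.0, 4.0, 4.0])           # illustrative shifts (ng/mL)
vit_d = 22 + offset[season] + rng.normal(scale=7, size=n)  # stand-in data

adjusted = vit_d - offset[season]                   # season-adjusted levels
cuts = np.quantile(adjusted, [0.25, 0.5, 0.75])
group = np.searchsorted(cuts, adjusted)             # 0 = lowest ... 3 = highest
print(np.bincount(group))                           # four roughly equal groups
```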

Participants were then monitored for an average of 16 years, during which time 3,301 new cases of cancer were recorded.

After adjusting for several known cancer risk factors, such as age, weight (BMI), physical activity levels, smoking, alcohol intake and dietary factors, the researchers found that a higher level of vitamin D was associated with a lower (around 20%) relative risk of overall cancer in both men and women.

Higher vitamin D levels were also associated with a lower (30-50%) relative risk of liver cancer, and the association was more evident in men than in women.

No association was found for lung or prostate cancer, and the authors note that none of the cancers examined showed an increased risk associated with higher vitamin D levels.

Findings were largely unchanged after accounting for additional dietary factors and after further analyses to test the strength of the results.

The researchers point to some study limitations; for example, the numbers of organ-specific cancers were relatively small. And while they adjusted for several known risk factors, they cannot rule out the possibility that other unmeasured (confounding) factors may have influenced the results, making it difficult to draw firm conclusions about cause and effect.

Nevertheless, key strengths include the large sample size for overall cancer, a long follow-up period and the large number of blood samples analysed.

The authors say their findings support the theory that vitamin D may protect against the risk of cancer, but that there may be a ceiling effect, suggesting that there are no additional benefits beyond a certain level of vitamin D.

"Further studies are needed to clarify the optimal concentrations [of vitamin D] for cancer prevention." they conclude.

Credit: 
BMJ Group

Wildlife conservation in North America may not be science-based after all

image: A grey wolf.

Image: 
Kyle A. Artelle

A study led by recent SFU PhD alumnus Kyle Artelle has unveiled new findings that challenge the widespread assumption that wildlife management in North America is science-based. He conducted the study with SFU researchers John Reynolds and Jessica Walsh, as well as researchers from other institutions.

In the study, published in the AAAS open-access journal Science Advances, the researchers compiled and analyzed all of the publicly available documents describing 667 hunt management systems. These covered 27 species groups across 62 U.S. states and Canadian provinces. The researchers also identified four hallmarks that lend rigour to science-based management: clear objectives, use of evidence, transparency and external review.

After applying these hallmarks to the hunt management systems, they found that 60 per cent of them featured fewer than half of the indicator criteria. In addition, some of the most basic assumptions of scientific management were almost entirely absent.

For example, only nine per cent of management systems had an explanation for how quotas were set. Similarly, less than 10 per cent of management systems underwent any form of review, including internal reviews, with fewer than six per cent subjected to external review.

These and other findings in the study raised doubts for the researchers about whether North American wildlife management can accurately be described as science-based.

"The key to honest discussions about wildlife management and conservation is clarity about where the science begins and ends," says Artelle, who is now a biologist with Raincoast Conservation Foundation and a postdoctoral fellow at the University of Victoria.

"Our approach provides a straightforward litmus test for science-based claims."

These findings come at a time of heightened controversy in wildlife management, where contentious policy is often defended by agencies claiming adherence to science-based approaches.

"We are not saying that wildlife hunting decisions should be based only on science, as there can be important social and economic considerations," says SFU biological sciences professor John Reynolds. "But the extent to which these dimensions influence management decisions should be clearly articulated alongside claims of scientific rigour."

The researchers note that claims of science-based management would, however, be supported if management defined clear objectives, used evidence to inform decisions, was transparent with the public about all factors contributing to decisions, and subjected plans and approaches to external review.

Credit: 
Simon Fraser University

Experts issue recommendations to manage unwanted hair growth in women

Washington, DC--All women who have unwanted dark, coarse hair growing on the face, chest or back should undergo testing for polycystic ovary syndrome (PCOS) and other underlying health problems, Endocrine Society experts concluded in an updated Clinical Practice Guideline released today.

For the first time since 2008, the Society issued an update to its Clinical Practice Guideline on hirsutism--a condition where women experience unwanted hair growth in areas where men typically grow hair. The guideline, entitled "Evaluation and Treatment of Hirsutism in Premenopausal Women: An Endocrine Society Clinical Practice Guideline," was published online today and will appear in the April 2018 print issue of The Journal of Clinical Endocrinology & Metabolism (JCEM), a publication of the Endocrine Society.

"Excess facial or body hair is not only distressing to women, it is often a symptom of an underlying medical problem," said Kathryn A. Martin, M.D., of Massachusetts General Hospital in Boston, Mass., and chair of the task force that authored the guideline. "It is important to see your health care provider to find out what is causing the excess hair growth and treat it."

Hirsutism affects 5 percent to 10 percent of women. The excess hair growth can be caused by PCOS, a common condition that contributes to infertility and metabolic health problems.

Society experts now suggest all women with hirsutism undergo blood tests for testosterone and other male sex hormones called androgens. Women naturally have small amounts of these hormones, but the levels tend to be elevated in women with PCOS and other conditions that cause hirsutism. Experts previously called for testing for women with moderate to severe hirsutism, but the recommendation was broadened to improve diagnosis rates of PCOS and other underlying conditions.

Hirsutism can cause personal distress, anxiety and depression when it is not treated. The Society suggests treating mild cases with no sign of an underlying condition with medication or direct hair removal. For most women with hirsutism who are not trying to become pregnant, the authors suggest oral contraceptives as a first treatment. As long as a woman is not at risk of developing deep vein thrombosis or a pulmonary embolism, the type of oral contraceptive is not important, since they are all equally effective for treating hirsutism.

Although weight loss itself is not a recommended treatment for hirsutism, some studies have found it is associated with slight improvement in unwanted hair growth. As a result, the Society recommends women with both obesity and hirsutism consider making lifestyle changes to improve their overall health. A healthy diet and exercise also can be beneficial for women who have PCOS.

When women choose hair removal therapy to address hirsutism, the Society suggests photoepilation for women with unwanted auburn, brown or black hair and electrolysis for women with unwanted white or blonde hair. Women of color who choose photoepilation may need to use a long wavelength, long pulse-duration light source to avoid complications. Providers should warn women of Mediterranean and Middle Eastern descent about the increased risk of side effects such as skin pigment changes, blistering or, in rare cases, scarring.

Credit: 
The Endocrine Society

Older adult falls lead to substantial medical costs

In 2015, the estimated medical costs attributable to both fatal and nonfatal falls in older US adults were approximately $50 billion. The findings come from a recent analysis published in the Journal of the American Geriatrics Society.

For nonfatal falls in adults aged 65 and older, Medicare paid approximately $28.9 billion, Medicaid $8.7 billion and private and other payers $12.0 billion. Overall medical spending for fatal falls was estimated to be $754 million.
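A quick check that the payer breakdown sums to the headline figure (amounts in billions of dollars, taken from the paragraph above):

```python
# Sum the reported payer breakdown for 2015 (billions of dollars).
nonfatal = {"Medicare": 28.9, "Medicaid": 8.7, "private/other": 12.0}
fatal = 0.754

total = sum(nonfatal.values()) + fatal
print(f"Total: ${total:.1f} billion")  # ~$50.4 billion -- "approximately $50 billion"
```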

"Preventive strategies that reduce falls among older adults could lead to a substantial reduction in health care spending," wrote the authors.

Credit: 
Wiley