Tech

Bioinspired slick method improves water harvesting

video: This video shows the effectiveness of a slippery rough surface (left) in channeling and directing water droplets at a higher rate than the other two surfaces. Developed by researchers at the University of Texas at Dallas and Penn State, and inspired by surfaces found in nature, the material represents a new way of optimizing water harvesting applications. The research was published in the journal Science Advances.

Image: 
University of Texas at Dallas

By learning how water is collected by living organisms, including rice leaves and pitcher plants, scientists at The University of Texas at Dallas created and tested a combination of materials that can do the same thing, but faster.

The shells of certain desert-dwelling beetles can trap and direct water droplets, as can textures on rice leaves and pitcher plants. With that natural blueprint, researchers at UT Dallas, in collaboration with Penn State University, engineered a surface and infused it with a liquid lubricant that is hydrophilic -- it attracts water. They were able to "capture" water droplets from fog and air vapor, and rapidly direct them into reservoirs via lubricated microgrooves.

Their findings appear online March 30 in the journal Science Advances.

"Existing processes for creating fresh water, such as desalination, rely on the transition from vapor to water," said Dr. Simon Dai, assistant professor of mechanical engineering in the Erik Jonsson School of Engineering and Computer Science. "We wanted to create a surface that can both capture and direct water droplets efficiently."

If a surface traps a water droplet too tightly, it cannot move fast enough for capture at an efficient rate, Dai said. Think of the way water travels down a glass window. The droplet is attached firmly and takes a long time to make its way to the bottom of the glass. This time-consuming process means some water may be lost due to evaporation before it can ever be captured.

To design a suitable material, Dai and his collaborators had to meet several criteria, including creating a surface texture that retained the lubricant uniformly. Moreover, the lubricant had to wet the solid textures more readily than water did, and the lubricant and water could not mix.

The researchers' "hydrophilic directional slippery rough surfaces," or SRS, also combined the best water-harvesting design elements of several natural surfaces. Taking a cue from the desert beetle, they wanted a surface that could capture minuscule water droplets and merge them into large droplets. They paired that attribute with the slippery function of pitcher plants.

"We found through simulations that molecules that attract water -- or hydroxy functional groups -- are the most efficient in capturing water droplets. That's why we used hydrophilic lubricant," said Dr. Steven Nielsen, associate professor in the Department of Chemistry and Biochemistry at UT Dallas, and a study author.

"We infused our directional rough surfaces with hydrophilic liquid lubricant that is molecularly mobile, so it can help the collected water coalesce into larger drops," Dai said. "Furthermore, the material is scalable. Unlike the beetle-inspired surfaces where droplet creation only occurs in specific areas, we can create small or large surfaces, all with the ability to trap and move water quickly everywhere on the surface."

Another key feature of the SRS is that it can be optimized and adjusted for specific applications, which span a broad range of industries including air conditioning, power generation, desalination and water harvesting in arid regions.

"We are hopeful that these surfaces can be scaled up or down depending upon need," Dai said. "Next steps would include improving water harvesting capabilities at lower humidity ranges."

The researchers have filed a patent on the technology.

Credit: 
University of Texas at Dallas

Did highest known sea levels create the iconic shape of Mount Etna?

The iconic cone-like structure of Mount Etna could have been created after water levels in the Mediterranean Sea rose following an extended period of deglaciation, according to new research.

A study by Iain Stewart, Professor of Geoscience Communication at the University of Plymouth, explores changes in the volcano's structures which began around 130,000 years ago.

Scientists have previously said the switch from a fissure-type shield volcano to an inland cluster of nested stratovolcanoes was caused by a tectonically driven rearrangement of major border faults.

However, Professor Stewart, writing in Episodes, has suggested the change coincides closely with a period of particularly high sea levels that could have triggered the fundamental change in Mount Etna's magmatic behaviour.

He believes such a phenomenon could also explain changes at other volcanic sites across the world, including the similarly iconic Stromboli, just off the north coast of Sicily, and even the volcano on Montserrat in the Caribbean.

Professor Stewart, who fronted the BBC documentary Volcano Live in 2013, said: "Mount Etna is arguably one of the most iconic volcanoes on the planet, but 100,000 years ago there would have been no cone-like structure such as you see today. I had always been interested to know what prompted that to happen, but I believe the dates of sea levels rising -- and how they correspond to the volcano physically changing -- offer a potential explanation. The precise sensitivities of the plumbing beneath Etna have always been something of a mystery, but exploring how sea levels interact with its fault lines could shed new light on its creation and future."

Mount Etna's eruptive history began around 500,000 years ago with submarine volcanism. But this changed around 220,000 years ago into fissure-type activity, which built a north-south chain of eruptive centres along the present coastline.

This activity ultimately created a broad shield volcano immediately east of the present coastline, and it ceased around 130,000 years ago, just as the sea reached its highest levels following a period of deglaciation that had begun almost 12,000 years earlier.

However, Professor Stewart believes that over a few millennia those sea level rises could have caused the fault system beneath and around Mount Etna to completely change in behaviour, sealing up old lava flows and ultimately forcing them to emerge elsewhere on the island.

This ultimately created the iconic cone structure visible today, with Europe's most active volcano still erupting tens of thousands of years later.

This new research has been published days after another study showed that Etna is edging towards the Mediterranean at a rate of around 14mm per year.

Professor Stewart added: "The latest measurements of Etna's seaward slide give us a much better understanding of just how unstable Europe's biggest volcano is. But the big question remains: what is driving that instability? For me, the fact that Etna's dramatic switches in eruptive behaviour coincide with past abrupt changes in ocean levels implies that Etna's antics are at least in part orchestrated by fluctuating waters of the Mediterranean Sea."

Credit: 
University of Plymouth

Pitt physicians devise emergency and trauma care referral map for US

image: This map shows the Pittsburgh Atlas, an emergency and trauma care referral map that creates the framework to allow states and groups of counties to implement health care quality improvement programs. The map divides the country into 326 referral regions that respect state and county borders.

Image: 
David Wallace/University of Pittsburgh

PITTSBURGH, March 29, 2018 - In response to repeated calls for an integrated and coordinated emergency and trauma care system in the U.S., University of Pittsburgh School of Medicine scientists and UPMC physicians rose to the challenge and divided the nation into hundreds of referral regions that describe how patients access advanced care, in a way that respects geopolitical borders.

The Pittsburgh Atlas, published today in the Annals of Emergency Medicine, creates the framework that will allow states and groups of counties to implement quality improvement programs accountable to regional performance measures instituted by state and local governments.

"Recent proposed changes to health care could shift more responsibility to the state level with regard to who is insured or what services are offered," said lead author David J. Wallace, M.D., M.P.H., assistant professor in Pitt's Department of Critical Care Medicine. "A set of regions that maintain state lines is essential in that circumstance."

More than a decade ago, a National Academy of Medicine report endorsed coordinated, regional, accountable systems as an approach to improve health care for severe acute conditions requiring trauma and emergency services. In 2013, the National Quality Forum highlighted the importance of region-level performance measures that promote timely, high-quality care.

The Pittsburgh Atlas was partially inspired by the Dartmouth Atlas of Health Care, a set of geographic regions based on Medicare and Medicaid hospital discharge claims that was created more than 20 years ago, and is used for epidemiological studies that compare the cost, quality and consumption of health care in different parts of the country. However, the Dartmouth Atlas doesn't respect state or county boundaries - resulting in a set of regions that do not promote coordination or accountability.

Wallace and his team built the Pittsburgh Atlas by looking at nearly 731,000 Medicare patients who sought care for a heart attack, stroke or moderate to severe trauma in 2011. Referral regions were created by linking patient home counties to hospital counties, with multiple home counties allowed to join the same hospital county. For example, the region containing Pittsburgh (dubbed "57") consists of 22 counties stretching up the western side of Pennsylvania. It's one of 16 referral regions in the state.
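The county-grouping step described above can be sketched in a few lines of Python. This is an illustrative simplification, not the authors' actual method (they compared six different region-building approaches); the county names in the example are hypothetical inputs, and each home county is simply joined to the hospital county that received most of its patients.

```python
from collections import Counter, defaultdict

def build_referral_regions(visits):
    """Group counties into referral regions by dominant patient flow.

    `visits` is a list of (home_county, hospital_county) pairs, one per
    hospitalization. Each home county joins the hospital county that
    received most of its patients; several home counties can attach to
    the same hospital county, forming one region.
    """
    flows = defaultdict(Counter)   # home county -> hospital county tallies
    for home, hospital in visits:
        flows[home][hospital] += 1

    regions = defaultdict(set)     # hospital county -> member counties
    for home, tallies in flows.items():
        dominant = tallies.most_common(1)[0][0]
        regions[dominant].update({home, dominant})
    return dict(regions)
```

Under this sketch, several western-Pennsylvania counties whose patients mostly travel to Allegheny County hospitals would all fall into one Allegheny-anchored region, never crossing a state line.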

The research team determined six different ways to divide the U.S. into emergency care referral regions. After examining each result, they chose the one that keeps the vast majority of patients closest to home and named it the "Pittsburgh Atlas." It comprises 326 referral regions.

"We were surprised at how well our regions performed in terms of keeping patients close to home - they did as well as those in the Dartmouth Atlas," said Wallace, who also is a member of Pitt's Clinical Research Investigation and Systems Modeling of Acute Illness (CRISMA) Center. "We truly expected there would be a greater trade-off since the Pittsburgh Atlas faced the additional geopolitical boundary constraints."

Wallace acknowledged the Pittsburgh Atlas may not be representative for all patients. Although it was built using the largest collective source of hospitalizations for heart attack, stroke and major trauma in the U.S., it is primarily limited to patients 65 years old and older with data from 2011. Referral patterns may have changed with population changes and hospital openings and closings - meaning the Pittsburgh Atlas would need periodic updates.

Wallace and his team have made the data files for the Pittsburgh Atlas available so others can use it for projects that evaluate regional care delivery and seek to improve quality.

"This project gives researchers, policymakers, hospital systems and public health agencies a way to move beyond simply comparing apples to apples, and into thinking about orchards," Wallace said.

Credit: 
University of Pittsburgh Schools of the Health Sciences

Hockey victories may increase heart attack risk in Canadian men

Philadelphia, March 29, 2018 - The thrill of a hockey victory may put younger men at an increased risk for heart attack. A new study published in the Canadian Journal of Cardiology found an increase in hospital admissions for men under 55 presenting with symptoms of ST-elevation myocardial infarction (STEMI) or heart attack the day after a Montreal Canadiens win. There was little evidence within the general population of a relationship between watching hockey games and the incidence of STEMI.

Each year, cardiovascular disease claims the lives of an estimated 17.3 million people. The emotional and environmental stress of sporting events has been linked to acute cardiac problems; however, the relationship has yet to be well defined.

Because hockey is an integral part of Canadian life, researchers wanted to examine whether or not there is a link between watching hockey and heart attacks. They analyzed hospitalization data for patients at the Montreal Heart Institute for STEMI. The results of their analysis showed an association between Montreal Canadiens victories and increased STEMI risk in men, but not women. The greatest incidence was seen in men under 55 years of age, and the highest admission rates occurred after home victories, with a 40 percent increase in this age group the day after a home win.
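The reported 40 percent figure is a simple relative increase in admission rates. As a minimal sketch, with made-up daily admission rates (the article does not give the underlying counts):

```python
def percent_increase(rate_after_win, baseline_rate):
    """Relative increase (in percent) of one admission rate over another."""
    return 100 * (rate_after_win - baseline_rate) / baseline_rate

# Hypothetical daily STEMI admission rates for men under 55:
# 0.7 admissions/day after a home win vs. a 0.5/day baseline
increase = percent_increase(0.7, 0.5)   # ~40 percent
```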

"Our study is the first to evaluate the association between local hockey games and admission rates for acute STEMI. Since the inauguration of the NHL in 1917, the Montreal Canadiens remains the team with the most Stanley Cup wins and is known for its extremely loyal and enthusiastic fan base. This historical role of the city of Montreal might explain in part the association between higher admission rates for STEMI," explained lead investigator Hung Q. Ly, MD, SM, interventional cardiologist at the Montreal Heart Institute, Montreal, Canada.

In the study, women were less likely to suffer a STEMI after a hockey game than men, despite the fact that prior research has shown women are more susceptible to mental stress-induced myocardial ischemia. "Previous studies have suggested that unhealthy behavioral changes including increased alcohol consumption, heavy and fatty meals, smoking, drug use, or sleep deprivation may have additive effects on the link between sporting events and increased cardiovascular risk in spectators," noted Dr. Ly. "Notably, among all demographic groups in our study, the highest proportion of obesity, dyslipidemia, and smoking was found in young males, pointing towards an increased risk behavior and unhealthy lifestyle in this subgroup."

Another interesting finding is that winning games produced more heart attacks than losses. "Indeed, strong emotional response to events has been reported to increase the risk of cardiovascular events. In our study, the fact that game outcomes are likely unknown to the spectator until the end implies that emotional triggers at the end and/or after the match might impose a greater risk for vulnerable populations," observed the team of investigators. "This hypothesis is further supported by the notion that significant increases in STEMI hospital admissions occurred one day after a game in our study, while no difference in admission rates was observed on match days."

While hockey and other sports will continue to be a source of fun and excitement for people in Canada and around the world, it is important to consider how these events can influence spectator health. The emotional responses and associated physiological changes combined with a high-risk cardiovascular profile might contribute to the higher risk in this study population. According to the researchers, preventive measures targeting behavioral and lifestyle changes could positively impact this risk.

Credit: 
Elsevier

Researchers develop model to show how bacteria grow in plumbing systems

image: Illinois civil and environmental engineering professor Wen-Tso Liu leads a team of researchers who are studying how microbial communities assemble within indoor plumbing systems.

Image: 
L. Brian Stauffer

CHAMPAIGN, Ill. -- Bacteria in tap water can multiply when a faucet isn't used for a few days, such as when a house is vacant over a week's vacation, a new study from University of Illinois engineers found. The study suggests a new method to show how microbial communities, including those responsible for illnesses like Legionnaires' disease, may assemble inside the plumbing systems of homes and public buildings.

The findings are published in Nature's ISME Journal: Multidisciplinary Journal of Microbial Ecology.

Fresh tap water is teeming with harmless microbial life, and water that sits for a few days inside pipes can contain millions of bacteria. Although incidents of waterborne infections resulting from indoor plumbing are rare, the new model may help public health authorities assess drinking-water quality.

"Previous studies have relied on reproducing the conditions of a stagnant plumbing system within a lab setting," said co-author and civil and environmental engineering professor Wen-Tso Liu . "We were able to collect samples in a real-life situation."

It is critical to pinpoint where in the plumbing network water samples have come from in order to determine the source of microbes. Since it is impossible to sample water directly from plumbing without ripping up pipes and knocking down walls, the researchers came up with another way to determine sample locations.

The team collected tap water samples from three closely monitored U. of I. dormitory buildings while closed during a school break. Taking steps to prevent outside contamination from plumbing fixtures or sampling equipment, they sampled from sink taps before building closure; while the water was fresh from the city supply; and again after the water sat in contact with the interior plumbing for a week.

"We performed a variety of analyses, including tests to determine the concentration of bacteria present in the before- and after-building-closure samples," Liu said.

The lab results indicated the post-stagnation samples closest to the taps contained the highest concentrations of bacteria. The team also found that bacteria concentrations decreased significantly as the distance between the tap and pipe location increased. None of the samples in the study contained microbial species or cell concentrations that present a public health risk.

"Our results suggest that the increase in bacteria in the post-stagnation samples is a result of something occurring in the interior plumbing, not the outside city source, and in pipe segments closest to the taps," Liu said.

Bacteria that live in tap water exist in two communities - those that float freely in the water and those that live in the films that line the sides of pipes, called biofilms. Biofilms are much like the films that we see growing on the glass in fish tanks, Liu said. The team believes that the bacteria they see in the post-stagnation samples came from interactions between the water and biofilms that exist inside the pipes closest to the taps.

The researchers determined the city water biofilm composition by sampling the interior parts of water meters that are routinely collected during the water utility's replacement program. Liu worked with the municipal water company to collect almost four years' worth of discarded water meters, giving the team a large set of city biofilm data.

By combining the before- and after-stagnation data, the city biofilm "control" data and information from building blueprints, the team developed a model to test water quality inside almost any building.

"We only need two samples - one before stagnation and one after - and we can determine how extensive the microbe growth is inside in-premise pipes, and we can now do so without destroying property," Liu said.

The study also found that bacterial concentrations are highest in the first 100 milliliters of tap flow. Liu recommends that people run taps for a few moments before using the water after being away from home for a few days, and discussed the advice with U. of I. Facilities and Services and others at a campus workshop in October 2017.

"It is contrary to what we have learned about conserving water, but I like to think of it as just another basic hygiene step," Liu said. "We have made a habit out of washing our hands; I think we can make a habit out of running the tap for few moments before use as well."

Although the microbial communities in this study did not present a health risk, the method could be used to detect cases that do, the researchers said.

"Communities have been and will continue to invest in green infrastructure that stresses water conservation," Liu said. "If interior plumbing were to become contaminated with harmful bacteria, that could lead to unforeseen public health problems when buildings are left vacant for more than a few days."

The desire to reuse and recycle water is unlikely to go away anytime soon, Liu said. "How are we going to deal with the problem when combined with water-conservation practices? If we want to head toward green practices, our engineers, public health organizations, scientists and municipal water suppliers will need to work cooperatively."

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

Just one high-fat meal sets the perfect stage for heart disease

image: (from left) Dr. Julia E. Brittain, Dr. Neal L. Weintraub, Tyler W. Benson, Dr. Ryan A. Harris and Dr. Ha Won Kim photo taken in the Medical College of Georgia's Vascular Biology Center Lab.

Image: 
Phil Jones, Senior Photographer, Augusta University

AUGUSTA, Ga. (March 29, 2018) - A single high-fat milkshake, with a fat and calorie content similar to some enticing restaurant fare, can quickly transform our healthy red blood cells into small, spiky cells that wreak havoc inside our blood vessels and help set the perfect stage for cardiovascular disease, scientists report.

Just four hours after consuming a milkshake made with whole milk, heavy whipping cream and ice cream, healthy young men also had blood vessels less able to relax and an immune response similar to one provoked by an infection, the team of Medical College of Georgia scientists report in the journal Laboratory Investigation.

While the dramatic, unhealthy shift was likely temporary in these healthy individuals, the scientists say there is a definite cumulative toll from this type of eating, and that their study could help explain isolated reports of death and/or heart attack right after eating a super-high fat meal.

"We see this hopefully as a public service to get people to think twice about eating this way," says Dr. Neal L. Weintraub, cardiologist, Georgia Research Alliance Herbert S. Kupperman Eminent Scholar in Cardiovascular Medicine and associate director of MCG's Vascular Biology Center.

"The take-home message is that your body can usually handle this if you don't do it again at the next meal and the next and the next," says Dr. Julia E. Brittain, vascular biologist at the MCG Vascular Biology Center and a corresponding author of the study.

As a practicing cardiologist, Weintraub, also a corresponding author, has patients with cardiovascular disease who continue to eat a high-fat diet and he definitely asks them to think twice: "Is this food worth your life?"

While none of the scientists recommend going overboard on calories and sugar either, the healthy males in the study who instead consumed a meal with the same number of calories but no fat - three big bowls of sugar-coated flakes with no-fat milk - did not experience the same harmful changes to their blood, red blood cells and blood vessels.

"You are looking at what one, high-fat meal does to blood-vessel health," says Dr. Ryan A. Harris, clinical exercise and vascular physiologist at MCG's Georgia Prevention Institute and study co-author.

Their study in 10 young men was the first to look specifically at red blood cells, the most abundant cell in our blood. Red cells are best known for carrying oxygen and are incredibly flexible so they flow through blood vessels essentially unnoticed, Brittain says. But with a single high-fat meal, they essentially grow spikes and spew poison.

"They changed size, they changed shape, they got smaller," Harris says of the rapid changes to the form and function of red blood cells.

In both the cells and blood, there was evidence of myeloperoxidase, or MPO, an enzyme expressed by a type of white blood cell which, at high levels in the blood, has been linked to stiff blood vessels, oxidative stress and heart attack in humans.

MPO is associated with impaired ability of blood vessels to dilate, even oxidation of HDL cholesterol, which converts this usually cardioprotective cholesterol into a contributor to cardiovascular disease. When taken up by a diseased artery, it can even help destabilize plaque buildup, which can result in a stroke or heart attack.

"Myeloperoxidase levels in the blood are directly implicated in heart attack," Weintraub notes. "This is a really powerful finding."

When they used flow cytometry to examine the red blood cells, they found an increase in reactive oxygen species, a natural byproduct of oxygen use that is destructive at high levels. One effect of their elevated level is permanently changing the function of proteins, including the one that helps red blood cells maintain their normal negative charge.

MPO also impacts the cytoskeleton, the physical infrastructure of the usually plump red cells, so they can't function and flex as well, says Tyler W. Benson, a doctoral student in The Graduate School at Augusta University and the paper's first author.

"Again, your red blood cells are normally nice and smooth and beautiful and the cells, after consumption of a high-fat meal, get these spikes on them," says Brittain. Much like huge ice chunks do to a river, these physical changes affect how blood flows, she says.

Bad changes occur quickly in these cells, which are "exquisitely sensitive" to their environment, Brittain says.

There were changes in white blood cells, called monocytes, which got fat themselves trying to take up the excessive fat. The team's earlier studies have shown these so-called foamy monocytes promote inflammation and show up in atherosclerotic plaque. Monocytes more typically patrol the circulation looking for old and/or diseased red blood cells that need elimination.

The fluid portion of the blood, called the plasma, also looked different. When they spin and separate different components of the blood to get to the red blood cells, they typically get a clear yellowish plasma on top, Benson says. But after a single, high-fat load, the fluid portion of the blood was already thick, off-color and filled with lipids.

Their blood also contained the expected high fat and cholesterol levels.

At least in mice studies and in some of Brittain's other human studies, the unhealthy changes also resolve quickly, at about eight hours, unless the high-fat feasts continue. The investigators note they only tested their participants after four hours, which is about how long it takes food to digest.

Studies to measure longer-term impact on humans would be problematic primarily because you would not want to subject healthy young individuals to the risk, Weintraub notes.

However, the MCG team also has shown that mice continuously fed a high-fat diet experience permanent changes to their red blood cells and blood similar to those experienced transiently by the young men. Changes include triggering a significant immune response that can contribute to vascular disease.

More studies are needed to see if changes in the red blood cell shape impact vascular health, the scientists write. But they conjecture that the remodeled red blood cells themselves could be targeted for elimination by monocytes. In mice chronically fed a high-fat diet, they have seen red blood cells actively consumed by macrophages, immune cells that eat cellular debris, and resulting inflammation.

Weintraub says primary prevention is the most prudent course for a healthy cardiovascular system, including eating healthily, exercising regularly and keeping tabs on vitals like cholesterol and blood pressure levels. Even patients with a high genetic risk of cardiovascular disease can dramatically reduce that risk with these positive changes, he says.

Harris' research team has done studies that indicate a single aerobic exercise session by young healthy individuals like these can counteract the unhealthy slump at four hours and the related reduction in the blood vessels' ability to dilate.

Participants in the new study included 10 physically active men with a good medical history, taking no prescription medicines and with good cholesterol and lipid levels.

The investigators did two thorough assessments of cardiovascular disease risk at least seven days apart. Participants were told to avoid caffeine and strenuous physical activity for 24 hours before each test and vitamin supplements for 72 hours. Like going to the doctor for bloodwork, they also were asked to fast overnight.

Half the men got the milkshakes containing about 80 grams of fat and 1,000 calories. The cereal meal also contained about 1,000 calories but very little fat. Meals were individually tweaked to ensure everyone got the same amount of fat relative to their body weight, Harris says.
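Scaling the fat dose to body weight is straightforward arithmetic. The article does not state the grams-per-kilogram dose actually used, so the 1.0 g/kg default below is purely illustrative; it happens to yield the quoted ~80 grams of fat for an 80-kg participant.

```python
def shake_fat_grams(body_weight_kg, fat_g_per_kg=1.0):
    """Fat content (grams) of an individualized shake.

    fat_g_per_kg is an assumed dose, not the study's actual figure:
    1.0 g/kg gives ~80 g of fat for an 80-kg participant, matching
    the approximate shake described in the article.
    """
    return body_weight_kg * fat_g_per_kg
```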

Since estrogen is considered cardioprotective in non-obese premenopausal females, investigators opted to limit the study to males.

Red blood cells, probably best known for carrying oxygen, are the most abundant cell type circulating in our blood. "You have 25 trillion red blood cells and they affect every other cell in your body," says Brittain. They also carry and release the energy molecule ATP and nitric oxide, which helps blood vessels relax, as well as cholesterol.

Healthy red blood cells have a negative charge that keeps them away from other cells and traveling more toward the outer edge of blood vessels. In the arterial system, they travel fast, Brittain says.

The cells last about 120 days, but like many of us, they become less efficient with age as they use up their energy, or ATP stores, says Benson.

The American Heart Association recommends that healthy adults limit fat intake to 20-35 percent of their daily calories. The research was funded by the National Institutes of Health.

Credit: 
Medical College of Georgia at Augusta University

Global cancer trial sets new standard for post-surgery chemotherapy

Some stage III colon cancer patients can cut in half the number of chemotherapy treatments they receive after surgery, significantly reducing the costs, treatment time, and long-term toxic effects of chemotherapy, according to results of a unique global clinical trial collaboration published for the first time in the New England Journal of Medicine.

Results show that for some colon cancer patients with stage III disease who undergo surgical removal of their tumor and lymph nodes, the standard six-month post-surgery course of chemotherapy may not be needed. Instead, for many low-risk patients, a three-month course will not significantly affect the rate at which their cancer returns and will prevent harmful side effects, including nerve injury from the chemotherapy drug oxaliplatin that can sometimes cause permanent pain, numbness, and tingling.

The clinical trials that produced the results are known collectively as the International Duration Evaluation in Adjuvant Chemotherapy (IDEA) collaboration. The IDEA collaboration launched in 2007 and remains unique in its design and global scope, enrolling 12,834 eligible patients in six phase III trials run in parallel in 12 countries in North America, Europe, and Asia. In North America, the trial is managed by the Alliance for Clinical Trials in Oncology and SWOG, two members of the National Cancer Institute's (NCI) National Clinical Trials Network.

Axel Grothey, MD, an Alliance investigator and oncologist at the Mayo Clinic Cancer Center, oversees the IDEA collaboration and is lead author of the NEJM article. The group's initial findings were first presented at the 53rd Annual Meeting of the American Society of Clinical Oncology (ASCO) in June 2017, followed by the European Society for Medical Oncology 2017 Congress in September 2017. The results gained international attention, and have already changed medical practice. In January 2018, the National Comprehensive Cancer Network released its clinical practice guidelines for colon cancer, which incorporated the new chemotherapy standard based on the IDEA findings.

IDEA investigators wanted to answer a question: What is the optimal duration of adjuvant therapy in colon cancer? It's an important one. Colon cancer is the third most commonly diagnosed cancer in the world, and its burden is expected to increase by 60 percent to more than 2.2 million new cases and 1.1 million cancer deaths by 2030, according to data published in 2017 in the journal Gut.

IDEA trials focused on patients with stage III colon cancer, whose cancer has spread to the lymph nodes and who, to reduce the chances of its return, usually receive standard post-surgery treatment with either the FOLFOX or CAPOX chemotherapy regimen. A side effect of oxaliplatin, present in both regimens, is the risk of often severe, persistent nerve damage. To see whether the incidence of these side effects could be reduced without hastening the cancer's return, investigators randomly assigned patients to three- or six-month treatment groups.

The results: Three months of chemotherapy was not, in all circumstances, a better course of treatment. Rather, the data showed that the duration of chemotherapy should be a decision based on the drug combination used and the individual characteristics of a patient's cancer -- how deep the tumor was lodged in the colon wall and how many lymph nodes the cancer had spread to. For low-risk patients -- those with shallow tumors and fewer than four lymph nodes affected -- three months of treatment with CAPOX was shown to be safe and effective, resulting in fewer side effects and the same disease-free survival rates as six months of treatment. High-risk patients, however, fared better with a six-month course in some situations.

"What gratifies me about this work is its impact on patients," said Anthony Shields, MD, PhD, a SWOG investigator and associate center director of the Barbara Ann Karmanos Cancer Institute and a professor at Wayne State University School of Medicine. "I'm already using the new standards in my practice for lower-risk colon cancer patients, which saves them a lot of time in treatment, and also prevents the toxic effects of six months of chemotherapy. There are about 400,000 patients worldwide each year who can be considered for post-surgical treatment with oxaliplatin-based therapy, so the findings will have a large impact."

Shields is the co-chair of the North American study with Jeffrey Meyerhardt, MD, MPH, of Dana-Farber Cancer Institute and Harvard Medical School. Meyerhardt is an investigator with the Alliance. Running six randomized clinical trials at the same time across the globe was the brainchild of former Alliance group statistician Daniel Sargent, PhD, of Mayo Clinic, who died in 2016 after a sudden, unexpected illness.

According to Dr. Meyerhardt, "IDEA showed the power of international cooperation in studying clinically relevant and impactful questions for cancer patients." He noted that the results "demonstrate that it is not a one size fits all answer for adjuvant therapy for colon cancer and personalization based on the cancer characterization and treatment choice is important."

While the NEJM publication is the first for IDEA, it won't be the last. Investigators are mining the data to answer other key questions about treatment for stage III colon cancer patients, and following patients to calculate overall survival rates, with those results likely published in 2019.

"The IDEA results provide a framework for discussions between patients and providers on the trade-off between side effects and the efficacy of adjuvant therapy," said lead author Dr. Grothey. "For most patients with stage III disease, three months of adjuvant therapy will be considered sufficient, which will lead to decreased long-term toxicity, higher quality of life, and savings in health care expenditures."

Credit: 
SWOG

Latest nanowire experiment boosts confidence in Majorana sighting

image: This is a device that physicists used to spot the clearest signal yet of Majorana particles. The gray wire in the middle is the nanowire, and the green area is a strip of superconducting aluminum.

Image: 
Hao Zhang/QuTech

In the latest experiment of its kind, researchers have captured the most compelling evidence to date that unusual particles lurk inside a special kind of superconductor. The result, which confirms theoretical predictions first made nearly a decade ago at the Joint Quantum Institute (JQI) and the University of Maryland (UMD), will be published in the April 5 issue of Nature.

The stowaways, dubbed Majorana quasiparticles, are different from ordinary matter like electrons or quarks--the stuff that makes up the elements of the periodic table. Unlike those particles, which as far as physicists know can't be broken down into more basic pieces, Majorana quasiparticles arise from coordinated patterns of many atoms and electrons and only appear under special conditions. They are endowed with unique features that may allow them to form the backbone of one type of quantum computer, and researchers have been chasing after them for years.

The latest result is the most tantalizing yet for Majorana hunters, confirming many theoretical predictions and laying the groundwork for more refined experiments in the future. In the new work, researchers measured the electrical current passing through an ultra-thin semiconductor connected to a strip of superconducting aluminum--a recipe that transforms the whole combination into a special kind of superconductor.

Experiments of this type expose the nanowire to a strong magnet, which unlocks an extra way for electrons in the wire to organize themselves at low temperatures. With this additional arrangement the wire is predicted to host a Majorana quasiparticle, and experimenters can look for its presence by carefully measuring the wire's electrical response.

The new experiment was conducted by researchers from QuTech at the Technical University of Delft in the Netherlands and Microsoft Research, with samples of the hybrid material prepared at the University of California, Santa Barbara and Eindhoven University of Technology in the Netherlands. Experimenters compared their results to theoretical calculations by JQI Fellow Sankar Das Sarma and JQI graduate student Chun-Xiao Liu.

The same group at Delft saw hints of a Majorana in 2012, but the measured electrical effect wasn't as big as theory had predicted. Now the full effect has been observed, and it persists even when experimenters jiggle the strength of magnetic or electric fields--a robustness that provides even stronger evidence that the experiment has captured a Majorana, as predicted in careful theoretical simulations by Liu.

"We have come a long way from the theoretical recipe in 2010 for how to create Majorana particles in semiconductor-superconductor hybrid systems," says Das Sarma, a coauthor of the paper who is also the director of the Condensed Matter Theory Center at UMD. "But there is still some way to go before we can declare total victory in our search for these strange particles."

The success comes after years of refinements in the way that researchers assemble the nanowires, leading to cleaner contact between the semiconductor wire and the aluminum strip. During the same time, theorists have gained insight into the possible experimental signatures of Majoranas--work that was pioneered by Das Sarma and several collaborators at UMD.

Theory meets experiment

The quest to find Majorana quasiparticles in thin quantum wires began in 2001, spurred by Alexei Kitaev, a physicist then at Microsoft Research. Kitaev, who is now at the California Institute of Technology in Pasadena, concocted a relatively simple but unrealistic system that could theoretically harbor a Majorana. But this imaginary wire required a specific kind of superconductivity not available off the shelf from nature, and others soon began looking for ways to imitate Kitaev's contraption by mixing and matching available materials.

One challenge was figuring out how to get superconductors, which usually go about their business with an even number of electrons--two, four, six, etc.--to also allow an odd number of electrons, a situation that is normally unstable and requires extra energy to maintain. The odd number is necessary because Majorana quasiparticles are unabashed oddballs: They only show up in the coordinated behavior of an odd number of electrons.

In 2010, almost a decade after Kitaev's original paper, Das Sarma, JQI Fellow Jay Deep Sau and JQI postdoctoral researcher Roman Lutchyn, along with a second group of researchers, struck upon a method to create these special superconductors, and it has driven the experimental search ever since. They suggested combining a certain kind of semiconductor with an ordinary superconductor and measuring the current through the whole thing. They predicted that the combination of the two materials, along with a strong magnetic field, would unlock the Majorana arrangement and yield Kitaev's special material.

They also predicted that a Majorana could reveal itself in the way current flows through such a nanowire. If you connect an ordinary semiconductor to a metal wire and a battery, electrons usually have some chance of hopping off the wire onto the semiconductor and some chance of being rebuffed--the details depend on the electrons and the makeup of the material. But if you instead use one of Kitaev's nanowires, something completely different happens. The electron always gets perfectly reflected back into the wire, but it's no longer an electron. It becomes what scientists call a hole--basically a spot in the metal that's missing an electron--and it carries a positive charge back in the opposite direction.

Physics demands that the current across the interface be conserved, which means that two electrons must end up in the superconductor to balance out the positive charge heading in the other direction. The strange thing is that this process, which physicists call perfect Andreev reflection, happens even when electrons in the metal receive no push toward the boundary--that is, even when they aren't hooked up to a battery of sorts. This is related to the fact that a Majorana is its own antiparticle, meaning that it doesn't cost any energy to create a pair of Majoranas in the nanowire. The Majorana arrangement gives the two electrons some extra room to maneuver and allows them to traverse the nanowire as a quantized pair--that is, exactly two at a time.

"It is the existence of Majoranas that gives rise to this quantized differential conductance," says Liu, who ran numerical simulations to predict the results of the experiments on UMD's Deepthought2 supercomputer cluster. "And such a quantization should even be robust to small changes in experimental parameters, as the real experiment shows."

Scientists refer to this style of experiment as tunneling spectroscopy because electrons are taking a quantum route through the nanowire to the other side. It has been the focus of recent efforts to capture Majoranas, but there are other tests that could more directly reveal the exotic properties of the particles--tests that would fully confirm that the Majoranas are really there.

"This experiment is a big step forward in our search for these exotic and elusive Majorana particles, showing the great advance made in the materials improvement over the last five years," Das Sarma says. "I am convinced that these strange particles exist in these nanowires, but only a non-local measurement establishing the underlying physics can make the evidence definitive."

Credit: 
University of Maryland

Scientists find link between congenital cardiac malformation and adult adrenal cancer

SAN ANTONIO, Texas, U.S.A.--An international team led by Patricia Dahia, M.D., Ph.D., of UT Health San Antonio, discovered a genetic mutation that explains why adults with severe congenital heart defects--who live with low oxygen in their blood--are at dramatically elevated risk for adrenal gland cancer.

The finding is being made public March 29 in the New England Journal of Medicine.

The study focused on patients who were born with cyanotic congenital heart disease and went on to develop adrenal gland or related tumors called pheochromocytomas or paragangliomas. Cyanotic refers to the bluish or purplish discoloration that occurs when blood levels of oxygen are low. Detailed genetic analysis of these cases revealed mutations in EPAS1, also known as HIF2A, a gene that regulates a hypoxia (low oxygen)-related pathway. Patients with cyanotic heart disease have a sixfold higher risk of developing these adrenal gland tumors than patients without this severe type of heart disease, but the genetic basis for the heightened incidence was unknown.

An amplified response

"It was suspected that in patients with cyanotic heart disease, the low oxygen levels might lead directly to the growth of pheochromocytomas," said Dr. Dahia, professor of medicine in the Joe R. & Teresa Lozano Long School of Medicine at UT Health San Antonio. "We found instead that a genetic mutation is the main reason why the tumor can appear in these patients. Most remarkably, the mutation turns on the main gene that causes the body to respond to low oxygen, further amplifying this response."

"This finding provides important insights into our understanding of how the body adapts to conditions of low oxygen and how this can lead to tumors," said Dr. Dahia, who also is a member of the Mays Cancer Center, the newly named center home to UT Health San Antonio MD Anderson Cancer Center.

A perfect storm

"We found that this mutation is not inherited but is acquired later," Dr. Dahia said. "The patient's heart disease may create conditions that make it more likely for the mutation to appear. Understanding this mechanism requires further studies."

Importantly, clinical-grade inhibitors of HIF2A exist and are in early clinical trials for a variety of conditions, including pheochromocytomas. "Thus, this discovery can potentially have an impact on patients' lives," she said.

Credit: 
University of Texas Health Science Center at San Antonio

Mandatory nutrition policies may impact sugar consumption

Mandatory nutrition policies could be a valuable tool in helping high school students to lower their sugar intake, a University of Waterloo study has found.

The study compared sugar-sweetened drink consumption among 41,000 secondary school students in Ontario, where school nutrition policies are mandatory, and in Alberta, where they are voluntary. The study took place during the 2013-14 school year.

It found that students in Alberta had a 16 per cent higher rate of sugar-sweetened beverage consumption than their counterparts in Ontario, where the sale of most sugar-sweetened drinks in secondary schools has been prohibited since 2011.

"These findings have implications on how we approach efforts to promote healthy dietary habits among adolescents," said Katelyn Godin, lead researcher and PhD candidate at Waterloo. "We need to devise strategies to improve the broader food environment so that healthier dietary choices are attractive and accessible, as well as improve students' food and nutrition-related attitudes, knowledge, values, and skills."

The study also found that students' meal and snack purchases outside of school and on weekends had a greater bearing on their beverage intake than their purchases in school food outlets. Godin believes this reflects how many teens spend their leisure time with friends, such as going out for food, to sporting and music events, or shopping -- all places where sugary drinks are readily available.

"Our findings suggest that while nutrition standards in schools could have an impact on sugar-sweetened beverage consumption, those standards alone won't be enough to dramatically curb adolescents' intake of these drinks," said Godin. "Given the important role of diet in chronic disease prevention, adolescents should be a priority group for intervention because poor dietary habits formed in childhood and adolescence often persist into adulthood."

According to previous studies, adolescents are the largest consumers of sugar-sweetened drinks in Canada, with many school age students consuming such beverages daily. Soft drinks and other sweetened beverages are linked to higher rates of obesity, cardiovascular disease, and a lower intake of vitamins and nutrients.

The research in the study was collected by the COMPASS research group at Waterloo, which aims to generate knowledge and evidence to advance youth health.

Credit: 
University of Waterloo

Stroke affects more than just the physical

MINNEAPOLIS - A new study looks at which problems affect people most after a stroke, and it provides a broader picture than many might expect. Stroke affects more than just physical functioning, according to a study published in the March 28, 2018, online issue of Neurology®, the medical journal of the American Academy of Neurology.

"After a stroke, people who have only mild disability can often have 'hidden' problems that can really affect their quality of life," said study author Irene L. Katzan, MD, MS, of the Cleveland Clinic in Ohio and a member of the American Academy of Neurology. "And for people with more disability, what bothers them the most? Problems with sleep? Depression? Fatigue? Not many studies have asked people how they feel about these problems, and we doctors have often focused just on physical disability or whether they have another stroke."

The study involved 1,195 people who had an ischemic stroke, or a stroke where blood flow to part of the brain is blocked. They were asked questions about their physical functioning, fatigue, anxiety, sleep problems, thinking skills such as planning and organizing, how much their pain affects other aspects of their life and their satisfaction with their current social roles and activities.

Participants took the questionnaires an average of 100 days after their stroke, and about a quarter of the participants needed help from a family member to fill out the questionnaires. Researchers also measured their level of disability.

The people with stroke had scores that were considerably worse than those in the general population in every area except sleep and depression. Not surprisingly, the area where the people with stroke were most affected was physical functioning, where 63 percent had scores considered meaningfully worse than those of the general population, with an average score of 59, where a score of 50 is considered the population average.
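The "score of 50 is the population average" convention is characteristic of T-scored instruments, in which the population standard deviation is 10 points (an assumption here; the article does not name the scoring metric). Under that convention, the stroke group's average of 59 sits nearly one standard deviation from the norm:

```python
def t_to_z(t_score):
    """Convert a T-score (population mean 50, SD 10) to standard-deviation units."""
    return (t_score - 50) / 10

def z_to_t(z):
    """Inverse conversion: standard-deviation units back to a T-score."""
    return 50 + 10 * z

# The stroke group's average physical-functioning score of 59:
print(t_to_z(59))  # almost one SD away from the population average
```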

On the question about whether they were satisfied with their social roles and activities, 58 percent of people with stroke had scores meaningfully worse than those of the general population.

"People may benefit from social support programs and previous studies have shown a benefit from efforts to improve the social participation of people with stroke, especially exercise programs," said Katzan.

The thinking skills of people with stroke in executive function, or planning and organizing, were also affected, with 46 percent having scores that were meaningfully worse than the population average.

"The social participation and executive functioning skills are areas that have not received a lot of attention in stroke rehabilitation," Katzan said. "We need to better understand how these areas affect people's well-being and determine strategies to help optimize their functioning."

Limitations of the study include that the questionnaires did not ask about other problems that can occur after stroke, such as communication issues. Also, the study participants had milder strokes on average than people with stroke overall and the average age of participants was 62, which is lower than the average age of 69 for people with stroke overall.

Credit: 
American Academy of Neurology

Ragweed casts shade on soy production

image: The experimental plots required weeding to maintain the right ragweed densities, and to remove other weeds.

Image: 
Amit Jhala

Ragweed, its pollen potent to allergy sufferers, might be more than a source of sneezes. In the Midwest, the plant may pose a threat to soybean production.

Scientists have found that ragweed can drastically reduce soybean yield.

"It wasn't really a weed we were worried about too much," says Ethann Barnes, a graduate research assistant in agronomy and horticulture at the University of Nebraska-Lincoln. "We didn't expect it to be this competitive."

Weeds compete with crops for light, water, and nutrients. Common ragweed, which is taller than soy, has historically been overlooked as a threat. And little is known about its impact on soy in the Midwest.

So, the scientists struck out to a soybean field near Mead, Nebraska. In 2015 and 2016, they planted soybean and ragweed in late spring. Within the experimental plots, ragweed density ranged from no plants (a weed-free control) to 12 plants per meter (about 39 inches) of the row.

The researchers had two goals: see if ragweed posed a serious threat to soybean, and see if there's a way to estimate the yield loss early in the growing season.

Barnes was surprised by how much the ragweed stifled the soybean in both years. The soybean crops did worse than in previous studies. One ragweed plant every 1.6 feet of soybean row decreased soybean yield by 76% in 2015, and by 40% in 2016. And soybean yield was reduced by 95% in 2015 and 80% in 2016 when common ragweed plants were grown only three inches apart in the soybean row.

During the experiment, there was plenty of water to go around for both plants. So, the scientists think ragweed mostly hurt soybean by starving it of sunlight.

"Whether I was presenting at conferences, or even just at my thesis defense, everyone was very surprised how big of a deal common ragweed could be," says Barnes.

What's more, it was very hard to predict early in the year how the soybean would fare. Barnes found that not until early August could he plug ragweed numbers into an equation and accurately predict what the soybean loss would be.
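The article does not reproduce the equation, but the standard tool in weed-competition research for this kind of prediction is Cousens' rectangular-hyperbola model, in which yield loss rises steeply at low weed densities and saturates at high ones. The sketch below uses that model with illustrative parameter values, not the ones fitted in this study:

```python
def yield_loss_pct(weeds_per_m, i=30.0, a=95.0):
    """Cousens rectangular-hyperbola yield-loss model.

    weeds_per_m -- weed density, plants per meter of crop row
    i -- percent yield loss per weed at very low density (illustrative value)
    a -- maximum percent yield loss at very high density (illustrative value)
    """
    return (i * weeds_per_m) / (1 + i * weeds_per_m / a)

for density in (0.5, 2, 6, 12):
    loss = yield_loss_pct(density)
    print(f"{density:>4} ragweed per m of row -> {loss:.1f}% predicted yield loss")
```

The saturating shape matches the study's pattern of losses that climb quickly with the first few plants per meter of row and then level off at high densities.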

Now, Barnes and his team are sharing this information with growers in the area. "The ultimate goal of this area of science is for growers to count the number of weeds or make a measurement in their field three weeks into the season. From there they could see whether it's financially a viable option to control their weeds or just leave them in the field," says Barnes. By knowing how much damage the weeds might do, farmers can weigh that loss against the cost of killing the weeds.

More studies will be needed to home in on the dynamics of ragweed--and other weed--growth. An end goal, he says, is to predict early in the season how weeds will impede crop yields, so farmers can make better decisions on how to manage them. Such estimates could help farmers know if, when, and how much herbicide to apply.

He hopes his study is a step toward that goal. "Hopefully it'll have an immediate impact for farmers, and advance the science of weed competition research."

Credit: 
American Society of Agronomy

NASA finds Tropical Storm Jelawat strengthening

image: NASA's Aqua satellite passed over Tropical Storm Jelawat on March 28 at 12:11 a.m. EDT (0411 UTC) and saw coldest cloud top temperatures (purple) around the storm's center.

Image: 
NASA JPL/Heidar Thrastarson

Infrared imagery from NASA's Aqua satellite revealed that Tropical Storm Jelawat was getting stronger as it moved through the Northwestern Pacific Ocean.

NASA's Aqua satellite passed over Jelawat on March 28 at 12:11 a.m. EDT (0411 UTC) and analyzed the storm in infrared light. Infrared light provides temperature data, which is important when trying to gauge storm strength: the higher the cloud tops, the colder they are, and the stronger the thunderstorms that produce them tend to be.

AIRS data showed coldest cloud top temperatures in thunderstorms flaring around Jelawat's center as cold as minus 63 degrees Fahrenheit (minus 53 degrees Celsius). Storms with cloud top temperatures that cold have the capability to produce heavy rainfall.
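The paired Fahrenheit/Celsius figures in these storm reports follow the standard conversion; a small helper makes them easy to verify:

```python
def c_to_f(celsius):
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

# The Jelawat cloud-top figure: -53 C is about -63 F.
print(f"{c_to_f(-53):.1f}")  # -> -63.4
```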

On March 28 at 11 a.m. EDT (1500 UTC) Jelawat's maximum sustained winds strengthened to 50 knots.

The center of the tropical storm was located near 15.0 degrees north latitude and 135.6 degrees east longitude, about 345 nautical miles north-northwest of Yap State. Jelawat was tracking northward at 9 knots.

The Joint Typhoon Warning Center forecast calls for Jelawat to intensify to hurricane force in 24 hours and then begin a weakening trend on March 31.

Credit: 
NASA/Goddard Space Flight Center

NASA finds Tropical Cyclone Iris sheared

image: NASA's Aqua satellite passed over Iris on March 27 at 10:15 a.m. EDT (1415 UTC) and revealed three fragmented areas of strong thunderstorms (red) where cloud top temperatures were as cold as or colder than minus 70 degrees Fahrenheit (minus 56.6 degrees Celsius).

Image: 
NASA/NRL

Tropical Cyclone Iris is being battered by wind shear so strong that it doesn't even look like a circular storm.

The Joint Typhoon Warning Center issued its final warning on Tropical Cyclone Iris as wind shear continued tearing the storm apart. Iris' exposed low-level center was difficult to find even on infrared imagery from NASA as wind shear stretched the storm out.

NASA's Aqua satellite passed over Iris on March 27 at 10:15 a.m. EDT (1415 UTC) and analyzed the storm in infrared light. The MODIS or Moderate Resolution Imaging Spectroradiometer instrument aboard NASA's Aqua satellite revealed three fragmented areas of thunderstorms where cloud top temperatures were as cold as or colder than minus 70 degrees Fahrenheit (minus 56.6 degrees Celsius). Cloud tops with temperatures that cold have the potential to generate very heavy rainfall. One area with those temperatures stretched from northwest to southeast while the other two areas were small and east of the center.

At 11 a.m. EDT (1500 UTC) on March 27 Iris was located near 20.5 degrees south latitude and 158.4 degrees east longitude. That's about 451 nautical miles west-northwest of Noumea, New Caledonia. Iris was moving to the south at 8 mph (7 knots/13 kph).

The Joint Typhoon Warning Center noted that Iris was weakening rapidly and will continue to do so as it moves over cooler waters and into areas of stronger vertical wind shear. The storm is expected to dissipate in the next day or two.

Credit: 
NASA/Goddard Space Flight Center

Detection of transcranial direct current stimulation deep in the living human brain

image: In this photo, MUSC Health stroke neurologist and physician-scientist Dr. Wayne Feng (left), with the help of Dr. Pratik Chhatbar (right), first author on the article, demonstrates transcranial direct current stimulation.

Image: 
Medical University of South Carolina

A defining characteristic of stroke is the loss of motor control due to structural damage in specific brain areas. In fact, motor impairments (or deficits) are the number one complication after stroke. Losing the ability to carry out basic bodily functions, such as speaking, walking and swallowing, can be devastating for stroke survivors. Unfortunately, there are few effective recovery options beyond physical and occupational therapy to stimulate brain re-learning. While many researchers have tried to identify effective new therapies to mitigate motor function impairment and enhance quality-of-life, discoveries have been lacking.

Because the neural circuitry of the brain operates via electrical signaling, applying small amounts of electrical current to the scalp -- a technique called transcranial direct current stimulation (tDCS) -- has been investigated as a means of modulating brain activity. However, although there are promising tDCS data, the optimal dosing remains unknown. Furthermore, it is unclear how much scalp-applied current actually penetrates the brain. To answer these questions, a team of MUSC investigators led by stroke neurologist and physician-scientist Wayne Feng, M.D., MS, attempted something that has never before been tried: they directly measured tDCS-generated electric fields (EFs) in vivo using deep brain stimulation (DBS) electrodes that were already implanted in patients with Parkinson's disease. Their findings provide direct evidence that helps answer some of the field's long-standing, daunting questions.

"There are tons of studies on tDCS for stroke, depression, and pain control, but there's a lot of skepticism about using tDCS clinically because the data are mixed," explains Feng. The main problem is that it is extremely difficult to measure electrical activity deep inside the brain of a living person, and differences between the brains of living people and those of the animals and cadavers that previous studies have used are significant. Although models predict that tDCS would generate EFs throughout the brain, there is no direct evidence demonstrating or measuring these electric fields in a living person's brain.

"Until now, we didn't know what the optimal dose of current was because we didn't how much current was really getting through the scalp and skull and penetrating inside the brain. The skull bone and scalp thickness, the shape of the skull, brain size -- all of these things affect it. So, without really being able to measure the current in live patients, we were simply guessing," says Feng.

Then, Feng recognized a novel opportunity to directly measure whether tDCS generates EFs in deep brain areas among patients with movement disorders such as Parkinson's disease, who are often treated by implanting DBS electrodes.

"These patients provide a natural experimental model that gave us a chance to use their implanted electrodes to record how much externally applied tDCS current actually reaches the thalamic and subthalmic regions," says Feng.

But conducting this experiment was no simple task. Surgery to implant DBS electrodes is conducted in two steps. First, electrodes are inserted and the patient is sent home for one to two weeks while they stabilize. Then, the patient returns to connect the electrodes in their brain to a battery implanted in the chest wall. Feng's team cleverly took advantage of a 15-20-minute window during the second surgical procedure, before the surgeon connected the battery to the electrodes, to connect the electrodes to a recording device while applying the direct current through the scalp at different current levels using two different montages (pad placements). This way, the experiment could be conducted without deviating from standard clinical care or jeopardizing patient safety.

"The normal variation in surgical time for this type of procedure is 15-20 minutes, so that's all generous time we could have from the neurosurgeon," says Feng. "We mapped out the EF strength using the permutation of eight contact leads (four on each electrode), and then we changed the position of the tDCS pads on the scalp to see if moving the pad location (the montage) changed the EF distribution. It was quite challenging to do. It took us over two years to collect data on five patients."

Although arduous to collect, these data represent the first report in living humans of scalp-delivered tDCS voltage measurements across DBS electrodes at the subcortical level. They demonstrate that scalp tDCS produces an EF deep in the brain in a dose-dependent and montage-specific manner. In other words, 4 mA current produced about twice as much voltage change across electrodes as 2 mA current and the bi-temporal montage provided higher voltage differences than the occipitofrontal montage. "We showed that the signal increases with more amplitude and that the increase was proportional and montage dependent. This is important when you are treating different diseases, because the location of the pads really matters. Different pad placements create different EFs with different strengths," says Feng.
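Given voltage readings across contacts at a known spacing, the local electric field can be estimated as the voltage difference divided by the distance between contacts. A minimal sketch of that arithmetic; the contact spacing and voltage values below are hypothetical illustrations, not the study's measurements:

```python
def field_mV_per_mm(delta_v_mV, spacing_mm):
    """Estimate local electric field strength as voltage difference / distance."""
    return delta_v_mV / spacing_mm

# Hypothetical recordings illustrating the dose dependence the team reported:
# doubling the scalp current doubles the recorded voltage difference, so the
# estimated field scales linearly with dose.
for current_mA, delta_v in ((2, 0.2), (4, 0.4)):
    field = field_mV_per_mm(delta_v, spacing_mm=2.0)
    print(f"{current_mA} mA scalp current -> ~{field:.2f} mV/mm estimated field")
```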

This direct evidence of EFs occurring inside the living human brain when tDCS is applied through the scalp dispels the previous belief that most of the current is shunted before it penetrates the skull and only reaches the cortex. In addition, it provides real human data against which modeling assumptions can be tested. Feng anticipates that the information from this study and his next investigations to validate and optimize current tDCS modeling will greatly advance the field.

"I hope that, in the future, we can get an image of the patient's brain and apply a computational model to indicate the exact amount of current and montage that a stroke patient needs for motor recovery. It would be a precision neuro-modulation approach for stroke recovery," says Feng.

Feng also points out that, although his focus is motor function recovery after stroke, tDCS has also been studied for treating various disease conditions, such as depression, and for post-surgical pain control to reduce the use of pain medication.

Credit: 
Medical University of South Carolina