Tech

Study first to demonstrate brain mechanisms that give The Iceman unusual resistance to cold

image: Dr. Otto Muzik, professor in the School of Medicine at Wayne State University, prepares for a study with Wim Hof, 'The Iceman,' to better understand how his brain responds during exposure to cold.

Image: 
Wayne State University School of Medicine

DETROIT - Dutch adventurer Wim Hof is known as "The Iceman" for good reason. Hof established several world records for prolonged resistance to cold exposure, an ability he attributes to a self-developed set of techniques of breathing and meditation -- known as the Wim Hof Method -- that have been covered by the BBC, CNN, National Geographic and other global media outlets. Yet, how his brain responds during cold exposure and what brain mechanisms may endow him with this resistance have not been studied -- until now.

Wayne State University School of Medicine professors Otto Muzik, Ph.D., and Vaibhav Diwadkar, Ph.D., changed that. Their publication, "Brain Over Body: A study on the willful regulation of autonomic function during cold exposure," published in the journal NeuroImage, is the first to study how The Iceman's brain responds during experimentally controlled whole-body cold exposure. These investigations are part of the scientists' series of seminal studies launched in 2014 on how the human brain responds to thermoregulatory challenges. The results document compelling brain processes in The Iceman and present intriguing possibilities for how his techniques might exert positive effects related to disorders of the immune system and even psychiatry.

Over three days, Muzik and Diwadkar studied Hof's brain and body functions using two distinct imaging techniques -- including functional magnetic resonance imaging (fMRI) to study his brain and positron emission tomography (PET) to study his body. During the studies, Hof wore a specifically designed whole-body suit the researchers could infuse with temperature-controlled water while the imaging data were acquired in order to relate changes in his biology to cold exposure.

The Iceman's results, when compared with those of a group of healthy comparison participants, were startling.

Practice of the Wim Hof Method made Hof's skin temperature relatively invariant to cold exposure, a finding the researchers attributed to his increased sympathetic innervation and glucose consumption in intercostal muscle revealed by PET imaging. The method appeared to allow him to generate heat that dissipates to lung tissue and warms circulating blood in the pulmonary capillaries.

"The willful regulation of skin temperature -- and, by implication, core body temperature, even when the body is being stressed with cold -- is an unusual occurrence and may explain his resistance to frostbite," said Muzik, professor of pediatrics, of neurology and of radiology.

"From our previous studies, we expected The Iceman to show significant brain activations in a region known as the anterior insula, where the brain's higher thermoregulatory centers are located. However, we observed more substantial differences in an area called periaqueductal gray matter, located in the upper brainstem. This area is associated with brain mechanisms for the control of sensory pain and is thought to implement this control through the release of opioids and cannabinoids," Muzik added.

This last set of results is striking -- not only for what it reveals about The Iceman, but even more so for what it implies about the relevance of the Wim Hof Method for behavioral and physical health. The researchers hypothesize that by generating a stress-induced analgesic response in periaqueductal gray matter, the Wim Hof Method may promote the spontaneous release of opioids and cannabinoids in the brain. This effect has the potential to create a feeling of well-being, mood control and reduced anxiety.

"The practice of the Wim Hof Method may lead to tonic changes in autonomous brain mechanisms, a speculation that has implications for managing medical conditions ranging from diseases of the immune system to more intriguingly psychiatric conditions such as mood and anxiety disorders," said Diwadkar, professor of psychiatry and behavioral neurosciences. "We are in the process of implementing interventional studies that will evaluate these questions using behavioral and biological assessments. These possibilities are too intriguing to ignore."

"It is not mysterious to imagine that what we practice can change our physiology. The goal of our research is to ascertain the mechanisms underlying these changes using objective and scientific analyses, and to evaluate their relevance for medicine," Muzik added.

Credit: 
Wayne State University - Office of the Vice President for Research

Stunning footage shows how drones can boost turtle conservation

video: A short video of turtle drone footage.

Image: 
University of Exeter

Drones are changing the face of turtle research and conservation, a new study shows.

By providing new ways to track turtles over large areas and in hard-to-reach locations, the drones have quickly become a key resource for scientists.

The research, led by the University of Exeter, also says stunning drone footage can boost public interest and involvement in turtle conservation.

"Drones are increasingly being used to gather data in greater detail and across wider areas than ever before," said Dr Alan Rees, of the Centre for Ecology and Conservation on the University of Exeter's Penryn Campus in Cornwall.

"Satellite systems and aircraft transformed turtle conservation, but drones offer cheaper and often better ways to gather information.

"We are learning more about their behaviour and movements at sea, and drones also give us new avenues for anti-poaching efforts."

The paper warns that, despite the benefits, drones cannot fully replace ground work and surveys.

And it says more research is required to understand if and how turtles perceive drones during flight, and whether this has an impact on them.

Credit: 
University of Exeter

A new way to combine soft materials

image: An unmodified hydrogel (left) peels off easily from an elastomer. A chemically-bonded hydrogel and elastomer (right) are tough to peel apart, leaving residue behind.

Image: 
(Image courtesy of Suo Lab/Harvard SEAS)

Every complex human tool, from the first spear to the latest smartphone, has contained multiple materials wedged, tied, screwed, glued or soldered together. But the next generation of tools, from autonomous squishy robots to flexible wearables, will be soft. Combining multiple soft materials into a complex machine requires an entirely new toolbox -- after all, there's no such thing as a soft screw.

Current methods to combine soft materials are limited, relying on glues or surface treatments that can restrict the manufacturing process. For example, it doesn't make much sense to apply glue or perform a surface treatment before each drop of ink is deposited during a 3D printing session. But now, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a new method to chemically bond multiple soft materials independent of the manufacturing process. In principle, the method can be applied in any manufacturing process, including 3D printing and coating. This technique opens the door to manufacturing more complex soft machines.

The research is published in Nature Communications.

"This technique allows us to bond various hydrogels and elastomers in various manufacturing processes without sacrificing the properties of the materials," said Qihan Liu, a postdoctoral fellow at SEAS and co-first author of the paper. "We hope that this will pave the way for rapid-prototyping and mass-producing biomimetic soft devices for healthcare, fashion and augmented reality."

The researchers focused on the two most-used building blocks for soft devices, hydrogels (conductors) and elastomers (insulators). To combine the materials, the team mixed chemical coupling agents into the precursors of both hydrogels and elastomers. The coupling agents look like molecular hands with small tails. As the precursors form into material networks, the tail of the coupling agents attaches to the polymer networks, while the hand remains open. When the hydrogel and elastomer are combined in the manufacturing process, the free hands reach across the material boundary and shake, creating chemical bonds between the two materials. The timing of the "handshake" can be tuned by multiple factors such as temperature and catalysts, allowing different amounts of manufacturing time before bonding happens.

The researchers showed that the method can bond two pieces of cast material like glue, but without applying a glue layer at the interface. The method also allows coating and printing of different soft materials in different sequences. In all cases, the hydrogel and elastomer created a strong, long-lasting chemical bond.

"The manufacturing of soft devices involves several ways of integrating hydrogels and elastomers, including direct attachment, casting, coating, and printing," said Canhui Yang, a postdoctoral fellow at SEAS and co-first author of the paper. "Whereas every current method only enables two or three manufacturing methods, our new technique is versatile and enables all the various ways to integrate materials."

The researchers also demonstrated that hydrogels -- which, as the name implies, are mostly water -- can be made heat resistant at high temperatures using a bonded coating, extending the temperature range over which hydrogel-based devices can be used. For example, a hydrogel-based wearable device can now be ironed without boiling.

"Several recent findings have shown that hydrogels can enable electrical devices well beyond previously imagined," said Zhigang Suo, Allen E. and Marilyn M. Puckett Professor of Mechanics and Materials at SEAS and senior author of the paper. "These devices mimic the functions of muscle, skin, and axon. Like integrated circuits in microelectronics, these devices function by integrating dissimilar materials. This work enables strong adhesion between soft materials in various manufacturing processes. It is conceivable that integrated soft materials will enable spandex-like touchpads and displays that one can wear, wash, and iron."

Credit: 
Harvard John A. Paulson School of Engineering and Applied Sciences

New mathematical framework establishes the risk of dramatic collapses of real networks

image: Different realizations of the initial damage are here shown to be more or less damaging for a network. Panel (a) shows an initial damage of a connected network affecting exclusively two out of the N = 8 nodes of the network (blue nodes indicate damaged nodes; green nodes indicate undamaged nodes). Panel (b) shows that this initial damage is very disruptive for the network and results in a giant component of size R = 1. Panel (c) shows another initial damage configuration of the same network which affects only two nodes of the network. In this case panel (d) shows that the effect of the damage is reduced and most of the network remains connected, resulting in a giant component of size R = 6.

Image: 
Ginestra Bianconi

A theoretical framework explaining the risk of rare events causing major disruptions in complex networks, such as a blackout in a power grid, has been proposed by a mathematician at Queen Mary University of London.

Rare events can abruptly dismantle a network, with much more severe consequences than usual, and understanding their probability is essential to reducing the chances of them happening.

A network is formed by a set of nodes and the links between them. For instance, power grids are networks whose nodes are power stations connected by the electrical grid. Similarly, an ecological network, to which the framework could also be applied, is formed by species connected by ecological interactions such as a predator-prey relationship.

Usually, if some of the nodes are damaged, networks like these are robust enough to remain functional. On rare occasions, however, specific damage can lead to the dismantling of the whole network and cause major blackouts or ecological regime shifts, such as an ecological collapse.

Mathematicians often use percolation theory, a well-developed branch of applied mathematics that studies the response of a network to the damage of a random fraction of its nodes, to shed light on these phenomena. However, this theory is able only to characterise the average response of a network to random damage. Therefore the prediction of the average behaviour cannot be used to estimate the risk of a network collapse as a result of a rare event.
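To make that limitation concrete, the following is a minimal numerical sketch of node percolation on a random graph: it removes a random fraction of nodes many times and compares the typical size of the giant component with the worst case observed. The use of networkx and all parameters here are illustrative assumptions; the study itself develops an analytical framework rather than a simulation.

```python
# Minimal sketch: empirical node percolation on a random graph (illustrative only).
import random
import networkx as nx

def giant_component_size(graph, damaged_nodes):
    """Size of the largest connected component after removing the damaged nodes."""
    g = graph.copy()
    g.remove_nodes_from(damaged_nodes)
    if g.number_of_nodes() == 0:
        return 0
    return max(len(c) for c in nx.connected_components(g))

G = nx.erdos_renyi_graph(n=100, p=0.05, seed=1)   # assumed toy network
damage_fraction = 0.3                              # assumed fraction of damaged nodes

sizes = []
for _ in range(2000):
    damaged = random.sample(list(G.nodes), int(damage_fraction * G.number_of_nodes()))
    sizes.append(giant_component_size(G, damaged))

# The mean is what standard percolation theory characterises; the minimum hints at
# rare damage configurations that are far more disruptive than the average case.
print("mean giant component size:", sum(sizes) / len(sizes))
print("worst observed giant component size:", min(sizes))
```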

This study establishes a large deviation theory of percolation that characterises the response of a network to rare events. The proposed theoretical framework correctly captures the effect of rare damage configurations that can be observed in real networks. Interestingly, the work reveals that discontinuous percolation transitions - abrupt collapses of a network - occur as soon as rare events are taken into consideration.

The theoretical framework could enable strategies to be developed to sustain networks by identifying which nodes need to be preserved to prevent a collapse.

Ginestra Bianconi, author of the study, said: "There is an urgent need to evaluate the risk of collapse in response to rare configurations of initial damage. This result sheds light on the hidden fragility of networks and their risk of a sudden collapse and could be especially useful for understanding mechanisms to avoid the catastrophic dismantling of real networks."

She added: "It is important to estimate the risk of a dramatic cascade of failures because you want to reduce the risk. In the design of a power-grid that must provide the energy to an entire country you want to avoid rare events in which you have major blackouts, or in the design of preservation strategies of an ecosystem that is currently diversified and prosperous you want to know what is the probability of a sudden ecological collapse and mass extinction. Therefore it is necessary to understand this risk of these events happening."

The present large deviation study of percolation considers exclusively node percolation on single networks like those mentioned. However, Ginestra Bianconi suggests the outlined methodology could be extended to the study of more detailed models of propagation of event failures.

Credit: 
Queen Mary University of London

Search for first stars uncovers 'dark matter'

A team of astronomers led by Prof. Judd Bowman of Arizona State University unexpectedly stumbled upon "dark matter," the most mysterious building block of outer space, while attempting to detect the earliest stars in the universe through radio wave signals, according to a study published this week in Nature.

The idea that these signals implicate dark matter is based on a second Nature paper published this week, by Prof. Rennan Barkana of Tel Aviv University, which suggests that the signal is proof of interactions between normal matter and dark matter in the early universe. According to Prof. Barkana, the discovery offers the first direct proof that dark matter exists and that it is composed of low-mass particles.

The signal, recorded by a novel radio telescope called EDGES, dates to 180 million years after the Big Bang.

What the universe is made of

"Dark matter is the key to unlocking the mystery of what the universe is made of," says Prof. Barkana, Head of the Department of Astrophysics at TAU's School of Physics and Astronomy. "We know quite a bit about the chemical elements that make up the earth, the sun and other stars, but most of the matter in the universe is invisible and known as 'dark matter.' The existence of dark matter is inferred from its strong gravity, but we have no idea what kind of substance it is. Hence, dark matter remains one of the greatest mysteries in physics.

"To solve it, we must travel back in time. Astronomers can see back in time, since it takes light time to reach us. We see the sun as it was eight minutes ago, while the immensely distant first stars in the universe appear to us on earth as they were billions of years in the past."

Prof. Bowman and colleagues reported the detection of a radio wave signal at a frequency of 78 megahertz. The width of the observed profile is largely consistent with expectations, but they also found it had a larger amplitude (corresponding to deeper absorption) than predicted, indicating that the primordial gas was colder than expected.

Prof. Barkana suggests that the gas cooled through the interaction of hydrogen with cold, dark matter.

"Tuning in" to the early universe

"I realized that this surprising signal indicates the presence of two actors: the first stars, and dark matter," says Prof. Barkana. "The first stars in the universe turned on the radio signal, while the dark matter collided with the ordinary matter and cooled it down. Extra-cold material naturally explains the strong radio signal."

Physicists expected that any such dark matter particles would be heavy, but the discovery indicates low-mass particles. Based on the radio signal, Prof. Barkana argues that the dark-matter particle is no heavier than several proton masses. "This insight alone has the potential to reorient the search for dark matter," says Prof. Barkana.

Once stars formed in the early universe, their light was predicted to have penetrated the primordial hydrogen gas, altering its internal structure. This would cause the hydrogen gas to absorb photons from the cosmic microwave background, at the specific wavelength of 21 cm, imprinting a signature in the radio spectrum that should be observable today at radio frequencies below 200 megahertz. The observation matches this prediction except for the unexpected depth of the absorption.
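As a quick consistency check on the numbers above, the reported 78-megahertz signal can be converted into a redshift using the 21-centimetre line's rest frequency of about 1420 megahertz. The snippet below is plain arithmetic added for illustration, not part of the study's analysis.

```python
# Redshift implied by the EDGES detection frequency (illustrative arithmetic).
rest_mhz = 1420.4      # rest frequency of the hydrogen 21-cm line
observed_mhz = 78.0    # frequency of the reported absorption signal

z = rest_mhz / observed_mhz - 1
print(f"redshift z = {z:.1f}")   # about 17, i.e. roughly 180 million years after the Big Bang
```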

Prof. Barkana predicts that the dark matter produced a very specific pattern of radio waves that can be detected with a large array of radio antennas. One such array is the SKA, the largest radio telescope in the world, now under construction. "Such an observation with the SKA would confirm that the first stars indeed revealed dark matter," concludes Prof. Barkana.

Credit: 
American Friends of Tel Aviv University

Forage-based diets on dairy farms produce nutritionally enhanced milk, finds industry-backed study

MORRIS, MINNESOTA - Omega-6 and omega-3 fatty acids are essential human nutrients, yet consuming too much omega-6 and too little omega-3 can increase the risk of cardiovascular disease, obesity, and diabetes. Today, Americans consume 10 to 15 grams of omega-6 for every gram of omega-3.

Previous studies have shown that consuming organic beef or organic dairy products lowers dietary intakes of omega-6, while increasing intakes of omega-3 and conjugated linoleic acid (CLA), another valuable, heart-healthy fatty acid.

In a collaborative research project including the University of Minnesota, Johns Hopkins University, Newcastle University in England, Southern Cross University in Lismore, NSW, Australia, and the Aarhus University Hospital in Denmark, researchers have found that cows fed a 100% organic grass and legume-based diet produce milk with elevated levels of omega-3 and CLA, which provides a markedly healthier balance of fatty acids. The improved fatty acid profile in grass-fed organic milk and dairy products (hereafter, "grassmilk") brings the omega-6/omega-3 ratio to nearly 1 to 1, compared to 5.7 to 1 in conventional whole milk.

Co-author Dr. Bradley Heins, Associate Professor of Dairy Science at the University of Minnesota's West Central Research and Outreach Center, points out that "With growing consumer demand for organic dairy products, producers may be able to expand their profitability and market share by converting to grass-based pasture and forage-feeding systems."

Findings from the study "Enhancing the Fatty Acid Profile of Milk through Forage-Based Rations, with Nutrition Modeling of Dietary Outcomes," published in Food Science and Nutrition, compared the fatty acid profile of milk from cows managed under three systems in the United States:

1. "Grassmilk" cows receive an essentially 100% organic grass and legume forage-based diet, via pasture and stored feeds like hay and silage.

2. "Organic" cows receive, on average, about 80% of their daily Dry Matter Intake (DMI) from forage-based feeds and 20% from grain and concentrates.

3. "Conventional" cows are fed rations in which forage-based feeds account for an estimated 53% of daily DMI, with the other 47% coming from grains and concentrates. Conventional management accounts for over 90% of the milk cows on U.S. farms.

Grassmilk provides by far the highest level of omega-3s--0.05 grams per 100 grams of milk (g/100 g), compared to 0.02 g/100 g in conventional milk - a 147% increase in omega-3s. Grassmilk also contains 52% less omega-6 than conventional milk, and 36% less omega-6 than organic milk. In addition, the research team found that grassmilk has the highest average level of CLA--0.043 g/100 g of milk, compared to 0.019 g/ 100 g in conventional milk and 0.023 g/100 g in organic.

Implications for Public Health

Daily consumption of grassmilk dairy products could potentially improve U.S. health trends. In addition to the well-established metabolic and cardiovascular benefits of omega-3 fatty acids and CLA, there are additional benefits for pregnant and lactating women, infants, and children. Various forms of omega-3 fatty acids play critical roles in the development of eyes, the brain, and the nervous system. Adequate omega-3 intakes can also slow the loss of cognitive function among the elderly.

In describing the public health implications of the study's main findings, co-author Charles Benbrook, a Visiting Scholar at the Bloomberg School of Public Health at Johns Hopkins University, points out that "The near-perfect balance of omega-6 and omega-3 fatty acids in grassmilk dairy products will help consumers looking for simple, lifestyle options to reduce the risk of cardiovascular and other metabolic diseases."

Source of Samples and Funding

The team analyzed over 1,160 samples of whole grassmilk taken over three years from on-farm bulk tanks prior to any processing. All samples came from farmer members of CROPP Cooperative and were tested by an independent laboratory.

Credit: 
University of Minnesota

Cancer metastasis: Cell polarity matters

Not only the number of migrating cancer cells determines the risk for metastasis but also their characteristics, scientists from the German Cancer Research Center (DKFZ) have now reported in Nature Communications. For circulating cancer cells to be able to invade tissues and settle at other sites in the body, they have to exhibit a specific polarity. This discovery might in future help to better predict individual risk for metastasis and find appropriate therapies that can reduce it.

Metastatic tumors, the dreaded "daughter tumors", form when cancer cells break away from a tumor and migrate via the lymph and the bloodstream in order to finally settle at some distant site in the body. However, the quantity of circulating cancer cells in the body is not the only factor that determines a patient's risk of developing metastatic sites. "Some patients display high quantities of circulating tumor cells and have no or only a few metastatic sites while in others who suffer from many metastases, hardly any migrating tumor cells can be found," said Mathias Heikenwälder from the German Cancer Research Center (Deutsches Krebsforschungszentrum, DKFZ) in Heidelberg.

The team led by Heikenwälder has therefore taken a closer look at the properties of migrating cancer cells. In human cancer cells as well as in patients with different types of cancer, and also in mice, they observed that a portion of the circulating cancer cells exhibit a specific polarity. "Under the microscope, this looks as if the cells had a kind of nose," Heikenwälder described. Two cytoskeletal proteins called ezrin and merlin play a key role in the formation of this nose. Furthermore, the scientists also found that the number of freely circulating tumor cells exhibiting this special polarity correlates with the risk of developing metastasis, both in human tumor cell lines and in mice.

"This polarity seems to help the free cancer cells return from the blood vessels into body tissue," explains Anna Lorentzen, who is the first author of the publication. With the polarized end, i.e., with the nose, the cells attach to the endothelial layer lining the interior of the vessels. Subsequently, the pole is shifted to the side facing the attachment site and the tumor cell migrates through the endothelial layer into the tissue.

As a cross-check, the researchers used a cell-biological trick to block polarization of the circulating cells. Both in culture and in mice, the manipulated cells were no longer able to attach efficiently to endothelial cells.

With this discovery, the DKFZ researchers have not only found a new mechanism promoting the formation of metastatic sites. "We also have found a link that might in future be used to better predict and even reduce the risk for metastasis in cancer patients," Heikenwälder stressed.

Credit: 
German Cancer Research Center (Deutsches Krebsforschungszentrum, DKFZ)

Michelob Ultra Pure Gold - organic light beer

Michelob ULTRA has announced Michelob ULTRA Pure Gold, which is made with organic grains but still has only 85 calories and 2.5 grams of carbohydrates. Like Michelob ULTRA, the new brew will also be free from artificial colors and flavorings. The brand's sustainability efforts extend to the packaging, which has received the Sustainable Forestry Initiative's stamp of approval.

A marriage of light-manipulation technologies

image: This image gives a close-up view of a metasurface-based flat lens (square piece) integrated onto a MEMS scanner. Integration of MEMS devices with metalenses will help manipulate light in sensors by combining the strengths of high-speed dynamic control and precise spatial manipulation of wave fronts. This image was taken with an optical microscope at Argonne's Center for Nanoscale Materials.

Image: 
Argonne National Laboratory

Researchers have, for the first time, integrated two technologies widely used in applications such as optical communications, bio-imaging and Light Detection and Ranging (LIDAR) systems that scan the surroundings of self-driving cars and trucks.

In the collaborative effort between the U.S. Department of Energy’s (DOE) Argonne National Laboratory and Harvard University, researchers successfully crafted a metasurface-based lens atop a Micro-Electro-Mechanical System (MEMS) platform. The result is a new infrared light-focusing system that combines the best features of both technologies while reducing the size of the optical system.

Metasurfaces can be structured at the nanoscale to work like lenses. These metalenses were pioneered by Federico Capasso, Harvard’s Robert L. Wallace Professor of Applied Physics, and his group at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS). The lenses are rapidly finding applications because they are much thinner and less bulky than existing lenses, and can be made with the same technology used to fabricate computer chips. MEMS devices, meanwhile, are small mechanical devices that consist of tiny, movable mirrors.

“These devices are key today for many technologies. They have become technologically pervasive and have been adopted for everything from activating automobile air bags to the global positioning systems of smart phones,” said Daniel Lopez, Nanofabrication and Devices Group Leader at Argonne’s Center for Nanoscale Materials, a DOE Office of Science User Facility.

Lopez, Capasso and four co-authors describe how they fabricated and tested their new device in an article in APL Photonics, titled “Dynamic metasurface lens based on MEMS technology.” The device measures 900 microns in diameter and 10 microns in thickness (a human hair is approximately 50 microns thick).

The collaboration’s ongoing work to further develop novel applications for the two technologies is conducted at Argonne’s Center for Nanoscale Materials, SEAS and the Harvard Center for Nanoscale Systems, which is part of the National Nanotechnology Coordinated Infrastructure.

In the technologically merged optical system, MEMS mirrors reflect scanned light, which the metalens then focuses without the need for an additional optical component such as a focusing lens. The challenge that the Argonne/Harvard team overcame was to integrate the two technologies without hurting their performance.

The eventual goal would be to fabricate all components of an optical system — the MEMS, the light source and the metasurface-based optics — with the same technology used to manufacture electronics today.

“Then, in principle, optical systems could be made as thin as credit cards,” Lopez said.

These lens-on-MEMS devices could advance the LIDAR systems used to guide self-driving cars. Current LIDAR systems, which scan for obstacles in their immediate proximity, are, by contrast, several feet in diameter.

“You need specific, big, bulky lenses, and you need mechanical objects to move them around, which is slow and expensive,” said Lopez.

“This first successful integration of metalenses and MEMS, made possible by their highly compatible technologies, will bring high speed and agility to optical systems, as well as unprecedented functionalities,” said Capasso.

Credit: 
DOE/Argonne National Laboratory

Saline use on the decline at Vanderbilt following landmark studies

Vanderbilt University Medical Center is encouraging its medical providers to stop using saline as intravenous fluid therapy for most patients, a change provoked by two companion landmark studies released today that are anticipated to improve survival and decrease kidney complications.

Saline, used in medicine for more than a century, contains high concentrations of sodium chloride, which is similar to table salt. Vanderbilt researchers found that patients do better if, instead, they are given balanced fluids that closely resemble the liquid part of blood.

"Our results suggest that using primarily balanced fluids should prevent death or severe kidney dysfunction for hundreds of Vanderbilt patients and tens of thousands of patients across the country each year," said study author Matthew Semler, MD, MSc, assistant professor of Medicine at Vanderbilt University School of Medicine.

"Because balanced fluids and saline are similar in cost, the finding of better patient outcomes with balanced fluids in two large trials has prompted a change in practice at Vanderbilt toward using primarily balanced fluids for intravenous fluid therapy."

The Vanderbilt research, published today in the New England Journal of Medicine, examined over 15,000 intensive care patients and over 13,000 emergency department patients who were assigned to receive saline or balanced fluids if they required intravenous fluid.

In both studies, the incidence of serious kidney problems or death was about 1 percent lower in the balanced fluids group compared to the saline group.

"The difference, while small for individual patients, is significant on a population level. Each year in the United States, millions of patients receive intravenous fluids," said study author Wesley Self, MD, MPH, associate professor of Emergency Medicine.

"When we say a 1 percent reduction that means thousands and thousands of patients would be better off," he said.

The authors estimate this change may lead to at least 100,000 fewer patients suffering death or kidney damage each year in the US.
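A back-of-the-envelope calculation shows how a roughly one-percentage-point absolute reduction scales to an estimate of that size. The patient count below is an assumed, illustrative figure, not a number reported by the studies.

```python
# Illustrative scaling of a ~1 percentage-point absolute risk reduction.
patients_per_year = 10_000_000      # assumption: US patients receiving IV fluids each year
absolute_risk_reduction = 0.01      # ~1 percentage point, as reported in the two trials

fewer_adverse_outcomes = patients_per_year * absolute_risk_reduction
print(f"about {fewer_adverse_outcomes:,.0f} fewer deaths or kidney injuries per year")
# prints about 100,000, the order of magnitude the authors estimate
```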

"Doctors have been giving patients IV fluids for over a hundred years and saline has been the most common fluid patients have been getting," said study author Todd Rice, MD, MSc, associate professor of Medicine.

"With the number of patients treated at Vanderbilt every year, the use of balanced fluids in patients could result in hundreds or even thousands of fewer patients in our community dying or developing kidney failure. After these results became available, medical care at Vanderbilt changed so that doctors now preferentially use balanced fluids," he said.

Credit: 
Vanderbilt University Medical Center

Man-made earthquake risk reduced if fracking is 895m from faults

image: Miles Wilson, PhD student, Durham University, UK, who has led research showing that the risk of man-made earthquakes due to fracking is greatly reduced if high-pressure fluid injection used to crack underground rocks is 895m away from faults in the Earth's crust.

Image: 
Durham University

The risk of man-made earthquakes due to fracking is greatly reduced if high-pressure fluid injection used to crack underground rocks is 895m away from faults in the Earth's crust, according to new research.

The recommendation, from the ReFINE (Researching Fracking) consortium, is based on published microseismic data from 109 fracking operations carried out predominantly in the USA.

Jointly led by Durham and Newcastle Universities, UK, the research looked at reducing the risk of reactivating geological faults by fluid injection in boreholes.

Researchers used microseismic data to estimate how far fracking-induced fractures in rock extended horizontally from borehole injection points.

The results indicated there was a one per cent chance that fractures from fracking activity could extend horizontally beyond 895m in shale rocks.

There was also a 32 per cent chance of fractures extending horizontally beyond 433m, which had been previously suggested as a horizontal separation distance between fluid injection points and faults in an earlier study.

The research is published in the journal Geomechanics and Geophysics for Geo-Energy and Geo-Resources.

Fracking - or hydraulic fracturing - is a process in which rocks are deliberately fractured to release oil or gas by injecting highly pressurised fluid into a borehole. This fluid is usually a mixture of water, chemicals and sand.

In 2011 tremors in Blackpool, UK, were caused when injected fluid used in the fracking process reached a previously unknown geological fault at the Preese Hall fracking site.

Fracking is now recommencing onshore in the UK after it was halted because of fracking-induced earthquakes.

Research lead author Miles Wilson, a PhD student in Durham University's Department of Earth Sciences, said: "Induced earthquakes can sometimes occur if fracking fluids reach geological faults. Induced earthquakes can be a problem and, if they are large enough, could damage buildings and put the public's safety at risk.

"Furthermore, because some faults allow fluids to flow along them, there are also concerns that if injected fluids reach a geological fault there is an increased risk they could travel upwards and potentially contaminate shallow groundwater resources such as drinking water.

"Our research shows that this risk is greatly reduced if injection points in fracking boreholes are situated at least 895m away from geological faults."

The latest findings go further than a 2017 ReFINE study which recommended a maximum distance of 433m between horizontal boreholes and geological faults. That research was based upon numerical modelling in which a number of factors, including fluid injection volume and rate, and fracture orientation and depth, were kept constant.

Researchers behind the latest study said that changing these parameters might lead to different horizontal extents of fractures from fluid injection points.

The researchers added that this did not mean the modelling results of the previous study were wrong. Instead they said the previous study was approaching the same problem using a different method and the new study provided further context.

In the latest research the researchers used data from previous fracking operations to measure the distance between the furthest detected microseismic event - a small earthquake caused by hydraulic fracturing of the rock or fault reactivation - and the injection point in the fracking borehole.

From the 109 fracking operations analysed, the researchers found that the horizontal extent reached by hydraulic fractures ranged from 59m to 720m.

There were 12 examples of fracking operations where hydraulic fractures extended beyond the 433m proposed in the 2017 study.

According to the new study, the chance of a hydraulic fracture extending beyond 433m in shale was 32 per cent and beyond 895m was one per cent.
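Exceedance figures like these come from a statistical treatment of the measured horizontal extents. Purely as an illustration of that kind of calculation, the sketch below fits a log-normal distribution to synthetic extents and reads off the probability of exceeding the two separation distances. The synthetic data and the choice of distribution are assumptions, not the study's dataset or method, so the printed values will not reproduce the 32 per cent and one per cent figures exactly.

```python
# Illustrative exceedance-probability calculation on synthetic fracture extents.
import numpy as np
from scipy import stats

# Synthetic horizontal extents (metres), standing in for the 109 measured operations.
extents_m = np.random.default_rng(0).lognormal(mean=5.8, sigma=0.5, size=109)

shape, loc, scale = stats.lognorm.fit(extents_m, floc=0)
fitted = stats.lognorm(shape, loc=loc, scale=scale)

for distance in (433, 895):
    print(f"P(extent > {distance} m) = {fitted.sf(distance):.1%}")
```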

The research also found that fracking operations in shale rock generally had their furthest detected microseismic events at greater distances than those in coal and sandstone rocks.

Microseismic data was used in previous Durham University research from 2012. This suggested a minimum vertical distance of 600m between the depth of fracking and aquifers used for drinking water, which now forms the basis of hydraulic fracturing regulation in the UK's Infrastructure Act 2015.

Professor Richard Davies, Newcastle University, who leads the ReFINE project, said: "We strongly recommend that for the time being, fracking is not carried out where faults are within 895m of the fracked borehole to avoid the risk of fracking causing earthquakes and that this guideline is adopted world-wide."

Credit: 
Durham University

Brain-gut communication in worms demonstrates how organs can work together to regulate lifespan

ANN ARBOR -- Our bodies are not just passively growing older.

Cells and tissues continuously use information from our environments--and from each other -- to actively coordinate the aging process. A new study from the University of Michigan Life Sciences Institute now reveals how some of that cross-talk between tissues occurs in a common model organism.

Recent research has shown that signaling between the intestine and brain can regulate a range of biological processes. So far, research has focused mainly on how signals from the gut can affect neurological functions, including some neurodegenerative diseases. Much less is known about how the brain communicates with the gut to affect certain biological processes, such as aging.

LSI faculty member Shawn Xu, who is also a professor of molecular and integrative physiology at the U-M Medical School, and his colleagues wanted to determine how brain-gut signals might affect aging in Caenorhabditis elegans, or roundworms. Because their nervous system is so well-mapped, these tiny worms offer clues about how neurons send and receive information in other organisms as well, including humans.

The researchers discovered that brain-gut communication leads to what Xu calls an "axis of aging," wherein the brain and intestines work together to regulate the worm's longevity. The findings are scheduled for publication Feb. 28 in the journal Genes & Development.

Using different environmental temperatures, which are known to affect roundworms' lifespan, the researchers investigated how neurons process information about external temperature and transmit that information to other parts of the body. They identified two different types of neurons -- one that senses warmth and the other coolness -- that act on the same protein in the intestine, telling it to either slow down or speed up the aging process.

When the cool-sensing neuron detects a drop in temperature, it sets off a chain of communication that ultimately releases serotonin into the worm's gut. This serotonin prompts a known age-regulating protein, DAF-16, to boost its activity and increase the worm's longevity.

The warmth-sensing neuron, in contrast, sends a compound similar to insulin to the intestine. There, it blocks the activity of that same DAF-16 protein, shortening the worm's lifespan.

Using these two paths, the brain is able to process cues from the external environment and then use that information to communicate with the intestine about aging. What's more, these signals can be broadcast from the intestine to other parts of the body, allowing the neurons to regulate body-wide aging.

And because many of the key players in these reactions are conserved in other species, Xu believes this research may have implications beyond roundworms.

"From our findings, it's clear that the brain and gut can work together to detect aging-related information and then disseminate that information to other parts of the body," Xu said. "We think it's likely that this sort of signaling axis can coordinate aging not only in C. elegans, but in many other organisms as well."

Credit: 
University of Michigan

How do teachers integrate STEM into K-12 classrooms?

image: Teachers who integrated engineering design projects, like this landslide model, observed higher engagement from their students.

Image: 
Sarah Bird/Michigan Tech

A team led by Michigan Technological University set out to find what makes STEM integration tick. Their research--published in the International Journal of STEM Education--followed several case studies to observe the impacts of low, medium and high degrees of integration within a classroom. They found that across the board the greatest challenge that teachers face is making explicit connections between STEM fields while balancing the need for context and student engagement.

Emily Dare, assistant professor of STEM education at Michigan Tech, is the lead author on the study. She says different teachers have different approaches to STEM integration.

"This alone is not terribly surprising as we know that teachers conceptualize integrated STEM in multiple ways," Dare says. "What is new about this current study is that this degree of integration may be related to a teacher's understanding in making explicit and meaningful connections between the disciplines, as opposed to assuming that students will make those connections on their own."

Dare and her co-authors, Joshua Ellis of Michigan Tech and Gillian Roehrig of the University of Minnesota, worked with nine middle school science teachers to assess STEM integration in their classrooms. The researchers relied on both reflective interviews with the teachers and classroom implementation data, such as the number of instructional days dedicated to two or more disciplines and the amount of time given to each discipline.

"The teachers who integrated more often in their class appeared to be more critical of their instruction," Dare says, "And after their first time implementing integrated STEM instruction, they were already considering ways in which to improve their practice."

She explains that this speaks volumes about teachers' motivation and dedication to incorporating these approaches in their classrooms: If they find the integrated approach valuable, they may be more willing to spend time helping students make those content connections.

STEM education calls for connecting science, technology, engineering, and math. Within that framework, three themes arose from the results of Dare and her collaborators' work that distinguished low, medium and high STEM integration.

First, the nature of integration varied; that is, the role teachers perceived they should play in making explicit or implicit connections. A more active role in making connections reflected higher integration, though not without its challenges. Previously, Dare led research helping to clarify what STEM education, and therefore integration, means in practical terms for teachers.

Second, classroom integration depended on whether a teacher chose to focus primarily on science or engineering. Dare and her team argue that science versus engineering is a false choice. Teachers with higher degrees of STEM integration wove in science concepts throughout engineering design projects, like connecting lessons on heat transfer and insulators to building solar ovens. Across the board, design-based projects tended to happen in the last few days of instruction. Teachers with lower degrees of integration tended to focus on the science first, then shift completely into engineering.

Third, student engagement played a role, and an important one. Students tended to be motivated for engineering design projects; teachers explained that the work provided context, making the concepts more real and understandable. The challenge is that teachers felt pressed to balance the hands-on work with conceptual and reflective activities. Plus, maintaining a contextual example over several weeks became difficult.

The paper authors point out that while the study subjects are middle school physical science teachers doing first-time STEM instruction, many of the identified themes are not content-specific. Because of that, the successes and challenges identified may shed light on general struggles that are common to educators who are integrating across STEM disciplines under new teaching standards.

Teachers' primary challenges focused on trying to keep the lessons real for students and struggling with better integrating math. Dare suggests this may be because science teachers are just that, not math teachers or engineers.

"For teacher educators," Dare says, "this means continuing to support teachers in their classrooms as they embark on testing out new strategies and curriculum units in their classrooms."

Credit: 
Michigan Technological University

Another clue for fast motion of the Hawaiian hotspot

image: The graph shows the dates of volcanoes of the three volcanic chains in the Pacific and their relative movement over time (left). The locations of the three volcanic chains are shown in the map (right). The stars mark the youngest end of each chain, or the volcanoes that are active today.

Image: 
Nature Communications, Kevin Konrad et al.

The island chain of Hawaii consists of several volcanoes, which are fed by a "hotspot". In geosciences a "hotspot" refers to a phenomenon of columnar shaped streams, which transport hot material from the deep mantle to the surface. Like a blow torch, the material burns through the Earth's crust and forms volcanoes. For a long time, it was assumed that these hotspots are stationary. If the tectonic plate moves across it, a chain of volcanoes evolves, with the youngest volcano at one end, the oldest at the other.

This concept was initially proposed for the Hawaiian Islands. They are the youngest end of the Hawaiian-Emperor chain that lies beneath the Northwest Pacific. But soon there was doubt over whether hotspots are truly stationary. The biggest contradiction was a striking bend of about 60 degrees in this volcanic chain, which originated 47 million years ago. "If you try to explain this bend with just a sudden change in the movement of the Pacific Plate, you would expect a significantly different direction of motion at that time relative to adjacent tectonic plates," says Bernhard Steinberger of the GFZ German Research Center for Geosciences. "But we have not found any evidence for that." Recent studies have suggested that apparently two processes were effective: On the one hand, the Pacific Plate has changed its direction of motion. On the other hand, the Hawaiian hotspot moved relatively quickly southward in the period from 60 to about 50 million years ago – and then stopped. If this hotspot motion is considered, only a smaller change of Pacific plate motions is needed to explain the volcano chain.

This hypothesis is now supported by work in which Steinberger is also involved. First author Kevin Konrad, Oregon State University in Corvallis, Oregon, and his team have evaluated new rock dating of volcanoes in the Rurutu volcanic chain, including, for example, the Tuvalu volcanic islands in the Western Pacific. Furthermore, they added similar data from the Hawaiian-Emperor chain and the Louisville chain in the Southern Pacific. Based on the geography and the age of volcanoes in these three chains, researchers can look into the geological past and see how the three hotspots have moved relative to each other over millions of years. The new data published in the journal Nature Communications shows that the relative motion of hotspots under the Rurutu and Louisville is small while the Hawaiian-Emperor hotspot displays strong motion between 60 and 48 million years ago relative to the other two hotspots. "This makes it very likely that mainly the Hawaii hotspot has moved," says Steinberger. According to his geodynamic modelling the Hawaiian hotspot moved at a rate of several tens of kilometers per million years. Paleomagnetic data support this interpretation, says Steinberger. "Our models for the motion of the Pacific Plate and the hotspots therein still have some inaccuracies. With more field data and information about the processes deep in the mantle, we hope to explain in more detail how the bend in the Hawaiian-Emperor chain has evolved."
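For scale, a rate of several tens of kilometres per million years corresponds to a few centimetres per year, of the same order as typical plate motions. The snippet below is just that unit conversion, using an illustrative value of 40 kilometres per million years.

```python
# Unit conversion: hotspot motion from km per million years to cm per year.
km_per_myr = 40                          # illustrative value within "several tens of km/Myr"
cm_per_year = km_per_myr * 1e5 / 1e6     # 1 km = 1e5 cm, 1 Myr = 1e6 yr
print(f"{km_per_myr} km/Myr = {cm_per_year:.0f} cm/yr")   # 4 cm/yr
```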

Credit: 
GFZ GeoForschungsZentrum Potsdam, Helmholtz Centre

Massive data analysis shows what drives the spread of flu in the US

image: Flu cases in the US tend to originate in the southeast and move north, away from the coasts.

Image: 
Andrey Rzhetsky, UChicago

Using several large datasets describing health care visits, geographic movements and demographics of more than 150 million people over nine years, researchers at the University of Chicago have created models that predict the spread of influenza throughout the United States each year.

They show that seasonal flu outbreaks originate in warm, humid areas of the south and southeastern U.S. and move northward, away from the coasts. The approach differs from traditional flu tracking models that rely on transmission rates, or the expected number of people who can get sick and pass the virus along to others. Instead, the new models use several other factors that influence those transmission rates. The study was published February 27 in the journal eLife.

"It's a very high-resolution picture, perhaps even higher than what the Centers for Disease Control and Prevention can see, because it incorporates so many data sources," said Andrey Rzhetsky, PhD, the study's senior author and the Edna K. Papazian Professor of Medicine and Human Genetics at UChicago.

Rzhetsky and Ishanu Chattopadhyay, PhD, assistant professor of medicine at UChicago and the study's lead author, began with health care records from Truven MarketScan, a database of de-identified patient data from more than 40 million families in the United States. They analyzed nine flu seasons, from 2003 to 2011, flagging insurance claims for treatment for flu-like symptoms. This data shows when and where each flu outbreak begins and generates "streams" to track its spread from county to county. The source counties tended to be on the coasts near the Gulf of Mexico or the Atlantic Ocean.

The researchers also analyzed 1.7 billion geo-located messages from Twitter over a three-and-a-half-year period to capture people's week-to-week travel patterns between counties. For example, if someone routinely tweets from home, then tweets from work or while visiting family in the next county, this would establish a pattern of movement between the two counties.
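To illustrate the kind of movement signal that geotagged posts can provide, here is a minimal sketch that builds county-to-county movement counts from a few toy records. The data layout and the pandas-based approach are assumptions made for illustration, not the study's actual pipeline.

```python
# Illustrative county-to-county mobility counts from geotagged posts (toy data).
import pandas as pd

# One row per geotagged post: who posted, in which week, from which county.
posts = pd.DataFrame({
    "user":   ["a", "a", "a", "b", "b"],
    "week":   [1,   1,   2,   1,   2],
    "county": ["Cook", "DuPage", "Cook", "Fulton", "DeKalb"],
})

# Take each user's most frequent county as their "home" county.
home = (posts.groupby("user")["county"]
             .agg(lambda c: c.mode()[0])
             .rename("home")
             .reset_index())

# Posts made outside the home county contribute a movement "edge".
edges = posts.merge(home, on="user")
edges = edges[edges["county"] != edges["home"]]

# County-to-county movement counts, usable as a mobility term in a spread model.
mobility = edges.groupby(["home", "county"]).size().rename("trips").reset_index()
print(mobility)
```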

The analysis also incorporated data on "social connectivity," which included estimates of how often people visit close friends and neighbors, air travel, weather, vaccination rates and changes in the flu virus itself.

The team combined all of these data points to draw a picture of what factors drive the northward spread of the flu each year. In the paper, they liken the typical outbreak to a forest fire. To spread, a fire needs flammable, dry tinder, an initiating spark and wind to hasten its movement. In the southern U.S., people have a high degree of social connectivity. The number of close friends, friends who are also neighbors, and communities of people who all know each other is much higher than in the country at large, meaning people there have lots of opportunities to spread the flu.

This high social connectivity is the flammable material. The spark is the warm, humid weather of the southern coast, and the wind is the collective movement of all these people, over short distances by land, as they drive from county to county.

The researchers were able to use these models to recreate three years of historical flu data fairly accurately. Rzhetsky said that as the first reports of the flu begin to come in each fall, these tools could be used to help public health officials focus prevention efforts.

"For example, if flu-like symptoms are being reported in one county, you could tell people in neighboring counties to stay away from crowds, or you could focus vaccination efforts in certain places in advance," he said. "It could be used essentially as a weather forecast for the flu."

Credit: 
University of Chicago Medical Center