Tech

Weight loss surgery linked to an increased risk of inflammatory bowel disease

A new Alimentary Pharmacology & Therapeutics analysis has found a link between the development of inflammatory bowel disease (IBD) and a past history of weight loss surgery.

For the analysis, investigators first conducted a multi-institutional case series of patients with a history of IBD and weight loss surgery. A total of 15 cases of IBD with a prior history of bariatric surgery were identified and reviewed. They next conducted a matched case-control study using a medical and pharmacy claims database covering 2008 to 2012. A total of 8,980 cases and 43,059 controls were included in the database analysis.

A past history of weight loss surgery was associated with a nearly twofold increase in the likelihood of developing IBD. The mechanism by which weight loss surgery may increase the risk of IBD is unclear. Alterations in gut microbes following the surgery may play a role. In addition, weight loss surgery patients have elevated rates of vitamin D and bile salt deficiencies.
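For readers unfamiliar with case-control arithmetic, the sketch below shows how such an odds ratio is computed; the exposure counts are hypothetical, chosen only to illustrate a roughly twofold association, and are not the study's data.

```python
# Minimal sketch of a case-control odds ratio. The exposure counts
# below are hypothetical, not taken from the study.

def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds ratio = (a/c) / (b/d) for a standard 2x2 exposure table."""
    return (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)

# Hypothetical: 60 of 8,980 IBD cases vs. 150 of 43,059 controls
# had prior bariatric surgery.
a, c = 60, 8980 - 60        # cases: exposed, unexposed
b, d = 150, 43059 - 150     # controls: exposed, unexposed
print(f"odds ratio ≈ {odds_ratio(a, c, b, d):.2f}")  # ≈ 1.92, i.e. roughly twofold
```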

Prospective studies are needed to confirm the association found in this analysis and delineate if certain types of weight loss surgeries have differential effects on risk of IBD. "While we do not think our findings should at all discourage or take away from the health benefits of bariatric surgery, since the absolute risk of developing IBD following bariatric surgery remains extremely small, we think this association highlights potential disease mechanisms and the need to carefully evaluate new gastrointestinal symptoms in patients with prior weight loss surgery," said senior author Dr. Jean-Frederic Colombel, of the Icahn School of Medicine at Mount Sinai, in New York. "Of note, another recent study from the Mayo Clinic had similar findings calling for the need for further prospective studies on this topic."

Credit: 
Wiley

Study reveals how the brain tracks objects in motion

CAMBRIDGE, MA -- Catching a bouncing ball or hitting a ball with a racket requires estimating when the ball will arrive. Neuroscientists have long thought that the brain does this by calculating the speed of the moving object. However, a new study from MIT shows that the brain's approach is more complex.

The new findings suggest that in addition to tracking speed, the brain incorporates information about the rhythmic patterns of an object's movement: for example, how long it takes a ball to complete one bounce. In their new study, the researchers found that people make much more accurate estimates when they have access to information about both the speed of a moving object and the timing of its rhythmic patterns.

"People get really good at this when they have both types of information available," says Mehrdad Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences and a member of MIT's McGovern Institute for Brain Research. "It's like having input from multiple senses. The statistical knowledge that we have about the world we're interacting with is richer when we use multiple senses."

Jazayeri is the senior author of the study, which appears in the Proceedings of the National Academy of Sciences the week of March 5. The paper's lead author is MIT graduate student Chia-Jung Chang.

Objects in motion

Much of the information we process about objects moving around us comes from visual tracking of the objects. Our brains can use information about an object's speed and the distance it has to cover to calculate when it will reach a certain point. Jazayeri, who studies how the brain keeps time, was intrigued by the fact that much of the movement we see also has a rhythmic element, such as the bouncing of a ball.

"It occurred to us to ask, how can it be that the brain doesn't use this information? It would seem very strange if all this richness of additional temporal structure is not part of the way we evaluate where things are around us and how things are going to happen," Jazayeri says.

There are many other sensory processing tasks for which the brain uses multiple sources of input. For example, to interpret language, we use both the sound we hear and the movement of the speaker's lips, if we can see them. When we touch an object, we estimate its size based on both what we see and what we feel with our fingers.

In the case of perceiving object motion, teasing out the role of rhythmic timing, as opposed to speed, can be difficult. "I can ask someone to do a task, but then how do I know if they're using speed or they're using time, if both of them are always available?" Jazayeri says.

To overcome that, the researchers devised a task in which they could control how much timing information was available. They measured performance in human volunteers as they performed the task.

During the task, the study participants watched a ball as it moved in a straight line. After traveling some distance, the ball went behind an obstacle, so the participants could no longer see it. They were asked to press a button at the time when they expected the ball to reappear.

Performance varied greatly depending on how much of the ball's path was visible before it went behind the obstacle. If the participants saw the ball travel a very short distance before disappearing, they did not do well. As the distance before disappearance became longer, they were better able to calculate the ball's speed, so their performance improved but eventually plateaued.

After that plateau, there was a significant jump in performance when the distance before disappearance grew until it was exactly the same as the width of the obstacle. In that case, when the path seen before disappearance was equal to the path the ball traveled behind the obstacle, the participants improved dramatically, because they knew that the time spent behind the obstacle would be the same as the time it took to reach the obstacle.

When the distance traveled to reach the obstacle became longer than the width of the obstacle, performance dropped again.

"It's so important to have this extra information available, and when we have it, we use it," Jazayeri says. "Temporal structure is so important that when you lose it, even at the expense of getting better visual information, people's performance gets worse."

Integrating information

The researchers also tested several computer models of how the brain performs this task, and found that the only model that could accurately replicate their experimental results was one in which the brain measures speed and timing in two different areas and then combines them.

Previous studies suggest that the brain performs timing estimates in premotor areas of the cortex, which play a role in planning movement; speed, which usually requires visual input, is calculated in the visual cortex. These inputs are likely combined in parts of the brain responsible for spatial attention and tracking objects in space, functions that reside in the parietal cortex, Jazayeri says.
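A standard way to formalize this kind of combination (our illustration of the general principle; the paper's model may differ in detail) is reliability-weighted averaging, in which each estimate is weighted by the inverse of its variance:

```python
# Sketch of reliability-weighted (inverse-variance) cue combination,
# a standard model for integrating two noisy estimates. This illustrates
# the general principle, not the paper's exact model.

def combine(estimate_speed, var_speed, estimate_timing, var_timing):
    """Weight each time-to-reappearance estimate by its reliability (1/variance)."""
    w_speed = 1.0 / var_speed
    w_timing = 1.0 / var_timing
    combined = (w_speed * estimate_speed + w_timing * estimate_timing) / (w_speed + w_timing)
    combined_var = 1.0 / (w_speed + w_timing)  # always <= min(var_speed, var_timing)
    return combined, combined_var

# Hypothetical numbers: speed-based estimate 0.9 s (variance 0.04),
# rhythm-based estimate 1.1 s (variance 0.01).
est, var = combine(0.9, 0.04, 1.1, 0.01)
print(f"combined estimate ≈ {est:.2f} s, variance {var:.3f}")  # ≈ 1.06 s, 0.008
```

The combined variance is always smaller than either input's variance, which matches the behavioral result: performance is best when both speed and timing information are available.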

In future studies, Jazayeri hopes to measure brain activity in animals trained to perform the same task that human subjects did in this study. This could shed further light on where this processing takes place and could also reveal what happens in the brain when it makes incorrect estimates.

Credit: 
Massachusetts Institute of Technology

Researchers reshape the energy landscape of phonons in nanocrystals

Phonons, which are packets of vibrational waves that propagate in solids, play a key role in condensed matter and are involved in various physical properties of materials. In nanotechnology, for example, they affect light emission and charge transport in nanodevices. As the main source of energy dissipation in solid-state systems, phonons are the ultimate bottleneck that limits the operation of functional nanomaterials.

In an article recently published in Nature Communications, an INRS team of researchers led by Professor Luca Razzari and their European collaborators show that it is possible to modify the phonon response of a nanomaterial by exploiting the zero-point energy (i.e., the lowest possible - "vacuum" - energy in a quantum system) of a terahertz nano-cavity. The researchers were able to reshape the nanomaterial's phonon response by generating new light-matter hybrid states. They did this by inserting some tens of semiconducting (specifically, cadmium sulfide) nanocrystals inside plasmonic nanocavities designed to resonate at terahertz frequencies, i.e., at the frequencies of the nanocrystals' phonon modes.
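For context, the textbook signature of such hybrid light-matter states in the strong-coupling regime (a general result of cavity quantum electrodynamics, not a formula quoted from the paper) is that the bare phonon resonance splits into two polariton branches whose separation grows with the square root of the number of coupled oscillators:

\[
\omega_{\pm} = \omega_{0} \pm g\sqrt{N}
\]

Here \(\omega_{0}\) is the bare phonon frequency, \(g\) the coupling of a single nanocrystal phonon mode to the cavity's vacuum field, and \(N\) the number of nanocrystals; the collective \(\sqrt{N}\) enhancement is why a few tens of nanocrystals can suffice.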

"We have thus provided clear evidence of the creation of a new hybrid nanosystem with phonon properties that no longer belong to the original nanomaterial," the authors said.

This discovery holds promise for applications in nanophotonics and nanoelectronics, opening up new possibilities for engineering the optical phonon response of functional nanomaterials. It also offers an innovative platform for the realization of a new generation of quantum transducers and terahertz light sources.

Credit: 
Institut national de la recherche scientifique - INRS

UToledo engineer creates solution to cheaper, longer lasting battery packs

image: Dr. Ngalula Mubenga, assistant professor of electrical engineering technology at The University of Toledo, holding a battery cell next to the bilevel equalizer.

Image: 
Dan Miller, The University of Toledo

An electrical engineer at The University of Toledo, who nearly died as a girl in Africa because of a hospital's lack of power, has developed a new energy storage solution to make battery packs in electric vehicles, satellites, planes and grid stations last longer and cost less.

The new technology, called a bilevel equalizer, is the first hybrid that combines the high performance of an active equalizer with the low cost of a passive equalizer.

"It's a game changer because we solved the weak cell issue in lithium ion battery storage for packs with hundreds of cells," said Dr. Ngalula Mubenga, assistant professor of electrical engineering technology at UT. "Whenever we are talking about batteries, we are talking about cells connected in a series. Over time, the battery is not balanced and limited by the weakest cell in the battery."

Before the bilevel equalizer, battery makers and automotive manufacturers balanced the cell voltages in a large battery pack using either a passive circuit, which loses more energy, or an active circuit, which is 10 times more expensive.

"In spite of their significant losses, passive equalizers are used in most applications because they are relatively simple and low cost," Mubenga said.

In Mubenga's new technology, the cells are grouped into sections. Each cell within the section is balanced by a passive equalizer, while the entire section is balanced by an active equalizer.

"If there are 120 cells in a battery, divide the cells into 10 groups of 12," Mubenga said. "Then you only need nine active equalizer units and 120 passive equalizer units using the bilevel equalizer. With current active equalizers, manufacturers would have to use 120 active equalizers. For manufacturers that can't afford to use only active equalizers, the bilevel equalizer is the solution to the problem."

Experiments have shown that the bilevel equalizer increases the discharge capacity of lithium ion batteries by about 30 percent, and the pack lasts longer because the cells are balanced.

"Instead of an electric vehicle's battery lasting only four years, it would last much longer," Mubenga said.

Mubenga worked on the project with Dr. Tom Stuart, professor emeritus in the UT Department of Electrical Engineering and Computer Science, who had the idea for the bilevel equalizer.

Their team is licensing the hybrid equalizer and retrofit kit to manufacturers. The research was recently published in Batteries, an international journal. Project funding was provided by the Ohio I-Corps program and Ohio Third Frontier program.

Mubenga plans to present their new, patented technology at 2 p.m. Wednesday, March 7 at the Advanced Design and Manufacturing Expo at the Huntington Convention Center of Cleveland in a session titled "Lowering the Cost of Energy Storage for E/HV and Grid Applications Using a Bilevel Equalizer for Large Li-Ion Batteries."

Mubenga understands the life-changing power of electricity. When she was 17 years old in her native country of the Democratic Republic of Congo in Africa, she waited three days for surgery after her appendix burst because there was no power at the hospital.

"I was living in a small town called Kikwit, far away from the big and beautiful capital city of Kinshasa," Mubenga said. "I was very sick, doctors needed to do surgery, but they couldn't find any gas to turn on the power generator. For three days, my life depended on electricity. I was praying. I could not eat. And decided if I made it alive, I would work to find a solution so people wouldn't die because of lack of electricity."

The hospital found fuel to power the generator, doctors did the surgery and Mubenga survived.

She started studying renewable energy at the UT College of Engineering in 2000 and earned a bachelor's degree, master's degree and PhD in electrical engineering. After earning her professional engineer license in Ohio, she went on to found her company called the SMIN Power Group, which develops and installs solar power systems in communities throughout the Democratic Republic of Congo.

"My passion is deep," Mubenga said. "In places like that small town of Kikwit, if you have solar power, you can have electricity and save lives."

Another factor fueling Mubenga's research motivation is a connection between her native country and lithium ion batteries.

"Most of the minerals for today's electronics are mined in the Democratic Republic of the Congo," Mubenga said. "The Democratic Republic of the Congo is a leading producer of cobalt, copper, gold, diamond, tantalum and tin in the world. Indeed, the Democratic Republic of the Congo contains about 50 percent of the world's reserve of cobalt, a mineral used to make lithium ion batteries."

Credit: 
University of Toledo

No fish story! Research finds marine reserves sustain broader fishing efforts

image: This lined bristletooth fish is swimming in a marine protected area in the Philippines, where new Florida Tech research has found fish grow healthier than in areas where fishing is allowed.

Image: 
Florida Institute of Technology

MELBOURNE, FLA. -- New research from Florida Institute of Technology finds that fish born in marine reserves where fishing is prohibited grow to be larger, healthier and more successful at reproduction.

The findings, recently published in the online journal PLOS ONE, highlight the positive impact of a tool of fisheries management that has long been a source of frustration and concern for fishermen, who believe such restrictions impinge on their livelihood.

In their examination of marine reserves, also known as marine protected areas or MPAs, around coral reefs in the Philippines, Robert Fidler, a Fulbright scholar who recently received his Ph.D. from Florida Institute of Technology, and his major professor, Fulbright faculty scholar Ralph Turingan, found evidence that MPAs in fact helped to produce and maintain the more desirable large-bodied and older fish within populations that have been fished by local fishermen for centuries.

"The first reaction to marine reserves by local users is traditionally, 'You close all of these fishing areas and we can't fish anymore in there,'" Turingan said. "That is the wrong way to think. These MPAs are actually important is sustaining fishing activities."

And, Turingan added, "Our evidence shows this is a long-term thing."

Fidler, Turingan and their collaborators found that key life-history traits in three coral-reef fishes - maximum length, growth rate, and body size and age at sexual maturity - are significantly improved in the brown surgeonfish, lined bristletooth and manybar goatfish living within MPAs compared with the same species outside of MPAs.

Because fishing removes the largest fish, heavy fishing on coral reefs drives the fish to mature at younger ages and smaller sizes.

Small fish have fewer, smaller eggs that, if they survive to hatching, produce weaker fish. By contrast, larger fish produce higher quality eggs, and more of them, which in turn produce healthier fish that grow larger and reproduce more, and the cycle continues.

The researchers found that the more robust fish naturally migrate from the MPAs to the fished areas, where they can be harvested by fishermen.

"It's like raising them in aquaculture and putting them back, but this is more natural way of replenishing depleted stocks," Turingan said.

Though their findings were based on studies of coral reef fish in the Philippines, Fidler and Turingan predicted the results could be replicated wherever marine reserves are established on coral reefs.

Credit: 
Florida Institute of Technology

Without 46-million-year-old bacteria, turtle ants would need more bite and less armor

image: A cephalotes ant tending a membracid on a plant.

Image: 
Photo by Jon Sanders.

You've probably heard about poop pills, the latest way for humans to get benevolent bacteria into their guts. But it seems that a group of ants may have been the original poop pill pioneers -- 46 million years ago.

A new collaborative study, published in Nature Communications, determined that turtle ants (Cephalotes) are able to supplement their low-nitrogen diets by passing helpful bacteria from older ants to younger ones through anal secretions. Once this is done, the now-internalized microbes (tiny bacteria) naturally produce the nitrogen necessary for turtle ants to survive.

"Turtle ants eat a lot of food that is hard to digest and contains few essential nutrients in accessible form," said Jacob Russell, PhD, an associate professor in Drexel University's College of Arts and Sciences and the paper's senior author. "The fact that they can subsist on such diets and have moved away from aggressively competing for more optimal food resources with other ants is almost certainly a function of their investment in symbioses with gut bacteria."

Carried out by researchers at Drexel, the University of California San Diego, the University of Pennsylvania, Harvard University, The Rockefeller University, Calvin College, and the Field Museum of Natural History, this multi-institution, international study was spearheaded by Yi Hu, PhD, while finishing a postdoc at Drexel, and Jon Sanders, PhD, a postdoc at UC San Diego.

The study was inspired by work Russell did with Corrie Moreau, PhD, and Naomi Pierce, PhD, in Pierce's lab more than a decade ago, when they discovered that many ants with low-quality diets harbored specialized bacterial symbionts - likely to supplement their diets.

It turned out that turtle ants were a great example of this. While many ants attack other animals for their food or scavenge the carcasses of dead animals, turtle ants rely on foraging for nectar, pollen, fungi and other resources from plant canopies. They also consume urine from mammals and bird feces, which do contain lots of nitrogen -- but in forms inaccessible to animals without the aid of microbes.

To test whether the gut bacteria significantly contributed to the ants' nutrient intake, the researchers kept some of the turtle ants in a lab, put them on a diet of urea (the main waste in urine), and gave them antibiotics, which killed their gut bacteria. Without those bacteria, the ants were no longer able to extract the nitrogen they normally obtained from a diet strictly made up of urea.

Finding that turtle ants keep nitrogen-producing bacteria in their guts shows how they can survive so well while eating foods that so few other animals seem to want.

With a seemingly reduced use for offensive capabilities, in conjunction with their shifts to these lower quality diets, turtle ants have lost many traits that other ants utilize to compete for or attack their food.

"These ants have evolved reduced mandibles -- jaws -- and lost the ability to sting," said Russell. "As a result, they are not very good at preying on living invertebrate animals or scavenging dead ones. This also means they have lost features that are integral to competing with other ant species."

In turn, the ants evolved more passive defenses, like thick armor and "a specialized caste of adults that use their heads to plug the entrances of their hollow tree branch nests," Russell explained.

What's interesting is that the thick defensive armor these ants developed requires a good deal of nitrogen, which again points back to the importance of turtle ants' symbiotic relationship with their gut bacteria.

"That armor may be possible due to the large contributions gut microbes make to their nitrogen budgets," Russell said.

Since the microbes are so important to their lives, it would seem that turtle ants have also evolved a way to protect them.

"These ants develop a fine-mesh filter near the start of their digestive tract, which may insulate their downstream gut microbes from foreign invaders. This has likely helped to reinforce the integrity of these ancient bacterial communities," Russell said.

Direct information on the functions of ant-associated bacteria has been relatively limited, with leaf-cutter and carpenter ants making up the majority of this knowledge. So knowing that turtle ants benefit to such a degree from their bacteria -- especially that microbially provisioned nitrogen may be essential to their survival -- is significant.

"This work illustrates that members of complex communities can evolve together, laying the groundwork for future research on how these organisms evolve in response to reliable partnerships," Russell said.

Mammals, like us, have a complex set of bacteria in our guts that may have also evolved with hosts for millions of years -- albeit in a much less specific fashion. Knowing now about the turtle ants and their symbiotic bacteria raises further questions about how we developed, ourselves. At the same time, it may also provide answers.

"The turtle ant system -- which is relatively simple -- may prove useful in helping us to model questions about our own partnerships with microbes and how important they are for human health," Russell concluded.

Credit: 
Drexel University

Towards an unconscious neural reinforcement intervention for common fears

image: In a collaboration between researchers based at the Advanced Telecommunications Research Institute International (ATR), Japan, and the University of California, Los Angeles (UCLA), scientists have moved one major step towards the development of a novel form of brain-based treatment for phobia that may soon be applicable to patients.

Image: 
(C)ATR, UCLA

In a new study published today in the Proceedings of the National Academy of Sciences, an international team of scientists reported that they can now make people less afraid of everyday objects of phobia such as snakes and spiders, by directly manipulating the brain activity in human participants. Importantly, this new procedure is entirely free from the unpleasant feelings one typically needs to go through in traditional psychotherapeutic treatments, where patients may be required to encounter the very feared objects repeatedly, or to relive past experiences that are frightening or aversive. Instead, in this new intervention procedure, participants are only required to play a mental exercise game to earn money, without having to think consciously about the feared objects at all.

The study is based on recent experiments conducted at the Advanced Telecommunications Research Institute International, Japan, which had already demonstrated the effectiveness of the rationale applied here. By using cutting-edge methods borrowed from artificial intelligence, similar to the computer algorithms used to recognize faces in images, the team was able to read out unconscious, spontaneous occurrences of mental images in the brain. That is, the researchers can tell if a participant's brain is 'unconsciously' thinking of a snake (which happens every now and then without our awareness), based on images acquired using conventional fMRI (functional magnetic resonance imaging, a measurement available in many hospitals). By giving the participant a small amount of monetary reward whenever this happens, the snake is associated with a positive feeling and thereby eventually becomes less frightening and unpleasant.

"We knew it could work in principle. The challenge was to figure out how to read out the snake-related thoughts from the brain images in the clinic, with actual patients rather than normal participants in the laboratory," says lead author Dr. Vincent Taschereau-Dumouchel, who is a clinical psychologist by training. "The big difference is, in normal participants we can show them many images of snakes, and let the computer algorithm learn what is the relevant pattern of brain activity from a large amount of data. But if we are to apply this procedure to patients, who are uncomfortable with seeing snakes in the first place, this becomes a problem."

The team devised an innovative solution to the problem, by inferring the patterns of brain activity from other participants.

"We can think of it this way: Let's say you are afraid of snakes. To decode the patterns of your brain activity, you do not necessarily have to see snakes. I, as a surrogate of yours, can see snakes for you, as I'm not afraid of them. From there, we could computationally infer what should be your brain signature for snakes, based on mine, with an ingenious method devised by the Haxby lab at Dartmouth, called hyperalignment," says last author Professor Hakwan Lau who is based at UCLA as well as the University of Hong Kong.

Although different individuals' brain activity patterns have different spatial organizations, the hyperalignment method can correct for this discrepancy. Importantly, the team realized that a patient could have not just one but as many as dozens of surrogate participants to help. They have shown that with a large amount of data from many surrogates - all collected without having the potential patient see any of the feared images - they can crack the problem with reliable results.
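A minimal sketch of the idea, assuming the standard orthogonal-Procrustes formulation of functional alignment (the published hyperalignment method involves additional steps and many subjects), looks like this:

```python
# Minimal sketch of the idea behind hyperalignment: learn an orthogonal
# mapping between two subjects' voxel spaces from responses to shared
# (non-feared) stimuli, then transfer a decoder across subjects.
# This illustrates the principle only, not the actual published pipeline.

import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(0)

n_stimuli, n_voxels = 200, 50
surrogate = rng.standard_normal((n_stimuli, n_voxels))          # surrogate's responses
true_map = np.linalg.qr(rng.standard_normal((n_voxels, n_voxels)))[0]
patient = surrogate @ true_map + 0.1 * rng.standard_normal((n_stimuli, n_voxels))

# Solve for the rotation R minimizing ||patient @ R - surrogate||_F.
R, _ = orthogonal_procrustes(patient, surrogate)

# A pattern decoder trained in the surrogate's voxel space can now be
# applied to the patient's data after mapping it through R.
aligned = patient @ R
print(np.corrcoef(aligned.ravel(), surrogate.ravel())[0, 1])    # ~0.99 alignment
```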

"Not only did we replicate our previous findings, that after the intervention participants showed reduced physiological and brain responses to the feared images, the effects were somewhat more robust still. This is true even though we are now dealing with the added complexity of everyday images relevant to actual phobia," says Mitsuo Kawato, the corresponding author from the team based at ATR, Japan.

These researchers feel that now they are in a position to take this new method to the next level - to test it in actual phobic patients. They already have a grant proposal pending support from the National Institute of Mental Health (1R61MH113772-01A1); during the later stages this would involve double-blinded clinical trials. If successful, they are hoping that this can inspire a whole variety of new treatments, not just for phobia but also for several related psychiatric illnesses, including post-traumatic stress disorders and depression.

Credit: 
ATR Brain Information Communication Research Laboratory Group

Glaciers in Mongolia's Gobi Desert actually shrank during the last ice age

image: The Gobi-Altai mountain range in western Mongolia is in a very dry region but ice can accumulate on mountaintops, such as Sutai Mountain, the tallest peak in the range. In the picture, friends of Jigjidsurengiin Batbaatar descend this mountain after helping to install a weather station.

Image: 
Jigjidsurengiin Batbaatar/University of Washington

The simple story says that during the last ice age, temperatures were colder and ice sheets expanded around the planet. That may hold true for most of Europe and North America, but new research from the University of Washington tells a different story in the high-altitude, desert climates of Mongolia.

The recent paper in Quaternary Science Reviews is the first to date ancient glaciers in the high mountains of Mongolia's Gobi Desert. It compares them with glacial records from nearby mountains to reveal how glaciers behave in extreme climates.

On some of the Gobi mountain ranges included in the study, glaciers started growing thousands of years after the last ice age ended. In contrast, in slightly wetter parts of Mongolia the largest glaciers did date from the ice age but reached their maximum lengths tens of thousands of years earlier in the glacial period rather than at its culmination, around 20,000 years ago, when glaciers around most of the planet peaked.

Both trends differ from the typical chronology of glacier growth during an ice age.

"In some of the Gobi mountains, the largest glaciers didn't happen during the last ice age," said first author Jigjidsurengiin Batbaatar, a UW doctoral student in Earth and Space Sciences. "Some of these glaciers were starving for precipitation then. Our measurements show that they actually shrank as cold, dry conditions of the ice age became more intense. Then they grew when the warming climate of the Holocene brought more moist air, feeding the glaciers with more snow."

Batbaatar and co-author Alan Gillespie, a UW research professor emeritus in Earth and Space Sciences, collected samples from moraines, which are long ridges of rocky debris dropped at a glacier's edge. They used a dating technique, perfected over the past 20 years, that measures elemental changes produced in rock by cosmic-ray bombardment once the glacier's retreat leaves the surface exposed.
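The principle of the dating method can be sketched with a simple accumulation equation: a cosmogenic nuclide such as beryllium-10 builds up in exposed rock at a roughly known production rate, so its measured concentration can be inverted for an exposure age. The numbers below are illustrative round values, not the study's measurements.

```python
# Sketch of cosmogenic-nuclide exposure dating: a nuclide such as Be-10
# accumulates in quartz once a rock surface is exposed, so its
# concentration dates the moraine. All numbers here are illustrative.

import math

P = 5.0           # production rate, atoms per gram of quartz per year (illustrative)
LAMBDA = 4.99e-7  # Be-10 decay constant, 1/years (half-life ~1.39 Myr)

def exposure_age(concentration):
    """Invert N(t) = (P / lambda) * (1 - exp(-lambda * t)) for t."""
    return -math.log(1.0 - concentration * LAMBDA / P) / LAMBDA

# A moraine boulder with ~99,000 atoms/g would have been exposed for roughly
# 20,000 years; the Gobi moraines yielded much younger ages than this
# ice-age benchmark.
print(f"{exposure_age(99_000):,.0f} years")
```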

"We were expecting to find rocks exposed for 20,000 years, the date of the peak of the last ice age, but these moraines were much younger. That means that these glaciers were smaller when the climate was the coldest," Batbaatar said. "The results were so surprising that we went back to double check."

The study was possible both because of advances in the cosmic-ray dating method, and political changes that allow more access to Central Asia.

"After the fall of the Soviet Union, Russia opened up, China opened up, and Mongolia opened up to Western researchers with these novel dating techniques. And we see a very different pattern of glacial advances compared to North America and Europe," Batbaatar said.

The data collected in 2007 and 2010 confirm a theoretical study by Summer Rupper, a former UW doctoral student now at the University of Utah, and UW faculty member Gerard Roe. That study predicted that in very cold and dry environments, where rain and snow are scarce, temperature would not always be the main factor driving a glacier's growth.

"Because the melting is so dominant a process, and the melting is mostly controlled by temperature, people think of glaciers as thermometers. But we all know that precipitation plays a role," Batbaatar said.

The new study confirms that so-called "starving glaciers" in dry, high-altitude environments are indeed controlled by precipitation. They grow so slowly that they seldom reach the lower altitudes where melting is possible. Instead, they shrink when sunlight hits the surface and transforms ice into water vapor, a process called sublimation. These glaciers are thus less sensitive to temperature shifts, but very sensitive to precipitation amounts.
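A toy mass-balance budget makes the contrast concrete; this is a schematic illustration of the idea, with made-up numbers, not the Rupper-Roe model itself:

```python
# Toy annual mass balance for a glacier surface (schematic, with
# made-up numbers): balance = snowfall - melt - sublimation.

def annual_balance(precip_mm, melt_mm, sublimation_mm):
    """All terms in mm of water equivalent per year."""
    return precip_mm - melt_mm - sublimation_mm

# Alpine-style glacier: melt dominates, so temperature (via melt) controls growth.
print(annual_balance(precip_mm=2000, melt_mm=1800, sublimation_mm=100))  # +100

# "Starving" Gobi-style glacier: too cold and high to melt; sublimation removes
# a roughly fixed amount each year, so the scarce precipitation decides
# whether the glacier grows or shrinks.
print(annual_balance(precip_mm=50, melt_mm=0, sublimation_mm=100))       # -50
print(annual_balance(precip_mm=300, melt_mm=0, sublimation_mm=100))      # +200
```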

"Generally, people have assumed from well-documented North American and European records that the largest glaciers should have come in the peak of the last ice age," Batbaatar said. "But in Mongolia, our results show that this was not the case. Glacier behavior there was different from the better-studied areas of the Alps or the Sierra Nevada in the U.S. Even within Mongolia we observe very different behavior from range to range."

The conditions at the Gobi-Altai mountain range are extreme, with precipitation at the five research sites Batbaatar established there ranging from roughly 50 to 300 millimeters (2 inches to 1 foot) per year. Nearby mountains in Mongolia with more precipitation have more typically behaving glaciers. But other extreme climates, for example the driest parts of Tibet or the Andes, can produce glaciers with similar paradoxical trends.

"Even in this current warming climate, some mountains are so high that the temperatures are still below freezing, and the warming ocean may provide more precipitation to drive some of the glaciers to advance," Batbaatar said.

He is now working to interpret more measurements collected from a wider geographic area in Central Asia.

"Batbaatar has shown that glaciers growing in cold, arid, desert mountains may be out of sync with those in wetter, warmer environments such as the Alps," Gillespie said. "His findings move us toward a more complete understanding of how glaciers advance and retreat in response to climatic fluctuations."

Credit: 
University of Washington

Cuba's sonic attack may have been accidental - and due to spying

A series of "sonic attacks" that sickened U.S. and Canadian government workers in Cuba last year could have been the side effect of attempts to "eavesdrop" using high frequencies, according to University of Michigan and Chinese engineering researchers who reverse-engineered the attacks in a lab. Beginning in December 2016, at least two dozen U.S. government workers stationed in Havana reported unexplained symptoms.

Study discovers southern African wildfires create climate cooling

image: This is a view from the window of a data collection flight conducted by NASA's P-3 Orion research aircraft over the Atlantic Ocean during August 2017. A layer of smoke is visible over patchy clouds.

Image: 
NASA/Kirk Knobelspriesse

University of Wyoming researchers led a study that discovered that biomass smoke originating from southern Africa and drifting over the southeast Atlantic Ocean significantly enhances the brightness of low-level clouds there -- creating a reflective process that actually helps cool the Earth and counteract the greenhouse effect.

"If you change the particles, you are changing the composition of the cloud," says Xiaohong Liu, a UW professor in the Department of Atmospheric Science and the Wyoming Excellence Chair in Climate Science. "For our study, we found the smoke comes down and can mix within the clouds. The changed clouds are more reflective of sunlight. Brighter clouds counteract the greenhouse effect. It creates cooling."

Liu is the corresponding author of a paper, titled "Biomass Smoke from Southern Africa Can Significantly Enhance the Brightness of Stratocumulus over the Southeastern Atlantic Ocean," that was published March 5 (today) in the Proceedings of the National Academy of Sciences (PNAS). The journal is one of the world's most prestigious multidisciplinary scientific serials, with coverage spanning the biological, physical and social sciences.

Zheng Lu, a UW research associate in Liu's research group, was the paper's lead author. The two used the National Center for Atmospheric Research (NCAR)-Wyoming Supercomputing Center in Cheyenne to conduct high-resolution computational modeling of the smoke and its effects on the clouds.

Other contributors to the paper were from the University of Maryland-Baltimore County (UMBC); the University of Science and Technology of China; the NASA Goddard Space Flight Center; and the University of Michigan.

For years, scientists have understood that smoke, overall, diminishes the clouds' cooling effect by absorbing light that the clouds beneath the aerosols would otherwise reflect. This new study does not dispute that phenomenon. However, the new study found that smoke and cloud layers are closer to each other than previously thought, and that a more dominant, competing process is at work: tiny aerosol particles from the smoke mix into the clouds and serve as nuclei for the formation of cloud droplets, making the clouds more reflective and thus amplifying their cooling effect.

"The purpose of this paper is to look at these competing processes. Which one is more important?" asks Zhibo Zhang, a co-author of the paper and an associate professor in the Department of Physics at UMBC.

Running the advanced computer models, Liu explains that carbon dioxide (CO2) -- from human activities since the Industrial Revolution -- provides a greenhouse effect of 1.66 watts per square meter that is uniformly distributed over the globe. Fire smoke produces a much larger cooling effect: 7 watts per square meter over the southeast Atlantic during the fire season of each year.
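The two figures are not directly comparable, since the CO2 forcing is global and year-round while the smoke effect is regional and seasonal; a back-of-the-envelope conversion, using area and season fractions that are our own rough assumptions rather than the study's, illustrates the scale:

```python
# Back-of-the-envelope comparison of the two forcings quoted above.
# The area and season fractions are rough assumptions of our own,
# not values from the study.

co2_forcing = 1.66        # W/m^2, global, year-round
smoke_cooling = 7.0       # W/m^2, southeast Atlantic, fire season only

area_fraction = 0.01      # assume the cloud deck covers ~1% of Earth's surface
season_fraction = 4 / 12  # July-October fire season, about a third of the year

global_equivalent = smoke_cooling * area_fraction * season_fraction
print(f"~{global_equivalent:.2f} W/m^2 global-annual equivalent")  # ~0.02 W/m^2
```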

"Our group is the first to quantify this brightening effect," Liu says. "This (smoke aerosols in clouds) reflects more solar radiation to space, which results in less solar radiation reaching the Earth's surface. This creates a cooling effect."

Each year, biomass-burning aerosols from southern Africa are emitted into the atmosphere during the fire season, which runs from July through October. Many of the fires are wildfires, while others are set intentionally to clear farmland. These fires create so much smoke that they are observable on satellite images from space, Liu says, pointing to prominent red patches on his computer screen.

The aerosols are transported westward over the southeast Atlantic Ocean and interact with underlying stratocumulus cloud decks located approximately 1 kilometer above the sea surface, Liu explains.

Previous studies have shown that such aerosols strongly perturb the top-of-atmosphere radiation balance by scattering and absorbing solar radiation, and that they alter cloud properties by changing the stability of the lower troposphere.

Liu says this new study, using state-of-the-art computer modeling and satellite observations, found that the aerosols mixed into the clouds act as cloud condensation nuclei and increase the brightness of stratocumulus clouds. This results in substantial cooling of the Earth.

The research team would ultimately like to refine global climate models by improving how such models account for clouds and their interactions with aerosols emitted from a variety of sources, including power plants, automobiles, deserts and oceans. Over time, they plan to quantify the magnitude of the cooling effect of aerosols on the Earth's climate system, which may have masked the greenhouse effect of CO2. However, the extent of this masking is unknown.

"It's like a puzzle," Liu says.

Credit: 
University of Wyoming

'Epigenetic landscape' is protective in normal aging, impaired in Alzheimer's disease

image: Coronal section of human brain indicating the lateral temporal lobe (red circle) used in this study.

Image: 
The lab of Shelley Berger, PhD, Perelman School of Medicine, University of Pennsylvania

PHILADELPHIA - Although certain genetic variants increase the risk of Alzheimer's disease (AD), age is the strongest known risk factor. But the way in which molecular processes of aging predispose people to AD, or become impaired in AD, remains a mystery. A team of researchers from the Perelman School of Medicine at the University of Pennsylvania, publishing in Nature Neuroscience this week, profiled the epigenomic landscape of AD brains, specifically in one of the regions affected early in AD, the lateral temporal lobe. They compared these to both younger and elderly cognitively normal control subjects. The team described the genome-wide enrichment of a chemical modification of histone proteins that regulates the compaction of chromosomes in the nucleus (called acetylation of lysine 16 on histone H4, or H4K16ac for short).

Changes to the way H4K16ac is modified along the genome in disease versus normal aging brains may signify places for future drug development. Because changes in H4K16ac govern how genes are expressed, the location and amount of epigenetic alterations is called the "epigenetic landscape."

"This is the first time that we have been able to look at these relationships in human tissue by using donated postmortem brain tissue from the Penn Brain Bank," said Shelley Berger, PhD, a professor of Cell and Developmental Biology in the Perelman School of Medicine and a professor of Biology in the School of Arts and Sciences. "Our results establish the basis for an epigenetic link between aging and Alzheimer's disease."

Berger, also the director of the Penn Epigenetics Institute, Nancy Bonini, PhD, a professor of Biology, and Brad Johnson, MD, PhD, an associate professor of Pathology and Laboratory Medicine, are co-senior authors of the new study.

H4K16ac is a key modification in human health because it regulates cellular responses to stress and to DNA damage. The team found that, while normal aging leads to H4K16ac appearing at new positions along the genome and increasing where it is already present, AD, in great contrast, entails losses of H4K16ac in the proximity of genes linked to aging and AD. In addition, the team discovered an association between the location of H4K16ac changes and genetic variants identified in prior AD genome-wide association studies.

A three-way comparison of younger, older, and AD brain tissue revealed a specific class of H4K16ac changes in AD compared to normal age-established changes in the brain. This finding indicates that certain normal aging changes in the epigenome may actually protect against AD, and that when these go awry, a person may become predisposed to AD.

"These analyses point to a new model of Alzheimer's disease. Specifically it appears that AD is not simply an advanced state of normal aging, but rather dysregulated aging that may induce disease-specific changes to the structure of chromatin - the combination of histone proteins and DNA." said first author Raffaella Nativio, PhD, a postdoctoral fellow in Berger's lab.

Accumulations of intercellular amyloid plaques and of neurofibrillary tangles are the two hallmarks of AD that drive the death of neurons and the corresponding loss of cognitive abilities. However, plaques and tangles appear very late in the development of AD, whereas epigenome alterations might occur much earlier and represent targets to attack with medications.

The authors emphasized that this study does not suggest a cure for AD, but rather the possibility of finding ways to prevent nerve cell death and enhance the quality of aging. Their upcoming experiments aim to discover the physiological changes that cause the decrease of H4K16ac specifically in AD brains, but not in normal-aged brains.

Credit: 
University of Pennsylvania School of Medicine

UNC Lineberger researchers identify genetic 'seeds' of metastatic breast cancer

CHAPEL HILL -- Breast cancer is the second leading cause of cancer death in women in the United States, with most deaths caused by the cancer spreading beyond the breast. In a new study, University of North Carolina Lineberger Comprehensive Cancer Center researchers have identified genetic clues that explain how breast cancer spreads, or metastasizes - findings that may lead to better treatments or approaches to prevent its spread at the onset.

In the Journal of Clinical Investigation, the researchers published their analysis of the genetic differences they discovered in patients' primary breast cancers and their metastatic cancers. By understanding how breast cancer metastases evolve, researchers hope to better explain how they occur. This insight could reveal new approaches in the treatment and prevention of metastatic breast cancer.

"This was a very difficult study to do, but it allowed us to take a snapshot of both the primary tumor, and the tumor after it had spread, in order to trace its evolution," said the study's first author Marni Siegel, a graduate student in the UNC MD/PhD program.

Using data drawn from the UNC-Chapel Hill Breast Cancer Tumor Donation Program, the researchers analyzed DNA and the gene expression patterns in both the primary tumor and matched metastatic cancers from 16 patients. One of the major findings was that the cancer typically did not spread outside the breast as a single cell. Instead, researchers found that, based on the genetic patterns, a collection of cells most likely broke away.

"When it spreads, breast cancer often does not spread as a single cell, but rather as a collection of cells that may have different genes driving them," Siegel said. "The metastases in distant organs reflect the diversity that is seen in the original breast cancer."

Siegel said this finding has implications for treatment. If metastatic cancers are most often made up of cells with different genetic drivers, perhaps researchers should be targeting the primary tumor with multiple drugs to contain the cancer.

"We may need more than one drug in order to effectively target these different genetic drivers that we found," Siegel said.

Another major finding was that many of the genetic drivers -- the genetic changes responsible for the aggressiveness of the tumor -- occurred in the primary tumor and were maintained in the metastases.

"A lot of the genes that cause the original cancer are also potentially responsible for the metastatic process, and the cancer may not need to acquire new traits to be able to spread to distant sites," said UNC Lineberger's Charles M. Perou, PhD, the May Goldman Shaw Distinguished Professor of Molecular Oncology, and a professor of Genetics and Pathology & Laboratory Medicine.

The researchers said the findings may suggest that treatments used at the outset may also prevent the cancer's spread.

"A deeper analysis of the primary tumor may be all we need to prevent metastases," said UNC Lineberger's Lisa A. Carey, MD, physician-in-chief of the N.C. Cancer Hospital and the Richardson and Marilyn Jacobs Preyer Distinguished Professor in Breast Cancer Research, who leads the Tumor Donation Program. "If you can get a better handle on the biology of the primary tumor, and the elements of the tumor that may be more or less dangerous, then you don't need to worry about testing every single metastasis for treatment decisions."

Through their analysis, the researchers determined that the genetic alterations found in both the primary tumors and the metastases frequently were a type of genetic abnormality called a copy number alteration, in which sections of DNA are repeated (i.e., duplicated). Often it was genes involved in cellular metabolism that were upregulated. Since cells inside tumors are isolated from the blood supply, they must adapt and rely on alternative methods of energy usage and production.

"We saw frequent copy number changes, and gene expression changes in metastases that reflect a change in metabolism," Siegel said. "These tumors are genetically very complex, and we demonstrated this by showing just the sheer volume of copy number changes, and that there are multiple clones in a single metastases tumor, and changes in metabolism that reflects the tumor's ability to adapt."

Although they found genetic drivers in the metastases that originated in the primary tumors, the researchers also found genetic variation between metastases in different organs coming from the same patient.

"This variation could help explain why sometimes we can see responses in the lung, but progression in the liver to the very same therapy," said UNC Lineberger's Carey Anders, MD, medical director of the UNC Breast Center and co-senior author of the paper.

"There are likely resistance mechanisms that are site-specific," added Anders, who is co-director of the Breast Cancer Tumor Donation Program at UNC Lineberger at UNC Lineberger and associate professor in the UNC School of Medicine Division of Hematology/Oncology. "Being able to understand what came from the original tumor and what happened only in the metastases is key to improving treatment. The unique resource of the Tumor Donation Program, coupled with UNC's expertise in genomics, allowed our team to make these discoveries."

Credit: 
UNC Lineberger Comprehensive Cancer Center

Don't talk and drive

In their detailed analysis of dozens of empirical studies on the effects of talking while driving, human factors researchers have provided a comprehensive and credible basis for governments seeking to enact legislation restricting drivers' use of cell phones. The analysis, just published in Human Factors: The Journal of the Human Factors and Ergonomics Society, is titled "Does Talking on a Cell Phone, With a Passenger, or Dialing Affect Driving Performance? An Updated Systematic Review and Meta-Analysis of Experimental Studies."

Author Jeff Caird, a professor in psychology and community health sciences at the University of Calgary, notes that the number of studies on cell phones and driving has more than tripled since the last meta-analysis was conducted in 2008. He and coauthors Sarah Simmons, Katelyn Wiley, Kate Johnston, and William Horrey aimed to update and extend the reliability and validity of the previous conclusions.

They examined 93 studies that were published between 1991 and 2015 and measured the effects of cell phone use on driving. The overall sample had 4,382 participants, with drivers' ages ranging from 14 to 84 years. The studies measured variables such as drivers' reaction time to hazards or emergency events, stimulus detection, lane positioning, speed, eye movements, and collisions.

Overall, the studies concluded that speaking on both handheld and hands-free phones negatively impacted driving performance, and drivers who engaged in conversation with their passengers experienced similar negative effects. Moreover, dialing, like texting, requires drivers to look away from the road for an extended period and can result in even greater detriments to driving performance than conversation alone.

"Driving is a distraction from everyday distractions such as cell phones," Caird notes. "The technological solution of driverless vehicles will allow us to get back to our preferred distractions. Until then..."

Credit: 
Human Factors and Ergonomics Society

Suicide prevention: Choosing the right word

A new study reveals the impact of the associative meaning of a single word on how readers subsequently view and refer to suicide.

In German, three terms are used to denote suicide - Suizid, Selbstmord ('self-murder') and Freitod ('free death'). A new empirical study shows that the choice of word used in media reports of suicides has a measurable impact on how readers subsequently perceive and evaluate the act of suicide. The study was carried out by Dr. Florian Arendt of the Department of Communication Science and Media Research at Ludwig-Maximilians-Universitaet (LMU) in Munich, together with his former colleague Dr. Sebastian Scherr (who is now at the Leuven School for Mass Communication Research in Belgium) and researchers based at the Medical University of Vienna. It has now been published in the journal Social Science & Medicine.

The authors recruited 451 individuals for the web-based study. The sample was first divided into three groups. The participants read short newspaper reports on suicides, which differed from one another only in the word used to refer to the act itself (Suizid, Selbstmord or Freitod). The texts given to members of each group used only one of these terms. Participants were then asked to summarize the content of what they had read in their own words, and to fill in the blanks in a word puzzle designed to test implicit memory. Finally, they were asked about their personal attitudes to suicide. "We found a clear effect, insofar as participants favored the use of the term that they had previously read in the texts assigned to each of them," says Florian Arendt.

In addition, the results provided the first indications that the three terms actually trigger different associations in readers' minds. Participants who had read the reports that referred to Freitod expressed a more positive view of suicide by incurably ill patients than those who had encountered either of the other terms. Notably, the use of Freitod by the media is regarded as problematic. "The word Freitod implies that the victim made a clear-sighted and rational decision. But all the evidence suggests that suicidal individuals typically have a very restricted perspective on their personal situation, their lives and their surroundings - they exhibit a kind of emotional tunnel vision. Seen in this light, it is very difficult to describe such a decision as free or rational," Arendt points out.

In an earlier publication, Arendt had shown that the expression most frequently chosen by news media in the German-speaking countries is Selbstmord. Meanwhile, the neutral term Suizid is used almost as often, but nevertheless the word Freitod still appears regularly in press reports. Selbstmord is not recommended because of its implicit reference to crime. Given the problematic connotations of the word Freitod noted above, the recommended term for use in reports of suicide in the German media is the neutral term Suizid.

"Our study underlines the fact that the media could play a major role in the prevention of suicides. Journalists should take care to choose the least 'loaded' term, says Arendt. However, careful choice of words is "only one of the measures which empirical research has shown can reduce the incidence of suicide." The World Health Organization (WHO) has issued guidelines for news coverage of suicides. In the English language, for example, the WHO-recommendation is not to refer to suicide as "successful" or as a "failed attempt" because these terms may elicit problematic associative meanings, implying that death is a desirable outcome. Instead, it is recommended to write "died by suicide."

Credit: 
Ludwig-Maximilians-Universität München

Restoring lipid synthesis could reduce lung fibrosis

(PHILADELPHIA) - Pulmonary fibrosis, an ongoing process of scarring that leaves patients chronically short of breath, can progress in severity until the only course of treatment is lung transplant. A new study shows that restoring the lipids that help keep lung tissue flexible and inflated can help slow disease progression in laboratory models of pulmonary fibrosis.

"This is the first paper to show that rather than being a 'second hit' to help initiate the disease, blocking lipid synthesis alone -- with no other insult to the lungs -- can instigate fibrotic scaring," said Ross Summer Professor of Medicine at Thomas Jefferson University and physician-researcher in the Jane and Leonard Korman Respiratory Institute.

"This suggests that failure to produce lipids, perhaps because of injury or age-related metabolic changes in lung cells, may be an underappreciated process in development of lung fibrosis, one that may also offer a new and potentially easier path to new treatment of this disease," said Dr. Romero lead author and NIH-funded investigator on the study published Feb 21st in the American Journal of Respiratory Cell and Molecular Biology.

Surfactants, lipid-rich substances produced inside the lung tissue, allow airways to inflate and deflate with ease. In fact, surfactants are often one of the first treatments given to premature infants to help ensure the lungs inflate and develop properly. In addition, all cells within the lung need lipids as signaling molecules and to build their internal and external membranes. But in earlier work, Drs. Summer and Romero had shown that when lung tissue is injured -- by things like viral infection, particulate inhalation, or other insults -- lung cells eventually stop producing lipids in order to conserve energy for other cellular repairs.

In this study, the teams of Drs. Summer and Romero used a drug that inhibits lipid production in the lung and showed that this drug alone was capable of instigating lung fibrosis. In the converse experiment, the group showed that increasing lipid production in the lungs of animals that were already injured and developing pulmonary fibrosis reduced lung scarring by 70 to 80 percent.

Lung fibrosis is also thought to come about when the endoplasmic reticulum (ER) in the cells of the lung becomes stressed and can no longer properly fold and unfold proteins. "We think that the chronic ER stress might ensue because of the inability of cells to produce sufficient lipids to supply their vast amount of ER membranes. Without appropriate lipid stores, the ER cannot properly manufacture proteins or remove damaged ones from the cell into lysosomes; as a result, damage accumulates in these lung cells, leading to irreversible fibrosis."

Dr. Summer and colleagues are currently working to develop a therapy that could restore lipid production in the lungs of pulmonary fibrosis patients and slow the fibrotic process. For Summer, a physician who treats patients with pulmonary fibrosis in a multidisciplinary clinic at Jefferson, the research has a sense of urgency. "I'd like to be able to offer my patients better options for this disease," he said.

Credit: 
Thomas Jefferson University