Estimating the wear and tear of ice on structures over coming decades or even centuries

image: An example of an ice rubble formation by the Gulf of Bothnia, Finland.

Image: 
Janne Ranta/Aalto University

Conditions in Arctic waters are among the most extreme in the world. Strong winds and currents powerfully push ice across vast distances, resulting in large ridges reaching tens of metres in height. At the same time, global warming and increased human presence have added new pressures in these northern environments. Yet the mechanisms behind the effects of ice on physical structures -- particularly over the long term -- have remained an open question in a time of continuous environmental change.

Researchers at Aalto University have developed a new method of assessing how ever-moving, heavy loads of ice affect structures like bridges or even wind turbines across a wide variety of conditions.

'Rubbling is the process of how ice -- moved by winds and currents -- breaks against marine structures and how it impacts these structures,' explains Jukka Tuhkuri, a leading ice researcher and professor at Aalto University. 'The process is extremely sensitive to initial conditions, chaotic even, which makes systematic analysis in the field incredibly challenging.'

To foresee a wealth of scenarios and effects over the very long term, the Aalto-based team is making use of numerical experiments -- advanced computer simulations that draw on knowledge gained from the field -- to see the effects of micro-level changes to known elements of rubbling.

'With this method, we can really see what's happening since we have full control of the factors involved. With real sea ice, we just don't have that possibility,' says Assistant Professor Arttu Polojärvi.

The precise simulations have allowed researchers to learn about the mechanics behind the process in a way never before possible.

'By constantly running simulations, we've learned that the thickness of ice is by far the most important thing when it comes to how ice loads impact structures. Compressive strength comes second but we can almost forget about everything else, contrary to conventional thinking in the field,' says Tuhkuri.

The method also helps address the main challenge of an ever-changing climate: foreseeing future conditions. Global warming means Arctic ice is getting thinner, storms are becoming more severe and more ice is moving. At the same time, overall conditions are easing in these regions; industry and tourism are picking up, which carries risks for both humans and the environment.

'We can't estimate the future with the field data available today. Strong, thick ice stays where it is, but even small storms can carry thin ice,' says Polojärvi. 'We need to be able to estimate 100- or even 500-year ice loads on permanent structures so we can ensure they are safe and last, while also minimising the materials used to make them as sustainable as possible.'
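A "100-year load" is a return-period estimate: the load expected to be exceeded once per century on average. As a rough illustration only, and not the Aalto team's method, one can fit a Gumbel extreme-value distribution to a set of simulated annual maximum loads and read off the N-year value; the data and units below are made up:

```python
import math
import random

def n_year_load(annual_maxima, n_years):
    """Estimate the N-year return load from annual maxima using a
    method-of-moments Gumbel fit (illustrative sketch only)."""
    m = len(annual_maxima)
    mean = sum(annual_maxima) / m
    var = sum((x - mean) ** 2 for x in annual_maxima) / (m - 1)
    beta = math.sqrt(6 * var) / math.pi   # Gumbel scale parameter
    mu = mean - 0.5772 * beta             # Gumbel location (Euler-Mascheroni constant)
    p = 1 - 1 / n_years                   # annual non-exceedance probability
    return mu - beta * math.log(-math.log(p))

# Stand-in for simulated annual maximum ice loads (arbitrary units).
random.seed(1)
maxima = [random.gauss(10, 2) for _ in range(200)]
print(round(n_year_load(maxima, 100), 1))
print(round(n_year_load(maxima, 500), 1))
```

The 500-year estimate is necessarily larger than the 100-year one, which is why long design horizons demand either long records or, as here, large numbers of simulations.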

The team will present their work on Friday 7 June 2019 at a week-long gathering of leading ice researchers, the IUTAM symposium on physics and mechanics of sea ice, held at Aalto University in Greater Helsinki, Finland.

Questions and Answers

Q: Have numerical experiments been used before to study ice loads?

A: This research marks the first time that numerical experiments have been used to conduct statistical analysis of ice loads, providing estimates of future wear and tear on physical structures over the very long term. In the past, researchers have studied ice rubbling -- that is, how ice pushed by winds and currents breaks against marine structures and how it impacts the structures -- in various ways: experimentally in laboratories, experimentally by observing full-scale events, by developing theoretical models, and by using different numerical tools.

Q: Can the method really predict how ice will behave as the climate heats up -- even centuries ahead?

A: Yes, if colleagues in geophysics can predict how ice gets thinner and weaker with global warming, we can predict how the ice loads change. This is because the new model is based on fundamental physical relations and, thus, we can change the thickness in the model and see what happens. Other models may not be as detailed.

Q: What risks do ice loads carry to humans and the environment?

A: Marine structures can break and cause accidents. If humans are working on a platform that fails, the consequences can be serious. Structural failures can also be very expensive and lead to environmental risks due to potential leaks in fuel tanks and fuel pipes, etc.

Credit: 
Aalto University

'Citizen scientists' help track foxes, coyotes in urban areas

image: A trail camera captured images of a fox in its den on the University of Wisconsin-Madison campus.

Image: 
Photo courtesy UW Urban Canid Project

CHAMPAIGN, Ill. -- As foxes and coyotes adapt to urban landscapes, the potential for encounters with humans necessarily goes up. A team of scientists is taking advantage of this fact to enlist the eyeballs and fingertips of humans -- getting them to report online what they see in their own neighborhoods and parks.

One goal of the research, reported in the journal Landscape and Urban Planning, is to determine how information gathered by "citizen scientists" stacks up against data more laboriously collected by researchers who study canids like foxes and coyotes.

"Working with citizen scientists allows us to collect data over a period of years across large areas with basically no cost," said Max Allen, a wildlife ecologist at the Illinois Natural History Survey who led the research. "Getting the same data is expensive and time-consuming using traditional methods like radio-tracking collars."

An untested assumption is that community-generated data are reliable and valid, Allen said. Another concern has to do with the likelihood that citizen scientists will over-report sightings near their own homes and fail to observe canids in less urbanized areas, thus skewing the data to areas where humans are more likely to encounter the animals, he said.

To help address these concerns, Allen and his colleagues at the University of Wisconsin-Madison simultaneously compared information collected by scientists using radio- and GPS-tracking of red foxes and coyotes with data citizen scientists shared online. The researchers used social media to recruit the citizen scientists, directing them to the iNaturalist.org website, one of the largest citizen-science projects in the world.

The study area encompassed a 6,120-hectare expanse in Madison that included the University of Wisconsin campus. The territory ranged from highly developed areas to natural areas and open water. Some of the less-developed parts of the landscape were open lawns or meadows with few trees, and some were covered in dense forest.

The researchers trapped and tagged 11 coyotes and 11 red foxes, fitting them with either radio telemetry or GPS collars. They closely monitored the animals' movements from May 2015 to December 2016. At the same time, they collected several hundred community-generated observations of red foxes and coyotes through the iNaturalist website.

After working to confirm the reliability of the reports from citizen scientists, the researchers found a significant amount of overlap between the research data and citizen sightings, roughly 65% for foxes and 56% for coyotes. Coyote home ranges were centered on the three large green spaces in the study area, while foxes tended to avoid these areas, Allen said. The foxes were more likely to be found in residential neighborhoods.
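One simple way to quantify that kind of overlap (a sketch, not the paper's actual analysis) is to grid the study area and ask what fraction of citizen sightings fall in cells that tracked animals visited; all coordinates and cell sizes here are invented:

```python
def cell(x, y, size=100):
    """Map a coordinate (metres) to a grid-cell index."""
    return (int(x // size), int(y // size))

def overlap_fraction(sightings, tracked_points, cell_size=100):
    """Fraction of citizen sightings landing in grid cells
    visited by radio- or GPS-tracked animals."""
    tracked_cells = {cell(x, y, cell_size) for x, y in tracked_points}
    hits = sum(1 for x, y in sightings if cell(x, y, cell_size) in tracked_cells)
    return hits / len(sightings)

tracked = [(120, 80), (150, 95), (420, 300)]          # collar fixes
reports = [(130, 70), (900, 900), (410, 310), (145, 99)]  # citizen sightings
print(overlap_fraction(reports, tracked))  # 3 of 4 reports overlap -> 0.75
```

Real home-range estimates use smoothed utilization distributions rather than raw grid cells, but the comparison logic is the same: tracked space versus reported space.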

"We believe this led to red foxes having greater overlap because they occurred closer to where people spend most of their time," Allen said.

The researchers found that the citizen scientists on the iNaturalist site didn't track actual distribution of the canids but were more likely to identify hotspots of human-carnivore interactions.

"This shows us the areas where conflicts between humans and carnivores are most likely to occur," Allen said. "The online reports can be used as a tool to proactively monitor and manage urban wildlife."

The online reporters did not capture as much data as the scientists could with their more sophisticated techniques, but the findings suggest research efforts could greatly benefit from the participation of community reporters, Allen said.

"A side benefit of citizen science and programs like iNaturalist is that they help people learn more about their local wildlife while also helping scientists collect accurate data," Allen said. "It is important for them to collect accurate data, but we vetted each photograph that was reported and found that almost all of them accurately identified the species."

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

Building blocks of the Earth

Chemical analyses of meteorites allow for a better estimation of the chemical composition of the Earth and its potential building blocks. That is the result of a study conducted by a research team from the Institutes of Geology and Mineralogy at the Universities of Cologne and Bonn. The results have appeared in the current issue of 'Nature Geoscience'.

The study focuses on the distribution and origin of so-called volatile elements such as zinc, lead and sulphur, which have low boiling temperatures in space. The newly determined distribution of these volatile elements in the Earth shows that some of these building blocks have a chemical composition similar to carbonaceous chondrites, a water-rich group of primitive meteorites. These meteorites come closest to the composition of the original solar nebula from which our solar system formed. Thus, the study also indirectly provides another valuable indication of the source of vital components such as water, carbon and nitrogen on Earth.

The chemical composition of the Earth is not easy to determine. Geological processes such as the formation of the metallic core and the outer crust led to a redistribution of the elements composing our planet. For example, elements attracted to iron have migrated into the Earth's core, while elements attracted to silicate compose the rocks of the Earth's mantle and crust. 'Today, we only have access to samples from the silicate part of the Earth, which is why we can only estimate the chemical composition of the entire Earth through the additional analysis of primitive meteorites -- the potential building blocks of the Earth,' said Professor Carsten Münker from the University of Cologne. The recent publication makes an important contribution to understanding the chemical composition of the deeper layers of the Earth.

The research team focused on the distribution of volatile trace elements such as the rare metals indium, cadmium and tellurium. This is a particular challenge, since a proportion of these metals was already lost at the beginning of the solar system due to their volatility. Today, they are extremely rare both in meteorites and in the Earth -- less than one gram per ton of rock. 'So far, we have always assumed that the abundances of these elements decrease linearly the more volatile they are,' said the geochemist Dr Frank Wombacher, one of the initiators of the study.

By using high-precision methods, however, the scientists arrived at a surprising result. 'While the abundances initially decrease linearly, the most volatile elements are, contrary to expectations, all equally depleted,' explains Ninja Braukmüller, a doctoral researcher who carried out the study in Cologne. Indium and zinc, the volatile elements attracted to silicate in the Earth's mantle, also show this pattern. 'This seems to be unique among the potential building blocks of the Earth,' says Dr Claudia Funk, a co-author of the study. The results allow the scientists to conclude that the building blocks that brought volatile elements to Earth are similar in chemical composition to primitive carbonaceous chondrites.

Credit: 
University of Cologne

Association for Molecular Pathology expresses serious concerns with Congress' attempt to resurrect human gene patenting debate and reverse settled Supreme Court ruling

ROCKVILLE, Md. -- June 4, 2019 -- The Association for Molecular Pathology (AMP), the premier global, molecular diagnostic professional society, expressed serious concerns with Congress' recent proposal to amend Section 101 of the Patent Act. If enacted, the draft legislation would overturn 150 years of patent case law and permit patenting of human genes and naturally-occurring associations between genes and diseases. In a recent letter to Senators Coons and Tillis, and Representatives Collins, Johnson and Stivers, AMP joined a diverse community of 169 medical, scientific, patient advocacy, women's health, and civil rights organizations, opposing the recent proposal.

For over 150 years, the Supreme Court has held that laws of nature, natural phenomena and abstract ideas are not patent-eligible under Section 101 of the Patent Act. In the landmark 2013 Association for Molecular Pathology v. Myriad Genetics case, a unanimous Supreme Court ruled in favor of AMP and determined that "A naturally occurring DNA segment is a product of nature and not patent eligible merely because it has been isolated." The Court concluded that such patents would lock up genetic information and prevent others from conducting scientific and medical work.

"The Court's 2013 ruling was the culmination of many years of deep concern within the medical field and was celebrated across the greater scientific community who fought hard for the chance to be heard," said Mary Steele Williams, Executive Director of AMP. "Today, AMP is prepared to win this fight again. In this age of precision medicine, it is more important than ever to maintain the boundaries between nature and technology, so that we can continue to develop innovative diagnostics for devastating diseases and provide access to the best medical care. Since the Supreme Court's decision, multiplex gene panels that feature dozens of genes in a single test are now routine practice. Advances such as this would have been difficult if not impossible without the Court's decision."

AMP maintains its position that the authorization of patents for human genes and naturally occurring associations between genes and diseases would impede the scientific community from working together to discover novel diagnostics and treatments for rare and common diseases including cancer, muscular dystrophy, Alzheimer's disease, and heart disease. It would also create barriers to patients' access to potentially life-saving genomic tests and eliminate access to confirmatory testing. Further, these patents would create monopolies that would stifle competition and dramatically increase costs for payers, patients and the healthcare system overall.

"The fact that the new proposed legislation was drafted with input only from a select group of patent holders, lawyers, large corporations, and the pharmaceutical industry should be of grave concern to the public," said Victoria M. Pratt, PhD, FACMG, Associate Professor, Director of Pharmacogenetics and Molecular Genetics Laboratories, Indiana University School of Medicine, and President of AMP. "AMP members share a common goal of putting the patient first and we will continue to offer our collective expertise and engage key stakeholders and members of Congress to make sure this proposed legislation as currently drafted is not passed into law."

Credit: 
Association for Molecular Pathology

Sleepless nights linked to high blood pressure

A bad night's sleep may result in a spike in blood pressure that night and the following day, according to new research led by the University of Arizona.

The study, to be published in the journal Psychosomatic Medicine, offers one possible explanation for why sleep problems have been shown to increase the risk of heart attack, stroke and even death from cardiovascular disease.

The link between poor sleep and cardiovascular health problems is increasingly well-established in scientific literature, but the reason for the relationship is less understood.

Researchers set out to learn more about the connection in a study of 300 men and women, ages 21 to 70, with no history of heart problems. Participants wore portable blood pressure cuffs for two consecutive days. The cuffs took participants' blood pressure at random times within 45-minute intervals throughout each day and overnight.

At night, participants wore actigraphy monitors -- wristwatch-like devices that measure movement -- to help determine their "sleep efficiency," or the amount of time in bed spent sleeping soundly.
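In its simplest form (a sketch, not the study's actigraphy scoring algorithm), sleep efficiency is just minutes scored asleep divided by minutes in bed:

```python
def sleep_efficiency(epochs):
    """epochs: list of one-minute actigraphy epochs, True = scored asleep.
    Returns the fraction of time in bed spent asleep."""
    return sum(epochs) / len(epochs)

# Hypothetical night: 8 hours in bed, 90 minutes scored awake.
night = [True] * 390 + [False] * 90
print(f"{sleep_efficiency(night):.0%}")  # prints 81% (390 / 480 minutes)
```

Clinically, efficiencies above roughly 85% are often treated as normal, though the threshold used in any given study varies.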

Overall, those who had lower sleep efficiency showed an increase in blood pressure during that restless night. They also had higher systolic blood pressure -- the top number in a patient's blood pressure reading -- the next day.

More research is needed to understand why poor sleep raises blood pressure and what it could mean long-term for people with chronic sleep issues. Yet, these latest findings may be an important piece of the puzzle when it comes to understanding the pathway through which sleep impacts overall cardiovascular health.

"Blood pressure is one of the best predictors of cardiovascular health," said lead study author Caroline Doyle, a graduate student in the UA Department of Psychology. "There is a lot of literature out there that shows sleep has some kind of impact on mortality and on cardiovascular disease, which is the No. 1 killer of people in the country. We wanted to see if we could try to get a piece of that story - how sleep might be impacting disease through blood pressure."

The study reinforces just how important a good night's sleep can be. It's not just the amount of time you spend in bed, but the quality of sleep you're getting, said study co-author John Ruiz, UA associate professor of psychology.

Improving sleep quality can start with making simple changes and being proactive, Ruiz said.

"Keep the phone in a different room," he suggested. "If your bedroom window faces the east, pull the shades. For anything that's going to cause you to waken, think ahead about what you can do to mitigate those effects."

For those with chronic sleep troubles, Doyle advocates cognitive behavioral therapy for insomnia, or CBTI, which focuses on making behavioral changes to improve sleep health. CBTI is slowly gaining traction in the medical field and is recommended by both the American College of Physicians and the American Academy of Sleep Medicine as the first line of treatment for insomnia.

Doyle and Ruiz say they hope their findings - showing the impact even one fitful night's rest can have on the body - will help illuminate just how critical sleep is for heart health.

"This study stands on the shoulders of a broad literature looking at sleep and cardiovascular health," Doyle said. "This is one more study that shows something is going on with sleep and our heart health. Sleep is important, so whatever you can do to improve your sleep, it's worth prioritizing."

Credit: 
University of Arizona

Everything will connect to the internet someday, and this biobattery could help

image: This solid phase bacteria-powered biobattery could be a low-cost power source for the Internet of Disposable Things.

Image: 
Sean Choi

BINGHAMTON, N.Y. -- In the future, small paper and plastic devices will be able to connect to the internet for a short duration, providing information on everything from healthcare to consumer products, before they are thrown away. Researchers at Binghamton University, State University of New York have developed a micro biobattery that could power these disposable sensors.

The Internet of Disposable Things is a phenomenon in which wireless sensors are attached to nearly any type of device in order to provide up-to-date information via the internet. For example, a sensor could be attached to food packaging to monitor the freshness of the food inside.

"Internet of Disposable Things (IoDT) is a new paradigm for the rapid evolution of wireless sensor networks," said Seokheun Choi, associate professor of electrical and computer engineering at Binghamton University. "This novel technique, constructed in a small, compact, disposable package at a low price point, can connect things inexpensively to function for only a programmed period and then be readily thrown away."

Choi's previous small-size microbial fuel cells suffered from low power density and energy-intensive fluidic feeding operation, so he reasoned that a low-power, disposable, solid-state battery-type microbial fuel cell platform without the fluidic system would be more applicable and potentially realizable.

"Previously, my group had two directions: 1) disposable paper-based biobatteries for single-use low-power systems (e.g. biosensors) and 2) long-term microbial fuel cells for sustainable applications," said Choi. "The biobattery we developed this time was a kind of combined technique of those two; the power duration was significantly enhanced by using solid-state compartments but the device is a form of a battery without complicated energy-intensive fluidic feeding systems that typical microbial fuel cells require."

"Current IoDTs are mostly powered by expensive and environmentally-hazardous batteries, thus, ultimately leading to significant cost increases and environmental issues for their large-scale deployment," added Choi. "Our biobattery is low-cost, disposable and environmentally-friendly."

Choi is in the process of integrating serially connected biobatteries with a DC-DC converter.

Credit: 
Binghamton University

New research explores the mechanics of how birds flock

video: Wildlife researchers have long tried to understand why birds fly in flocks and how different types of flocks work. This video shares findings from a new study from the University of North Carolina at Chapel Hill that explores the mechanics and benefits of the underlying flock structure used by four types of shorebirds.

Image: 
UNC-Chapel Hill

Wildlife researchers have long tried to understand why birds fly in flocks and how different types of flocks work. A new study from the University of North Carolina at Chapel Hill explores the mechanics and benefits of the underlying flock structure used by four types of shorebirds. Understanding more about how these birds flock moves researchers a step closer to understanding why they flock.

The study, led by Aaron Corcoran, a postdoctoral researcher studying bat and bird flight and ecology, and biology Professor Tyson Hedrick of UNC-Chapel Hill, appears in the June 4 issue of eLife. The National Science Foundation funded the work.

In the study, the researchers focused on four types of shorebirds that vary in size: dunlin, short-billed dowitcher, American avocet and marbled godwit. Corcoran and Hedrick filmed and analyzed almost 100 hours of video footage to better understand the mechanics of shorebird flocks. They found that the birds fly in a newly defined shape the team named a compound V-formation, which they believe provides an aerodynamic advantage and predator protection.

This compound formation is a blend of two of the most common flock formations. One is a cluster formation, common with pigeons, where a large number of birds fly in a moving three-dimensional cloud with no formal structure. This structure is useful for avoiding predators. The second is a simple V-formation, commonly used by Canada geese, where a smaller number of birds line up in a well-defined two-dimensional V-shape.

"A flying bird creates downward-moving air immediately behind it and upward-moving air just beyond its wingspan on the left and right," Hedrick said. "Taking advantage of this upward-moving air is all about positioning, and birds in the simple-V formation and compound-V formation are positioned correctly for aerodynamic advantage."

To better understand the compound V-formation and its mechanics, Corcoran and Hedrick recorded 18 cluster-like flocks of 100 to 1,000 birds flying over a bird sanctuary and agricultural fields during a migration stopover. The researchers measured individual bird positions, flight speeds and even flapping frequency using three-dimensional computer reconstructions of the flocks from the video recordings.

"We thought we would find a trend in flock organization related to how large or small the different birds were," Hedrick said. "Instead we saw that regardless of size, all these birds flew in the same formation -- one that might let them get an aerodynamic benefit while flying in large groups, aiding their long-distance migration."

Birds often fly in flocks ranging from very structured V-formations to loose clusters to improve flight efficiency, navigation or for predator avoidance. However, because it is difficult to measure large flocks of moving birds, few studies have measured how birds position themselves in large flocks or how their position affects their flight speed and flapping frequency.

The four types of birds studied in this project live in similar environments, but vary greatly in size, fly at different speeds, and have been evolutionarily separate for 50 million years. The birds mostly flocked with their own species, except for a few occasions where the godwits and dowitchers flew together in a mixed flock.

The study also showed that each bird -- regardless of size or species, or even the species of its neighbor -- most commonly flew about one wingspan to the side and between a half to one-and-a-half wingspans back from the bird in front of it. This flock structure, which is different from that of other flocking birds like pigeons and starlings, was termed a compound V-formation because birds flying in simple V-shaped formations follow similar rules.
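The positioning rule above can be expressed as a simple check in wingspan-normalized coordinates. This is an illustrative sketch using the article's reported offsets (about one wingspan to the side, half to one-and-a-half wingspans back); the tolerance bands are invented, not the study's classification criteria:

```python
def in_compound_v_slot(back, side, wingspan):
    """Check whether a trailing bird occupies the most common position
    reported in the study, relative to the bird ahead of it.

    back: distance behind the leading bird (same units as wingspan)
    side: lateral offset, positive either direction
    Thresholds are illustrative tolerances, not the paper's values.
    """
    back_ws = back / wingspan
    side_ws = abs(side) / wingspan
    return 0.5 <= back_ws <= 1.5 and 0.75 <= side_ws <= 1.25

print(in_compound_v_slot(back=0.9, side=1.0, wingspan=1.0))  # True: in the slot
print(in_compound_v_slot(back=0.2, side=0.1, wingspan=1.0))  # False: directly behind
```

Because the rule is stated in wingspans rather than metres, the same check applies to a small dunlin and a much larger marbled godwit, which matches the study's finding that the formation is size-independent.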

Credit: 
University of North Carolina at Chapel Hill

Gene-edited chicken cells resist bird flu virus in the lab

Scientists have used gene-editing techniques to stop the bird flu virus from spreading in chicken cells grown in the lab.

The findings raise the possibility of producing gene-edited chickens that are resistant to the disease.

Researchers prevented the virus from taking hold by deleting a section of chicken DNA inside lab-grown cells.

The next step will be to try to produce chickens with the genetic change. No birds have been produced yet, the team says.

Scientists targeted a specific molecule inside chicken cells called ANP32A. Researchers at Imperial College London found that during an infection, flu viruses hijack this molecule to help replicate themselves.

Working with experts from the University of Edinburgh's Roslin Institute, the researchers used gene-editing techniques to remove the section of DNA responsible for producing ANP32A.

They found the virus was no longer able to grow inside cells with the genetic change.

Bird flu is a major threat to farmed chickens worldwide, with severe strains killing up to 100 per cent of birds in a flock. In rare instances, certain variations of the virus can infect people and cause serious illness. Efforts to control the spread of the disease are urgently needed.

Researchers at The Roslin Institute previously worked with experts from Cambridge University to produce chickens that did not transmit bird flu to other chickens following infection, using genetic modification techniques. The new approach is different because it does not involve introducing new genetic material into the bird's DNA.

The study was funded by the Biotechnology and Biological Sciences Research Council, which also provides strategic funding to The Roslin Institute. PhD student funding was provided by the global poultry research company Cobb-Vantress. The research is published in the journal eLife.

Dr Mike McGrew, of the University of Edinburgh's Roslin Institute, said: "This is an important advance that suggests we may be able to use gene-editing techniques to produce chickens that are resistant to bird flu. We haven't produced any birds yet and we need to check if the DNA change has any other effects on the bird cells before we can take this next step."

Professor Wendy Barclay, Chair in Influenza Virology at Imperial College London, said: "We have long known that chickens are a reservoir for flu viruses that might spark the next pandemic. In this research, we have identified the smallest possible genetic change we can make to chickens that can help to stop the virus taking hold. This has the potential to stop the next flu pandemic at its source."

Rachel Hawken, Senior Director of Genomics and Quantitative Genetics at Cobb-Vantress, said: "Avian influenza resistance in broiler production is of global significance and this research is an important step toward that goal. It is exciting for Cobb to be a part of exploring new technologies that could be used to advance poultry breeding in the future."

Credit: 
University of Edinburgh

Red and white meats are equally bad for cholesterol

Contrary to popular belief, red meat and white meat, such as poultry, have equal effects on blood cholesterol levels, according to a study published today in the American Journal of Clinical Nutrition.

The study, led by scientists at Children's Hospital Oakland Research Institute (CHORI) -- the research arm of UCSF Benioff Children's Hospital Oakland -- surprised the researchers with the discovery that consuming high levels of red meat or white poultry resulted in higher blood cholesterol levels than consuming a comparable amount of plant proteins. Moreover, this effect was observed whether or not the diet contained high levels of saturated fat, which increased blood cholesterol to the same extent with all three protein sources.

"When we planned this study, we expected red meat to have a more adverse effect on blood cholesterol levels than white meat, but we were surprised that this was not the case -- their effects on cholesterol are identical when saturated fat levels are equivalent," said the study senior author Ronald Krauss, M.D., senior scientist and director of Atherosclerosis Research at CHORI.

Krauss, who is also a UCSF professor of medicine, noted that the meats studied did not include grass-fed beef or processed products such as bacon or sausage; nor did the study include fish.

But the results were notable, as they indicated that restricting meat altogether, whether red or white, is more advisable for lowering blood cholesterol levels than previously thought. The study found that plant proteins are the healthiest for blood cholesterol.

This study, dubbed the APPROACH (Animal and Plant Protein and Cardiovascular Health) trial, also found that consuming high amounts of saturated fat increased concentrations of large cholesterol-enriched LDL particles, which have a weaker connection to cardiovascular disease than smaller LDL particles.

Similarly, red and white meat increased amounts of large LDL in comparison to nonmeat diets. Therefore, using standard LDL cholesterol levels as the measure of cardiovascular risk may lead to overestimating that risk for both higher meat and saturated fat intakes, as standard LDL cholesterol tests may primarily reflect levels of larger LDL particles.

Consumption of red meat has become unpopular during the last few decades over concerns about its association with increased heart disease. Government dietary guidelines have encouraged the consumption of poultry as a healthier alternative to red meat.

But there had been no comprehensive comparison of the effects of red meat, white meat and nonmeat proteins on blood cholesterol until now, Krauss said. Nonmeat proteins from sources such as vegetables, dairy and legumes like beans show the best cholesterol benefit, he said.

"Our results indicate that current advice to restrict red meat and not white meat should not be based only on their effects on blood cholesterol," Krauss said. "Indeed, other effects of red meat consumption could contribute to heart disease, and these effects should be explored in more detail in an effort to improve health."

Credit: 
University of California - San Francisco

Gene mutation evolved to cope with modern high-sugar diets

The gene variant became more common in humans after cooking and farming became widespread, and might now help people avoid diabetes, according to the findings published in eLife.

"We found that people differ in how efficiently their bodies can manage blood sugar levels, resulting from an evolutionary process that seems to have been brought about by changing diets," said the study's lead author, Professor Frances Brodsky, Director of UCL Biosciences.

The researchers were investigating the CLTCL1 gene, which directs production of the CHC22 protein that plays a key role in regulating a glucose transporter in our fat and muscle cells.

After people eat, the hormone insulin reacts to higher levels of blood glucose by releasing the transporter to remove glucose from the blood, taking it into muscle and fat tissue. Between meals, with the help of the CHC22 protein, the glucose transporter remains inside muscle and fat so that some blood sugar will continue to circulate.

The research team, consisting of specialists in population genetics, evolutionary biology, ancient DNA and cell biology, analysed human genomes as well as those of 61 other species, to understand how the gene producing CHC22 has varied throughout evolutionary history.

In humans, by looking at the genomes of 2,504 people from the global 1000 Genomes Project, they found that almost half of the people in many ethnic groups have a variant of CHC22 that is produced by a mutated gene, which became more common as people developed cooking and farming.

The researchers also looked at genomes of ancient humans, and found that the newer variant is more common in ancient and modern farming populations than in hunter-gatherers, suggesting that increased consumption of carbohydrates could have been the selective force driving the genetic adaptation.

By studying cells, the researchers found that the newer CHC22 variant is less effective at keeping the glucose transporter inside muscle and fat between meals, meaning the transporter can more readily clear glucose out of the blood. People with the newer variant will therefore have lower blood sugar.

"The older version of this genetic variant likely would have been helpful to our ancestors as it would have helped maintain higher levels of blood sugar during periods of fasting, in times when we didn't have such easy access to carbohydrates, and this would have helped us evolve our large brains," said first author Dr Matteo Fumagalli, who began the study at UCL before moving to Imperial College London.

"In more recent years, with our high-carb diets that often provide us too much sugar, the newer variant may be advantageous," Dr Fumagalli added.

The researchers say that while this genetic variant does not play a direct role in the development of diabetes, having the older variant may make people more likely to develop diabetes, and it may also exacerbate insulin resistance involved in diabetes.

"People with the older variant may need to be more careful of their carb intake, but more research is needed to understand how the genetic variant we found can impact our physiology," added Professor Brodsky.

Co-author Professor Mark Thomas (UCL Genetics, Evolution & Environment) added: "Our analyses strongly suggest that we have found yet another example of how prehistoric changes in dietary habits have shaped human evolution. Understanding how we have adapted to these changes doesn't only inform us about why people lived or died in the past, but also helps us to better understand the relationship between diet, health and disease today."

Credit: 
University College London

Researchers discover cells that change their identity during normal development

image: The cells are located at the fin tips, which are flared during social interactions, like "raising a flag," researchers said.

Image: 
UVA

A new study by researchers at the University of Virginia and other institutions has discovered a type of pigment cell in zebrafish that can transform after development into another cell type.

David Parichy, the Pratt-Ivy Foundation Distinguished Professor of Morphogenesis in UVA's Department of Biology, said that researchers in his lab noticed that some black pigment cells on zebrafish became gray and then eventually white. When they looked closer, they found dramatic changes in gene expression and pigment chemistry.

"We realized that the cells have a secret history hiding in plain sight," he said. "Zebrafish have been studied closely for more than 30 years - we know a lot about them - but this is the first time this transformation has been noticed. It's a very surprising discovery."

The unique cell population sheds the pigment melanin, changing in color from black to white during the life cycle of an individual fish. These special cells are found at the edges of the fins, where they seem to act as a signal to other zebrafish.

The ability of a developed cell to differentiate directly into another type of cell is exceptionally rare. Normally such a change requires experimental intervention, returning the cell to a stem-cell state in a dish, before it can differentiate, or transform, into something else.

The new finding, published recently online in the journal Proceedings of the National Academy of Sciences, suggests that some developed cells might be more amenable to change than generally believed.

"For a long time, the idea in developmental biology has been that once a cell has completed its development, it stays that way," said Parichy, who led the study. "We are discovering that this is not always the case; that, in fact, there are some rare cell populations that are able to change into something new even after their initial development. The dogma says this isn't supposed to happen."

Stem cells develop into one type of cell or another, and then those differentiated cells normally stay that way - a skin cell stays a skin cell, muscle cells stay muscle, and so on. But the newly discovered cells - melanocytes similar to those of humans - initially contain melanin, then lose it and make a white pigment in its place. These cells block the molecular pathways that otherwise would allow them to make melanin and turn on new genes required for their new appearance.

This ability to change makes the cells a good study model for understanding both how cells differentiate, and how it may be possible to make cells differentiate into something new even while still in the animal.

The discovery, Parichy said, has possible implications for regenerative medicine, where researchers might want to use cells already present to make replacement tissues of various cell types. Such a capability could be useful in treating patients after stroke, spinal cord injury, heart attacks or other trauma.

"Knowing how cells can be made to change their differentiated state is essential to regenerative medicine, so having an example in which a species does this naturally is very valuable," he said.

Researchers already are using stem cells to create various cell types, from muscle to skin, but perhaps developed cells also could play a role. Parichy hopes that what he and other researchers learn from the highly unusual transformation of these pigment cells will provide greater understanding of the process and, perhaps, how to manipulate it by reprogramming cells.

"If we can understand how cells go from black to white, this has implications for helping us better understand cells more generally," he said.

The study also showed that zebrafish were able to recognize whether or not these transforming cells were present, and this affected their social interactions. The cells are located at the fin tips, which are flared during social interactions, like "raising a flag." As such, these black cells that turn to white may affect associations of fish in the wild, with consequences for access to food and mates, as well as avoidance of predators.

Remarkably, the authors also found an additional population of white cells in zebrafish, made in a different way, and that different species of the fish had different complements of these populations. Parichy noted that the study "really shows how much you can learn by tackling questions at levels ranging from genes to cells to behavior to evolution."

Credit: 
University of Virginia

Should STEMI patients recover in the ICU?

image: Providers need clearer guidance on whether a patient who has suffered a STEMI heart attack should recover in the intensive care unit, a new University of Michigan study in The BMJ finds.

Image: 
Michigan Medicine

A trip to an intensive care unit can be more than twice as costly as a stay in a non-ICU hospital room, but a new study finds intensive care is still the right option for some vulnerable patients after a severe heart attack.

The difficulty lies in determining which people are best served in the ICU while they recover.

The new Michigan Medicine (University of Michigan) research, published in The BMJ, found ICU admission was associated with improved 30-day mortality rates for patients who had a STEMI heart attack and weren't clearly indicated for an ICU or non-ICU unit.

"For these patients who could reasonably be cared for in either place, ICU admission was beneficial," says lead author Thomas Valley, M.D., M.Sc., an assistant professor of internal medicine at Michigan Medicine, who cares for patients in the intensive care unit.

But Valley cautions against simply continuing to send nearly everyone to the ICU.

"ICU care is a treatment just like any medication," Valley says. "Providers need to know whether it's right for an individual person just like we try to do with a prescription drug."

The researchers analyzed Medicare data from more than 100,000 patients hospitalized with STEMI, or ST-elevation myocardial infarction, a dangerous heart attack that requires quick opening of the blocked blood vessel to restore blood flow. Those patients were hospitalized at 1,727 acute care hospitals across the U.S. in a nearly two-year period from January 2014 to October 2015, and most were sent to the ICU after treatment.

"A lot of the focus is on getting these people to the cardiac catheterization lab as soon as possible to open up the blood vessel, but less is known about what you do after that," Valley says.

Current U.S. guidelines don't address whether to send patients to the ICU, while European guidelines recommend the ICU.

Valley says providers could use clearer guidance on how to make these decisions.

In this study, the mortality rate was 6.1% lower after 30 days for those admitted to their hospital's ICU. Valley says the surprising results--in the face of other studies that show ICU overuse--demonstrate that ICU care is often misdirected.

'An important debate in cardiology'

This study addresses an important issue in ICU care, says Michael Thomas, M.D., an assistant professor of internal medicine who runs the Cardiac ICU at Michigan Medicine's Cardiovascular Center.

"At Michigan Medicine, all of our STEMI patients are admitted to the Cardiac ICU," says Thomas, who was not involved with the BMJ paper. "However, knowing where to send these patients after STEMI is an important debate in cardiology right now."

"Some recent studies suggest many patients don't need ICU level of care and that it wastes resources. But before we pull back from this model, we need to understand this problem more fully," he says.

Across the nation, 75% of STEMI heart attack patients are sent to the ICU, most of the time after reperfusion treatment in the cath lab to open up the blocked vessel.

ICU vs. non-ICU care

People recovering from a STEMI are some of the very sick patients ICUs were originally designed for, so providers may not even think about disrupting the longtime status quo, Valley says.

"The historical thinking was, 'Why not send everyone to the ICU?' Now, we see that there are risks associated," Valley says. "For example, in the ICU, you're more likely to have a procedure, whether you need it more or not.

"We must also consider the risk of infection, sending someone to a unit full of really sick patients who might have C. diff or other serious infections."

Sleep quality may also be lower in the ICU as people recover from their heart attacks, because patients are given such close nursing care, Valley says.

That's necessary for the sickest patients, but it might be disruptive to those people on the bubble who could be getting better rest on a regular floor, he says.

Medicare has requirements for what constitutes ICU care, such as high nurse staffing levels and access to lifesaving care.

"Because of Medicare requirements, ICUs tend to be more similar across hospitals than non-ICUs," Valley says.

"Perhaps some hospitals can take care of patients anywhere, while others really need to use the ICU at high rates in order to provide safe care."

A clear benefit for some, increased cost for others

Valley says these data show a clear benefit of ICU care for vulnerable patients, as opposed to the non-STEMI patients studied, who did not have a significant difference in mortality rates with or without ICU admission.

"Physicians might look at STEMI patients and wonder, 'Do they really need the ICU? Could it harm them? Is it a good use of resources?'" Valley says.

Valley, a member of U-M's Institute for Healthcare Policy and Innovation, has previously found ICU overuse occurred for less critical patients hospitalized for a flare-up of chronic obstructive pulmonary disease (COPD) or heart failure. In that study, ICU admission dramatically increased cost of care without an increased survival benefit.

The next step, according to Valley, is to determine what is beneficial about the ICU for those patients who benefit from it. He says that could lead to hospitals adopting some ICU care practices on non-ICU floors.

Valley hopes making non-ICU floors more similar to the ICU in some ways could improve outcomes while reducing cost of care and infection risk.

Credit: 
Michigan Medicine - University of Michigan

UTA researcher uses nanoparticles stimulated by microwaves to combat cancer

A physicist at The University of Texas at Arlington has proposed a new concept for treating cancer cells, further advancing the University's status as a leader in health and the human condition.

In a recently published paper in the journal Nanomedicine: Nanotechnology, Biology and Medicine, UTA physics Professor Wei Chen and a team of international collaborators advanced the idea of using titanium dioxide (TiO2) nanoparticles stimulated by microwaves to trigger the death of cancer cells without damaging the normal cells around them.

The method is called microwave-induced radical therapy, which the team refers to as microdynamic therapy, or MDT.

The use of TiO2 nanoparticles activated by light and ultrasound in cancer treatments has been studied extensively, but this marks the first time researchers have shown that the nanoparticles can be effectively activated by microwaves for cancer cell destruction--potentially opening new doors to treatment for patients fighting the disease.

Chen said the new therapy centers on reactive oxygen species, or ROS, which are a natural byproduct of the body's metabolism of oxygen. ROS help kill toxins in the body, but can also be damaging to cells if they reach a critical level.

TiO2 enters cells and produces ROS, which are able to damage plasma membranes, mitochondria and DNA, causing cell death.

"Cancer cells are characterized by a higher steady-state saturation of ROS than normal, healthy cells," Chen said. "This new therapy allows us to exploit that by raising the saturation of ROS in cancer cells to a critical level that triggers cell death without pushing the normal cells to that same threshold."

The pilot study for this new treatment concept builds upon Chen's expertise in the use of nanoparticles to combat cancer.

Chen's collaborators hail from the Guangdong Academy of Medical Sciences and Beihang University. The team conducted experiments that demonstrate the nanoparticles can significantly suppress the growth of osteosarcomas under microwave irradiation.

While TiO2 and low-power microwave irradiation alone did not effectively kill cancer cells, the combination of the two proved successful in creating a toxic effect for the tumor cells. Microwave ablation therapy has already proven to be an effective treatment against bone cancer, obtaining better results than MDT. However, MDT has applications for combatting other types of cancer, not just the osteosarcomas used for this pilot case.

Using light to activate ROS--as is seen in photodynamic therapy--can be challenging for the treatment of tumors located deep within the body; in contrast, microwaves penetrate more deeply, propagating through all types of tissue and non-metallic materials.

"This new discovery is exciting because it potentially creates new avenues for treating cancer patients without causing debilitating side effects," Chen said. "This targeted, localized method allows us to keep healthy cells intact so patients are better equipped to battle the disease."

The results of the pilot study indicate MDT is a promising approach for cancer treatment even though the method is still being developed and its limitations explored. The research team has filed a patent for MDT. Next, they plan to turn their attention to understanding the physics and internal mechanisms behind the powerful combination of microwaves and TiO2.

"Dr. Chen continues to build a research portfolio that holds transformative implications for cancer treatment," said Alex Weiss, chair of UTA's Department of Physics. "This new work is exemplary of the spirit of discovery we strive to embody at UTA. I look forward to what Dr. Chen and his collaborators can accomplish in the coming phases of this research as they pioneer a potential new avenue for combatting cancer."

Credit: 
University of Texas at Arlington

Solving the sun's super-heating mystery with Parker Solar Probe

image: Alfven waves still.

Image: 
Michigan Engineering

ANN ARBOR--It's one of the greatest and longest-running mysteries surrounding, quite literally, our sun--why is its outer atmosphere hotter than its fiery surface?

University of Michigan researchers believe they have the answer, and hope to prove it with help from NASA's Parker Solar Probe.

In roughly two years, the probe will be the first man-made craft to enter the zone surrounding the sun where heating looks fundamentally different from what has previously been seen in space. This will allow them to test their theory that the heating is due to small magnetic waves traveling back and forth within the zone.

Solving the riddle would allow scientists to better understand and predict solar weather, which can pose serious threats to Earth's power grid. And step one is determining where the heating of the sun's outer atmosphere begins and ends--a puzzle with no shortage of theories.

"Whatever the physics is behind this superheating, it's a puzzle that has been staring us in the eye for 500 years," said Justin Kasper, a U-M professor of climate and space sciences and a principal investigator for the Parker mission. "In just two more years, Parker Solar Probe will finally reveal the answer."

The U-M theory, and how the team will use Parker to test it, is laid out in a paper published June 4 in The Astrophysical Journal Letters.

In this "zone of preferential heating" above the sun's surface, temperatures rise overall. More bizarre still, individual elements are heated to different temperatures, or preferentially. Some heavier ions are superheated until they're 10 times hotter than the hydrogen that is everywhere in this area--hotter than the core of the sun.

Such high temperatures cause the solar atmosphere to swell to many times the diameter of the sun and they're the reason we see the extended corona during solar eclipses. In that sense, Kasper says, the coronal heating mystery has been visible to astronomers for more than half a millennium, even if the high temperatures were only appreciated within the last century.

This same zone features hydromagnetic "Alfvén waves" moving back and forth between its outermost edge and the sun's surface. At the outermost edge, called the Alfvén point, the solar wind moves faster than the Alfvén speed, and the waves can no longer travel back to the sun.

"When you're below the Alfvén point, you're in this soup of waves," Kasper said. "Charged particles are deflected and accelerated by waves coming from all directions."

In trying to estimate how far from the sun's surface this preferential heating stops, U-M's team examined decades of observations of the solar wind by NASA's Wind spacecraft.

They looked at how much of helium's increased temperature close to the sun was washed out by collisions between ions in the solar wind as they traveled out to Earth. Watching the helium temperature decay allowed them to measure the distance to the outer edge of the zone.

"We take all of the data and treat it as a stopwatch to figure out how much time had elapsed since the wind was superheated," Kasper said. "Since I know how fast that wind is moving, I can convert the information to a distance."

Those calculations put the outer edge of the superheating zone roughly 10 to 50 solar radii from the surface. It was impossible to be more definitive since some values could only be guessed at.
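The "stopwatch" logic described above can be sketched in a few lines: if the wind was superheated at the zone's outer edge and has been coasting toward Earth ever since, then the edge sits roughly one astronomical unit minus speed times elapsed time from the sun. The wind speed and cooling time below are assumed illustrative inputs, not values from the paper:

```python
AU = 1.496e11    # astronomical unit (m)
R_SUN = 6.96e8   # solar radius (m)

def heating_edge_radius(wind_speed_m_s, elapsed_s):
    """Distance from the sun (in solar radii) at which superheating stopped,
    given the wind speed and the elapsed time inferred from helium cooling."""
    travel_distance = wind_speed_m_s * elapsed_s
    return (AU - travel_distance) / R_SUN

# Assumed inputs: a 400 km/s solar wind and ~3.7 days of cooling time
edge = heating_edge_radius(400e3, 3.7 * 86400)
print(f"Outer edge ~ {edge:.0f} solar radii")
```

For these assumed numbers the edge lands within the 10-to-50-solar-radii range the team reported.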

Initially, Kasper didn't think to compare his estimate of the zone's location with the Alfvén point, but he wanted to know if there was a physically meaningful location in space that produced the outer boundary.

After reading that the Alfvén point and other surfaces have been observed to expand and contract with solar activity, Kasper and co-author Kristopher Klein, a former U-M postdoc and new faculty at University of Arizona, reworked their analysis looking at year-to-year changes rather than considering the entire Wind Mission.

"To my shock, the outer boundary of the zone of preferential heating and the Alfvén point moved in lockstep in a totally predictable fashion despite being completely independent calculations," Kasper said. "You overplot them, and they're doing the exact same thing over time."

So does the Alfvén point mark the outer edge of the heating zone? And what exactly is changing under the Alfvén point that superheats heavy ions? We should know in the next couple of years. The Parker Solar Probe lifted off in August 2018 and had its first rendezvous with the sun in November 2018--already getting closer to the sun than any other human-made object.

In the coming years, Parker will get even closer with each pass until the probe falls below the Alfvén point. In their paper, Kasper and Klein predict it should enter the zone of preferential heating in 2021 as the boundary expands with increasing solar activity. Then NASA will have information direct from the source to answer all manner of long-standing questions.

"With Parker Solar Probe we will be able to definitively determine through local measurements what processes lead to the acceleration of the solar wind and the preferential heating of certain elements," Klein said. "The predictions in this paper suggest that these processes are operating below the Alfvén surface, a region close to the sun that no spacecraft has visited, meaning that these preferential heating processes have never before been directly measured."

Credit: 
University of Michigan

No increased risk of birth defects in children of fathers treated for testicular cancer

New research has found no increased risk of congenital malformations associated with treatment with radiotherapy or chemotherapy in children of fathers with testicular cancer. The study, by Yahia Al-Jebari of Lund University, Sweden and colleagues, is published in the open-access journal PLOS Medicine on June 4, 2019. It followed 4,207 children of 2,380 fathers and finds that those conceived after treatment were not at a greater risk of congenital malformations than those conceived before.

Radiotherapy and chemotherapy have been shown to cause mutations and genetic damage in animal and human sperm and there has been concern that men treated with those therapies could have an increased chance of having children with genetic disease or birth defects. Because testicular cancer affects younger men, the team were able to compare children conceived before and after their fathers received treatment.

The study data came from Swedish national birth registries covering 2 million Swedes born between 1994 and 2014. Limitations include a lack of data on frozen or donor sperm, and only a limited number of patients were treated with radiotherapy or high doses of chemotherapy. However, even for those children, no increased risk was detected.

The research does suggest a slightly raised risk associated with the fathers' testicular cancer itself, regardless of treatment, but the increase was small, from 3.5 to 4.4 affected children in every 100 born, and should not be a cause for concern for patients.
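The quoted figures can be restated as an absolute and a relative risk. A minimal sketch, taking the 3.5 and 4.4 per 100 births from the release as given:

```python
def risk_summary(baseline_per_100, exposed_per_100):
    """Absolute risk difference (percentage points) and relative risk."""
    abs_diff = exposed_per_100 - baseline_per_100
    rel_risk = exposed_per_100 / baseline_per_100
    return abs_diff, rel_risk

# Figures quoted in the release: 3.5 vs 4.4 malformations per 100 births
abs_diff, rel_risk = risk_summary(3.5, 4.4)
print(f"Absolute increase: {abs_diff:.1f} per 100 births")
print(f"Relative risk: {rel_risk:.2f}")
```

That is an absolute increase of under one child per 100 births, which is why the authors describe the effect as very small.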

Yahia Al-Jebari says: "Our research set out to investigate whether treatment for the most common cancer among young men leads to a higher risk of fathering a child with a birth defect and we saw no increased risk associated with cancer therapies. We did see a slightly raised risk to children of these fathers but this was only very small and was not associated with treatment given. Patients should be reassured that this is not a cause for concern."

Credit: 
PLOS