Tech

The skin of the earth is home to Pac-Man-like protists

image: Even small soil samples contain vast numbers of microorganisms. Here, samples from different soil layers wait to be analyzed in staff scientist Ben Turner's lab at the Smithsonian Tropical Research Institute in Panama.

Image: 
STRI

Pac-Man, the open-mouthed face of the most successful arcade game ever, is far better known than any of the single-celled organisms called protists, at least among people over 30. But the first study to characterize protists in soils from around the world--co-authored by Smithsonian scientists--found that the most common groups of soil protists behave exactly like Pac-Man: moving through the soil matrix, gobbling up bacteria. Their results are published in Science Advances.

"As part of a bigger project to understand all of the microbes in soil we are characterizing bacteria and fungi, but also a lesser-known, but equally important group called protists," said Angela Oliverio, former STRI intern and lead author on the paper with professor Noah Fierer and post-doctoral fellow Manuel Delgado-Baquerizo at the University of Colorado, Boulder; staff scientist Ben Turner at the Smithsonian Tropical Research Institute in Panama; researcher Stefan Geisen at the Netherlands Institute of Ecology and professor Fernando Maestre at the Universidad Rey Juan Carlos and the Universidad de Alicante, Spain.

Protists reproduce quickly and are probably much more responsive to climate change than larger forms of life. Like the cartoon character Sheldon Plankton in SpongeBob SquarePants, protists are not plants, animals or fungi. They are single-celled organisms but, unlike bacteria, they have a nucleus. They move through water using whip-like flagella and tiny hairs called cilia. Some of the nastier protists cause sleeping sickness, malaria and red tide, but nearly all play important, if mysterious, roles in the energy- and nutrient-trading relationships that connect ecosystems.

Identifying millions of minuscule protists in soil used to be impossible, but recently developed technology to classify protists based on their genetic code makes it possible to characterize them on a large scale. The team sequenced 18S ribosomal RNA from soil samples collected across six continents to better understand the ecological roles of protists in the below-ground ecosystem.

They discovered that most of the protists are the Pac-Man type that consume other, smaller organisms. But in tropical soils, a larger number of protists were parasites, living inside other organisms. In desert soils, there were more protists capable of photosynthesizing and using sunlight directly as an energy source. The best predictor of what types of protists exist in a sample is the annual precipitation at the site. This may seem intuitive because protists depend on water to move, but it was a surprise, since soil acidity, rather than precipitation, is what usually predicts which bacteria and fungi are in soil.

"Soils are home to an astonishing diversity of organisms, the lives of which we are only beginning to understand," said Ben Turner, STRI staff scientist and co-author of the study. "Soil protists are an understudied group, so this work provides a foundation for future research on their ecology in ecosystems worldwide."

Credit: 
Smithsonian Tropical Research Institute

Can lithium halt progression of Alzheimer's disease?

There remains a controversy in scientific circles today regarding the value of lithium therapy in treating Alzheimer's disease. Much of this stems from the fact that, because the information gathered to date has been obtained using a multitude of different approaches, conditions, formulations, timings and dosages of treatment, results are difficult to compare. In addition, continued treatment with high doses of lithium produces a number of serious adverse effects, making this approach impracticable for long-term treatment, especially in the elderly.

In a new study, however, a team of researchers at McGill University, led by Dr. Claudio Cuello of the Department of Pharmacology and Therapeutics, has shown that, when given in a formulation that facilitates passage to the brain, lithium in doses up to 400 times lower than those currently prescribed for mood disorders is capable of both halting signs of advanced Alzheimer's pathology, such as amyloid plaques, and recovering lost cognitive abilities. The findings are published in the most recent edition of the Journal of Alzheimer's Disease.

Building on their previous work

"The recruitment of Edward Wilson, a graduate student with a solid background in psychology, made all the difference," explains Dr. Cuello, the study's senior author, reflecting on the origins of this work. With Wilson, they first investigated the conventional lithium formulation and applied it initially in rats at a dosage similar to that used in clinical practice for mood disorders. The results of the initial tentative studies with conventional lithium formulations and dosage were disappointing however, as the rats rapidly displayed a number of adverse effects. The research avenue was interrupted but renewed when an encapsulated lithium formulation was identified that was reported to have some beneficial effects in a Huntington disease mouse model.

The new lithium formulation was then applied to a transgenic rat model expressing mutated human proteins that cause Alzheimer's, an animal model the team had created and characterized. This rat develops features of human Alzheimer's disease, including a progressive accumulation of amyloid plaques in the brain and concurrent cognitive deficits.

"Microdoses of lithium at concentrations hundreds of times lower than applied in the clinic for mood disorders were administered at early amyloid pathology stages in the Alzheimer's-like transgenic rat. These results were remarkably positive and were published in 2017 in Translational Psychiatry and they stimulated us to continue working with this approach on a more advanced pathology," notes Dr. Cuello.

Encouraged by these earlier results, the researchers set out to apply the same lithium formulation at later stages of the disease to their transgenic rat modelling neuropathological aspects of Alzheimer's disease. This study found that beneficial outcomes in diminishing pathology and improving cognition can also be achieved at more advanced stages, akin to late preclinical stages of the disease, when amyloid plaques are already present in the brain and when cognition starts to decline.

"From a practical point of view our findings show that microdoses of lithium in formulations such as the one we used, which facilitates passage to the brain through the brain-blood barrier while minimizing levels of lithium in the blood, sparing individuals from adverse effects, should find immediate therapeutic applications," says Dr. Cuello. "While it is unlikely that any medication will revert the irreversible brain damage at the clinical stages of Alzheimer's it is very likely that a treatment with microdoses of encapsulated lithium should have tangible beneficial effects at early, preclinical stages of the disease."

Moving forward

Dr. Cuello sees two avenues to build further on these most recent findings. The first involves investigating combination therapies using this lithium formulation in concert with other interesting drug candidates. To that end he is pursuing opportunities working with Dr. Sonia Do Carmo, the Charles E. Frosst-Merck Research Associate in his lab.

He also believes that there is an excellent opportunity to launch initial clinical trials of this formulation with populations with detectable preclinical Alzheimer's pathology or with populations genetically predisposed to Alzheimer's, such as adult individuals with Down Syndrome. While many pharmaceutical companies have moved away from these types of trials, Dr. Cuello is hopeful of finding industrial or financial partners to make this happen, and, ultimately, provide a glimmer of hope for an effective treatment for those suffering from Alzheimer's disease.

Credit: 
McGill University

Nano-thin flexible touchscreens could be printed like newspaper

image: A sample of the ultra-thin and ultra-flexible electronic material that could be printed and rolled out like newspaper, for the touchscreens of the future.

Image: 
RMIT University

Researchers have developed an ultra-thin and ultra-flexible electronic material that could be printed and rolled out like newspaper, for the touchscreens of the future.

The touch-responsive technology is 100 times thinner than existing touchscreen materials and so pliable it can be rolled up like a tube.

To create the new conductive sheet, an RMIT University-led team used a thin film common in cell phone touchscreens and shrunk it from 3D to 2D, using liquid metal chemistry.

The nano-thin sheets are readily compatible with existing electronic technologies and because of their incredible flexibility, could potentially be manufactured through roll-to-roll (R2R) processing just like a newspaper.

The research, with collaborators from UNSW, Monash University and the ARC Centre of Excellence in Future Low-Energy Electronics Technologies (FLEET), is published in the journal Nature Electronics.

Lead researcher Dr Torben Daeneke said most cell phone touchscreens were made of a transparent material, indium-tin oxide, that was very conductive but also very brittle.

"We've taken an old material and transformed it from the inside to create a new version that's supremely thin and flexible," said Daeneke, an Australian Research Council DECRA Fellow at RMIT.

"You can bend it, you can twist it, and you could make it far more cheaply and efficiently that the slow and expensive way that we currently manufacture touchscreens.

"Turning it two-dimensional also makes it more transparent, so it lets through more light.

"This means a cell phone with a touchscreen made of our material would use less power, extending the battery life by roughly 10%."

DIY: a touchscreen you can make at home

The current way of manufacturing the transparent thin film material used in standard touchscreens is a slow, energy-intensive and expensive batch process, conducted in a vacuum chamber.

"The beauty is that our approach doesn't require expensive or specialised equipment - it could even be done in a home kitchen," Daeneke said.

"We've shown its possible to create printable, cheaper electronics using ingredients you could buy from a hardware store, printing onto plastics to make touchscreens of the future."

Thick and thin: how to turn an old material new

To create the new type of atomically-thin indium-tin oxide (ITO), the researchers used a liquid metal printing approach.

An indium-tin alloy is heated to 200°C, at which point it becomes liquid, and then rolled over a surface to print off nano-thin sheets of indium tin oxide.

These 2D nano-sheets have the same chemical make-up as standard ITO but a different crystal structure, giving them exciting new mechanical and optical properties.

As well as being fully flexible, the new type of ITO absorbs just 0.7% of light, compared with the 5-10% of standard conductive glass. To make it more electronically conductive, you just add more layers.

It's a pioneering approach that cracks a challenge that was considered unsolvable, Daeneke said.

"There's no other way of making this fully flexible, conductive and transparent material aside from our new liquid metal method," he said.

"It was impossible up to now - people just assumed that it couldn't be done."

Patent pending: bringing the tech to market

The research team have now used the new material to create a working touchscreen, as a proof-of-concept, and have applied for a patent for the technology.

The material could also be used in many other optoelectronic applications, such as LEDs and touch displays, as well as potentially in future solar cells and smart windows.

"We're excited to be at the stage now where we can explore commercial collaboration opportunities and work with the relevant industries to bring this technology to market," Daeneke said.

Credit: 
RMIT University

Family caregivers are rarely asked about needing assistance with caring for older adults

Family caregivers usually are not asked by health care workers about needing support in managing older adults' care, according to a study led by researchers at the Johns Hopkins Bloomberg School of Public Health.

Most of the surveyed caregivers who interacted with health care workers reported that the latter always or usually listen to them (88.8 percent) and ask about their understanding of older adults' treatments (72.1 percent). A much smaller proportion (28.2 percent) reported that health care workers always or usually asked them whether they needed help in their care of the older adult.

The figure was significantly higher, 37.3 percent, for the subset of caregivers caring for older adults with dementia.

The study, to be published January 24 in JAMA Network Open, was an analysis of survey data on 1,916 caregivers, mostly spouses or other family members, who provide care to older adults with activity limitations living in community settings such as private homes, apartment buildings, or senior housing.

"These results suggest that we as a society could do a better job of supporting family caregivers, who are providing the lion's share of day-to-day care to older adults with activity limitations," says study lead author Jennifer Wolff, PhD, Eugene and Mildred Lipitz Professor in the Department of Health Policy and Management at the Bloomberg School. Wolff is also director of the Roger C. Lipitz Center for Integrated Health Care at the Bloomberg School.

Nearly 20 million Americans are unpaid caregivers, usually family members, for adults over 64, according to the National Academies of Sciences, Engineering, and Medicine. The care they provide often includes help with taking medication, bringing the older adult patient to a health care facility, and assisting with other health care activities. Given these important functions, the interactions between these caregivers and health care workers can impact the quality of care for the older adult patient.

"It's a potential point of intervention for improving care," Wolff says.

To get a better picture of this caregiver/health care-worker interface, Wolff and her colleagues analyzed 2017 survey data from the National Health and Aging Trends Study (NHATS) and the related National Study of Caregiving (NSOC), including 1,916 caregivers who were assisting 1,203 community-living, activity-limited older adults. The average caregiver age was 59. About 900 of these caregivers reported having interacted with health care workers of the older adult in the prior year, and also provided responses to key questions about those interactions.

The study results, Wolff says, highlight the fact that caregivers are still largely disconnected from the health care system for older adults, which in turn suggests that there is the potential to improve the quality of care. "That could mean identifying caregivers who could use care-related education and training or who simply need a break, for example, through temporary 'respite care' of the older adult patient," she says.

Co-author Vicki Freedman, PhD, with the University of Michigan's Institute for Social Research, co-leads the NHATS and NSOC with Wolff and co-author Judith Kasper, PhD, professor in the Bloomberg School's Department of Health Policy and Management.

Wolff and her colleagues are continuing to study the relationship between caregivers and the health care system, as well as interventions that could improve it and thereby improve care for older adults.

"We're developing strategies to more effectively engage family caregivers in care delivery," Wolff says.

"Family Caregivers' Experiences with Health Care Workers in the Care of Older Adults with Activity Limitations" was written by Jennifer Wolff, Vicki Freedman, John Mulcahy, and Judith Kasper.

Credit: 
Johns Hopkins Bloomberg School of Public Health

A new stretchable battery can power wearable electronics

video: People flex and bend. Too bad their gadgets can't. Now an experimental battery described in the Nov. 26 edition of Nature Communications promises to do just that. Shown here powering a tiny light, the soft battery maintained a constant power output even when stretched to nearly two times its original length. In laboratory tests it also provided consistent power when squeezed, folded and stretched multiple times. A team led by graduate student David Mackanic, in the lab of Stanford chemical engineer Zhenan Bao, is currently refining its design to generate more power and to prove that the technology can work outside the lab.

Image: 
Bao Lab, Stanford Engineering

Electronics are showing up everywhere: on our laps, in pockets and purses and, increasingly, snuggled up against our skin or sewed into our clothing.

But the adoption of wearable electronics has so far been limited by their need to derive power from bulky, rigid batteries that reduce comfort and may present safety hazards due to chemical leakage or combustion.

Now Stanford researchers have developed a soft and stretchable battery that relies on a special type of plastic to store power more safely than the flammable formulations used in conventional batteries today.

"Until now we haven't had a power source that could stretch and bend the way our bodies do, so that we can design electronics that people can comfortably wear," said chemical engineer Zhenan Bao, who teamed up with materials scientist Yi Cui to develop the device they describe in the Nov. 26 edition of Nature Communications.

The use of plastics, or polymers, in batteries is not new. For some time, lithium-ion batteries have used polymers as electrolytes -- the medium that shuttles ions between the battery's electrodes. Until now, however, those polymer electrolytes have been flowable gels that could, in some cases, leak or burst into flame.

To avoid such risks, the Stanford researchers developed a polymer that is solid and stretchable rather than gooey and potentially leaky, and yet still carries an electric charge between the battery's poles. In lab tests the experimental battery maintained a constant power output even when squeezed, folded and stretched to nearly twice its original length.

The prototype is thumbnail-sized and stores roughly half as much energy, ounce for ounce, as a comparably sized conventional battery. Graduate student David Mackanic said the team is working to increase the stretchable battery's energy density, build larger versions of the device and run future experiments to demonstrate its performance outside the lab. One potential application for such a device would be to power stretchable sensors designed to stick to the skin to monitor heart rate and other vital signs as part of the BodyNet wearable technology being developed in Bao's lab.

Credit: 
Stanford University School of Engineering

Principles for a green chemistry future

In the most recent issue of the academic journal Science, the case is made for a future where the materials and chemicals that make up the basis of our society and our economy are healthful rather than toxic, renewable rather than depleting, and degradable rather than persistent.

The issue includes a paper, "Designing for a Green Chemistry Future," that illustrates a clear view into that future. The paper is authored by a Yale-led research team comprising Julie Zimmerman, professor of green engineering and senior associate dean of academic affairs at F&ES; Paul Anastas, the Teresa and H. John Heinz III Professor in the Practice of Chemistry for the Environment at F&ES; and Hanno Erythropel, an associate research scientist at the Center for Green Chemistry & Green Engineering at Yale.

The team also included Walter Leitner, a leading figure in green chemistry who is a professor at the Max-Planck Institute for Chemical Energy Conversion in Germany.

"The basic idea is that green chemistry should be the basis of how we do any kind of chemistry in the future," said Erythropel. Too often, he explained, the evaluation of chemicals and the processes used to make these are focused solely on how well they function, but don't include considerations about their potential impacts during the whole life cycle. When it comes to chemical production, systems thinking must be used to create sustainable, non-toxic, and recyclable chemicals -- from the design stage, through production and use, to disposal.

In the paper, they argue that the mistakes of the chemical industry over the past century do not need to be repeated in the future -- and cutting-edge research and innovation in green chemistry is proving that. They highlight how green chemistry achievements have already begun the process of reinventing everything from plastics to pharmaceuticals, agriculture to electronics, energy generation and storage, and beyond. The achievements thus far are compelling but, according to Anastas, are only the beginning.

"The astounding accomplishments of green chemistry and green engineering thus far pale in comparison to the power and the potential of the field in the future," he said.

And while many examples exist of green chemistry increasing economic profits while being better for human health, the environment and sustainability, it is still the exception rather than the rule. Instead, Zimmerman says, "Sustainability requires that green chemistry and green engineering be done systematically, so that it is simply the way all chemistry is done in the future."

Credit: 
Yale School of the Environment

MTU engineers examine lithium battery defects

image: Researchers at MTU and Oak Ridge National Lab observe nanoscale defects in lithium metal to better understand how lithium dendrites affect batteries.

Image: 
Sarah Atkinson/Michigan Tech

Historically, as in decades ago, rechargeable lithium metal batteries were dangerous. These batteries were quickly abandoned in favor of Li-ion batteries, which contain no metallic lithium and are now widely used. In efforts to continue to drive energy density up and costs down, researchers are again exploring how to use lithium metal in batteries efficiently and safely. Solid-state batteries, free of flammable liquids, may be the solution. However, progress has been slowed because lithium metal still finds a way to short-circuit the battery and limit cycle life.

Solid-state lithium batteries are the Holy Grail of energy storage. With potential impacts on everything from personal mobile devices to industrial renewable energy, the difficulties are worth overcoming. The goal: Build a safe and long-lived lithium battery. The challenge: Use a solid-state electrolyte and stop the short circuits caused by the formation and growth of lithium dendrites.

In a new invited feature paper published in the Journal of Materials Research, materials engineers from Michigan Technological University weigh in on the problem. Their take is an unusual one. They focus on the unique mechanics of lithium at dimensions that are a fraction of the diameter of the hair on your head -- much smaller scales than most others consider.

"People think of lithium as being soft as butter, so how can it possibly have the strength to penetrate through a ceramic solid electrolyte separator?" asked Erik Herbert, assistant professor of materials science and engineering at Michigan Tech and one of the study's leads. He says the answer is not intuitive -- smaller is stronger. Tiny physical defects like micro cracks, pores or surface roughness inevitably exist at the interface between a lithium anode and a solid electrolyte separator. Zooming in on the mechanics of lithium metal at length scales commensurate with those tiny interface defects, it turns out that lithium is much stronger than it is at macroscopic or bulk length scales.

"Lithium doesn't like stress any more than you or I like stress, so it's just trying to figure out how to make the pressure go away," Herbert said. "What we're saying is that at small length scales, where the lithium is not likely to have access to the normal mechanism it would use alleviate pressure, it has to rely on other, less efficient methods to relieve the stress."

In every crystalline metal like lithium, atomic level defects called dislocations are needed to relieve significant amounts of stress. At macroscopic or bulk length scales, dislocations get rid of stress efficiently because they allow adjacent planes of atoms to easily slide past one another like a deck of cards. However, at small length scales and high temperatures relative to the metal's melting point, the chance of finding dislocations within the stressed volume is very low. Under these conditions, the metal has to find another way to relieve the pressure. For lithium, that means switching to diffusion. The stress pushes lithium atoms away from the stressed volume - akin to being carried away on an atomic airport walkway. Compared to dislocation motion, diffusion is very inefficient. That means at small length scales, where diffusion controls stress relief rather than dislocation motion, lithium can support more than 100 times more stress or pressure than it can at macroscopic length scales.

Catastrophic problems may occur in what Herbert and his co-lead, MTU professor Stephen Hackney, call the defect danger zone. The zone is a window of physical defect dimensions defined by the stress relief competition between diffusion and dislocation motion. The worst-case scenario is a physical interface defect (a micro crack, pore or surface roughness) that is too big for efficient stress relief by diffusion but too small to enable stress relief by dislocation motion. In this reverse Goldilocks problem, high stresses within the lithium can cause the solid electrolyte and the whole battery to catastrophically fail. Interestingly, the danger zone size is the same size as the observed lithium dendrites.

"The very thin solid-state electrolytes and high current densities required to provide the battery power and short charging times expected by consumers are conditions that favor lithium dendrite failure, so the dendrite problem must be solved for the technology to progress," Hackney said. "But to make the solid-state technology viable, the power capability and cycle life limitations must be addressed. Of course, the first step in solving the problem is to understand the root cause, which is what we are trying to do with this current work."

Hackney points out that the smaller-is-stronger concept is not new. Materials engineers have studied the effect of length scale on mechanical behavior since the 1950s, though it has not been widely applied to the problem of lithium dendrites and solid electrolytes.

"We think this 'smaller is stronger' paradigm is directly applicable to the observed lithium dendrite size, and is confirmed by our experiments on very clean, thick Li films at strain rates relevant to the initiation of the dendrite instability during charging," Hackney said.

To rigorously examine their hypothesis, Herbert and Hackney performed nanoindentation experiments on high-purity lithium films produced by a top battery researcher, Nancy Dudney of Oak Ridge National Laboratory.

"The bulk properties of lithium metal are well characterized, but this may not be relevant at the scale of defects and inhomogeneous current distributions likely acting in very thin solid state batteries," Dudney said. "The model presented in this paper is the first to map conditions where the much stronger lithium will impact cyclelife performance. This will guide future investigation of solid electrolytes and battery designs."

Among the team's next steps, they plan to examine the effects of temperature and electrochemical cycling on the mechanical behavior of lithium at small length scales. This will help them better understand real-world conditions and strategies to make next-generation batteries immune to the formation and growth of lithium dendrites.

Credit: 
Michigan Technological University

Dance of the honey bee reveals fondness for strawberries

image: When fields of strawberries are next to oilseed rape, honey bees prefer the strawberry field.

Image: 
Svenja Bänsch, University of Göttingen

Bees are pollinators of many wild and crop plants, but in many places their diversity and density are declining. A research team from the Universities of Göttingen, Sussex and Würzburg has now investigated the foraging behaviour of bees in agricultural landscapes. To do this, the scientists analysed the bees' dances, known as the "waggle dance". They found that honey bees prefer strawberry fields, even when these flower directly next to oilseed rape fields. Only when oilseed rape was in full bloom were fewer honey bees observed in the strawberry field. Wild bees, on the other hand, consistently chose the strawberry field. The results have been published in the journal Agriculture, Ecosystems & Environment.

A team from the Functional Agrobiodiversity and Agroecology groups at the University of Göttingen established small honey bee colonies next to eleven strawberry fields in the region of Göttingen and Kassel. The scientists then used video recordings to decode the waggle dances. Honey bees dance to communicate the direction and distance of attractive food sources that they have visited. By combining the decoded dances with satellite maps of the landscape, the researchers could determine which land use type the bees preferred. The team also studied which plants the bees used as pollen resources and calculated the density of honey bees and wild bees in the study fields.
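As a rough illustration of what decoding a waggle dance involves, the sketch below converts a single waggle run into an estimated distance and compass bearing: the angle of the run relative to vertical on the comb encodes the food source's direction relative to the sun's azimuth, and the run's duration scales with distance. The calibration slope and the example numbers are assumptions chosen for illustration, not values taken from this study.

```python
import math

# Toy decoder for a single waggle run. The direction rule (angle from vertical
# equals angle from the sun's azimuth) follows the classic von Frisch
# interpretation; the distance calibration below (~1,000 m per second of
# waggling) is a rough, assumed value -- real calibrations vary by colony.

METERS_PER_SECOND_OF_WAGGLE = 1000.0  # assumed calibration, not from the study

def decode_waggle_run(waggle_duration_s, dance_angle_deg, sun_azimuth_deg):
    """Return (distance_m, compass_bearing_deg) of the advertised food source."""
    distance_m = waggle_duration_s * METERS_PER_SECOND_OF_WAGGLE
    bearing_deg = (sun_azimuth_deg + dance_angle_deg) % 360.0
    return distance_m, bearing_deg

def bearing_to_map_offset(distance_m, bearing_deg):
    """Convert distance and bearing into east/north offsets for a map overlay."""
    east = distance_m * math.sin(math.radians(bearing_deg))
    north = distance_m * math.cos(math.radians(bearing_deg))
    return east, north

if __name__ == "__main__":
    # A dance with a 0.8 s waggle run, 30 degrees clockwise of vertical,
    # observed while the sun stood at an azimuth of 150 degrees.
    distance, bearing = decode_waggle_run(0.8, 30.0, 150.0)
    print(f"~{distance:.0f} m away, bearing ~{bearing:.0f} degrees")
    print("map offset (east, north) in m:", bearing_to_map_offset(distance, bearing))
```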

Their results: honey bees prefer the strawberry fields, even when oilseed rape is flowering abundantly in the area. However, honey bees from the surrounding landscapes are less common in the strawberry fields when oilseed rape is in full bloom. "In contrast, solitary wild bees, like mining bees, are constantly present in the strawberry field", says first author Svenja Bänsch, post-doctoral researcher in the Functional Agrobiodiversity group at the University of Göttingen. "Wild bees are therefore of great importance for the pollination of crops," emphasizes Professor Teja Tscharntke, Head of the Agroecology group.

"With this study, we were able to show that small honey bee colonies in particular can be suitable for the pollination of strawberries in the open field. However, our results also show that wild bees in the landscape should be supported by appropriate management measures", concludes Head of Functional

Credit: 
University of Göttingen

Study: Commercial air travel is safer than ever

It has never been safer to fly on commercial airlines, according to a new study by an MIT professor that tracks the continued decrease in passenger fatalities around the globe.

The study finds that between 2008 and 2017, airline passenger fatalities fell significantly compared to the previous decade, as measured per individual passenger boardings -- essentially the aggregate number of passengers. Globally, that rate is now one death per 7.9 million passenger boardings, compared to one death per 2.7 million boardings during the period 1998-2007, and one death per 1.3 million boardings during 1988-1997.

Going back further, the commercial airline fatality risk was one death per 750,000 boardings during 1978-1987, and one death per 350,000 boardings during 1968-1977.
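The decade-over-decade arithmetic behind these figures is easy to reproduce. Here is a minimal sketch using only the per-boarding rates quoted above:

```python
# Improvement factors between decades, computed from the fatality rates quoted
# in the article (one death per N passenger boardings).

boardings_per_death = {
    "1968-1977": 350_000,
    "1978-1987": 750_000,
    "1988-1997": 1_300_000,
    "1998-2007": 2_700_000,
    "2008-2017": 7_900_000,
}

decades = list(boardings_per_death)
for prev, curr in zip(decades, decades[1:]):
    factor = boardings_per_death[curr] / boardings_per_death[prev]
    print(f"{prev} -> {curr}: risk fell by a factor of about {factor:.1f}")

forty_year = boardings_per_death["2008-2017"] / boardings_per_death["1978-1987"]
print(f"Over the four decades since 1978-1987: roughly a {forty_year:.1f}-fold improvement")
```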

"The worldwide risk of being killed had been dropping by a factor of two every decade," says Arnold Barnett, an MIT scholar who has published a new paper summarizing the study's results. "Not only has that continued in the last decade, the [latest] improvement is closer to a factor of three. The pace of improvement has not slackened at all even as flying has gotten ever safer and further gains become harder to achieve. That is really quite impressive and is important for people to bear in mind."

The paper, "Aviation Safety: A Whole New World?" was published online this month in Transportation Science. Barnett is the sole author.

The new research also reveals that there is discernible regional variation in airline safety around the world. The study finds that the nations housing the lowest-risk airlines are the U.S., the members of the European Union, China, Japan, Canada, Australia, New Zealand, and Israel. The aggregate fatality risk among those nations was one death per 33.1 million passenger boardings during 2008-2017.

For airlines in a second set of countries, which Barnett terms the "advancing" set with an intermediate risk level, the rate is one death per 7.4 million boardings during 2008-2017. This group -- comprising countries that are generally rapidly industrializing and have recently achieved high overall life expectancy and GDP per capita -- includes many countries in Asia as well as some countries in South America and the Middle East.

For a third and higher-risk set of developing countries, including some in Asia, Africa, and Latin America, the death risk during 2008-2017 was one per 1.2 million passenger boardings -- an improvement from one death per 400,000 passenger boardings during 1998-2007.

"The two most conspicuous changes compared to previous decades were sharp improvements in China and in Eastern Europe," says Barnett, who is the George Eastman Professor of Management at the MIT Sloan School of Management. In those places, he notes, had safety achievements in the last decade that were strong even within the lowest-risk group of countries.

Overall, Barnett suggests, the rate of fatalities has declined far faster than public fears about flying.

"Flying has gotten safer and safer," Barnett says. "It's a factor of 10 safer than it was 40 years ago, although I bet anxiety levels have not gone down that much. I think it's good to have the facts."

Barnett is a long-established expert in the field of aviation safety and risk, whose work has helped contextualize accident and safety statistics. Whatever the absolute numbers of air crashes and fatalities may be -- and they fluctuate from year to year -- Barnett has sought to measure those numbers against the growth of air travel.

To conduct the current study, Barnett used data from a number of sources, including the Flight Safety Foundation's Aviation Safety Network Accident Database. He mostly used data from the World Bank, based on information from the International Civil Aviation Organization, to measure the number of passengers carried, which is now roughly 4 billion per year.

In the paper, Barnett discusses the pros and cons of some alternative metrics that could be used to evaluate commercial air safety, including deaths per flight and deaths per passenger miles traveled. He prefers to use deaths per boarding because, as he writes in the paper, "it literally reflects the fraction of passengers who perished during air journeys."
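To make the difference between those metrics concrete, here is a toy comparison computed from a handful of made-up flights; the figures are purely illustrative and are not the study's data.

```python
# Same hypothetical flight records, three different safety metrics.

flights = [
    # (passengers boarded, flight distance in miles, passenger deaths)
    (180, 500, 0),
    (150, 2_400, 0),
    (200, 900, 0),
    (160, 5_500, 160),  # one hypothetical fatal accident with no survivors
]

total_flights = len(flights)
total_boardings = sum(p for p, _, _ in flights)
total_passenger_miles = sum(p * d for p, d, _ in flights)
total_deaths = sum(k for _, _, k in flights)

print("deaths per flight:        ", total_deaths / total_flights)
print("deaths per boarding:      ", total_deaths / total_boardings)
print("deaths per passenger-mile:", total_deaths / total_passenger_miles)
```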

The new paper also includes historical data showing that even in today's higher-risk areas for commercial aviation, the fatality rate is better, on aggregate, than it was in the leading air-travel countries just a few decades in the past.

"The risk now in the higher-risk countries is basically the risk we used to have 40-50 years ago" in the safest air-travel countries, Barnett notes.

Barnett readily acknowledges that the paper is evaluating the overall numbers, and not providing a causal account of the air-safety trend; he says he welcomes further research attempting to explain the reasons for the continued gains in air safety.

In the paper, Barnett also notes that year-to-year air fatality numbers have notable variation. In 2017, for instance, just 12 people died in the process of air travel, compared to 473 in 2018.

"Even if the overall trendline is [steady], the numbers will bounce up and down," Barnett says. For that reason, he thinks looking at trends a decade at a time is a better way of grasping the full trajectory of commercial airline safety.

On a personal level, Barnett says he understands the kinds of concerns people have about airline travel. He began studying the subject partly because of his own worries about flying, and quips that he was trying to "sublimate my fears in a way that might be publishable."

Those kinds of instinctive fears may well be natural, but Barnett says he hopes that his work can at least build public knowledge about the facts and put them into perspective for people who are afraid of airplane accidents.

"The risk is so low that being afraid to fly is a little like being afraid to go into the supermarket because the ceiling might collapse," Barnett says.

Credit: 
Massachusetts Institute of Technology

Researchers obtain 'high-definition' view of diabetes-related proteins

image: GLP1R visualized in insulin-secreting beta cells at super-resolution.

Image: 
University of Birmingham

Scientists have examined a key receptor for the first time at high resolution - broadening understanding of how it might function, and opening the door to future improvements in treating conditions such as type 2 diabetes.

Glucagon-like peptide-1 receptors (GLP1R) are found on insulin-producing beta cells of the pancreas and on neurons in the brain. The receptor encourages the pancreas to release more insulin, stops the liver from producing too much glucose, and reduces appetite. This combination of effects helps to control blood sugar levels.

As such, GLP1R has become a significant target for the treatment of type 2 diabetes, and a range of drugs are now available that are based on it. But much remains unknown about GLP1R function because its small size makes it difficult to visualise.

An international group of scientists led by experts at the University of Birmingham and the Max Planck Institute for Medical Research, Heidelberg, have now conducted a detailed examination of the receptor in living cells.

Researchers used a number of techniques, including synthesis of marker compounds, immunostaining, super-resolution microscopy and 'in vivo' examination of mice. They were able to label GLP1R with the fluorescent probes they developed, showing its location in the cells and its response to signal molecules.

Publishing their findings in Nature Communications, the researchers - who were partly funded by Diabetes UK - note that they now provide a comprehensively tested and unique GLP1R detection toolbox, which has updated our view of this receptor, with implications for the treatment of conditions such as obesity and type 2 diabetes.

David Hodson, Professor of Cellular Metabolism, at the University of Birmingham, commented: "Our research allows us to visualise this key receptor in much more detail than before. Think about watching a movie in standard definition versus 4k, that's how big the difference is. We believe this breakthrough will give us a much greater understanding of GLP1R distribution and function. Whilst this will not immediately change treatment for patients, it might influence how we design drugs in the future."

Johannes Broichhagen, Departmental Group Leader of the Max-Planck Institute for Medical Research, commented: "Our experiments, made possible by combining expertise in chemistry and cell biology, will improve our understanding of GLP1R in the pancreas and the brain. Our new tools have been used in stem cells and in the living animal to visualize this important receptor, and we provide the first super-resolution characterisation of a class B GPCR. Importantly, our results suggest a degree of complexity not readily appreciated with previous approaches."

Dr Elizabeth Robertson, Director of Research at Diabetes UK commented: "The effects of type 2 diabetes are serious and widespread, so finding more effective treatments to help people manage their condition and reduce their risk of its potentially devastating complications is absolutely vital.

"Through innovative research like this, we can get to grips with key aspects of type 2 diabetes in unprecedented detail, and blaze a trail towards better treatments."

GLP1R is a member of the so-called G protein-coupled receptors (GPCRs), which play a role in many of the body's functions. An increased understanding of how they work has greatly affected modern medicine, and today, it is estimated that between one-third and one-half of all marketed drugs act by binding to GPCRs.

Credit: 
University of Birmingham

What goes up may actually be down

Gravity is the unseen force that dominates our entire lives. It's what makes walking uphill so difficult and what makes parts of our body eventually point downhill. It is unyielding, everywhere, and a force that we battle with every time we make a move. But exactly how do people account for this invisible influence while moving through the world?

A new study in Frontiers in Neuroscience used virtual reality to determine how people plan their movements by "seeing" gravity using visual cues in the landscape around them, rather than "feeling" it through changes in weight and balance. PhD student Desiderio Cano Porras, who worked in Dr. Meir Plotnik's laboratory at the Sheba Medical Center, Israel, and colleagues found that our capability to anticipate the influence of gravity relies on visual cues in order for us to walk safely and effectively downhill and uphill.

In order to determine the influence of vision and gravity on how we move, the researchers recruited a group of 16 young, healthy adults for a virtual reality (VR) experiment. The researchers designed a VR environment that simulated level, uphill, and downhill walking. Participants were immersed in a large-scale virtual reality system in which they walked on a real-life treadmill that was tilted upward, tilted downward, or kept flat. Throughout the experiment, the VR visual environment either matched or didn't match the physical cues that the participants experienced on the treadmill.
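The description above implies a small matrix of visual-by-physical slope conditions, which the sketch below enumerates. It is only an illustration of the design; the exact conditions and trial counts used in the study may have differed.

```python
# Enumerate the visual-scene x treadmill-slope conditions implied above and
# flag which combinations are congruent (visual and physical cues agree).

from itertools import product

SLOPES = ("level", "uphill", "downhill")

for visual, physical in product(SLOPES, SLOPES):
    tag = "match" if visual == physical else "MISMATCH"
    print(f"scene={visual:8s}  treadmill={physical:8s}  -> {tag}")
```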

Using this setup, the researchers were able to disrupt the visual and physical cues we all experience when anticipating going uphill or downhill. So, when participants saw a downhill environment in the VR visual scenery, they positioned their bodies to begin "braking" to go downhill despite the treadmill actually remaining flat or at an upward incline. They also found the reverse - people prepared for more "exertion" to go uphill in the VR environment even though the treadmill remained flat or was pointing downhill.

The researchers showed that purely visual cues caused people to adjust their movements to compensate for predicted gravity-based changes (i.e., braking in anticipation of a downhill gravity boost and exertion in anticipation of uphill gravitational resistance). However, while participants initially relied on their vision, they quickly adapted to the real-life treadmill conditions using something called a "sensory reweighting mechanism" that reprioritized body-based cues over visual ones. In this way the participants were able to overcome the sensory mismatch and keep walking.

"Our findings highlight multisensory interactions: the human brain usually gets information about forces from "touch" senses; however, it generates behavior in response to gravity by "seeing" it first, without initially "feeling" it," says Dr. Plotnik.

Dr. Plotnik also states that the study is an exciting application of new and emerging VR tech as "many new digital technologies, in particular virtual reality, allow a high level of human-technology interactions and immersion. We leveraged this immersion to explore and start to disentangle the complex visual-locomotor integration achieved by human sensory systems."

The research is a step towards the broader goal of understanding the intricate pathways that people use to decide how and when to move their bodies, but there is still work to be done.

Dr. Plotnik states that "This study is only a 'snapshot' of a specific task involving transitioning to uphill or downhill walking. In the future we will explore the neuronal mechanisms involved and potential clinical implications for diagnosis and treatment."

Credit: 
Frontiers

Why eating yogurt may help lessen the risk of breast cancer

image: Dr. Rachael Rigby of Lancaster University.

Image: 
Lancaster University

One of the causes of breast cancer may be inflammation triggered by harmful bacteria, say researchers.

Scientists say their idea - as yet unproven - is supported by the available evidence, which is that bacteria-induced inflammation is linked to cancer.

The paper in the journal Medical Hypotheses is by Lancaster University medical student Auday Marwaha, Professor Jim Morris from the University Hospitals of Morecambe Bay NHS Trust and Dr Rachael Rigby from Lancaster University's Faculty of Health and Medicine.

The researchers say: "There is a simple, inexpensive potential preventive remedy, which is for women to consume natural yoghurt on a daily basis."

Yoghurt contains beneficial lactose-fermenting bacteria commonly found in milk, similar to the bacteria - or microflora - found in the breasts of mothers who have breastfed.

Dr Rigby said: "We now know that breast milk is not sterile and that lactation alters the microflora of the breast.

"Lactose fermenting bacteria are commonly found in milk and are likely to occupy the breast ducts of women during lactation and for an unknown period after lactation."

Their suggestion is that these lactose-fermenting bacteria in the breast are protective, because each year of breastfeeding reduces the risk of breast cancer by 4.3%.

Several other studies have shown that the consumption of yoghurt is associated with a reduction in the risk of breast cancer, which the researchers suggest may be due to the displacement of harmful bacteria by beneficial bacteria.

There are approximately 10 billion bacterial cells in the human body and while most are harmless, some bacteria create toxins which trigger inflammation in the body.

Chronic inflammation destroys the harmful germs but it also damages the body. One of the most common inflammatory conditions is gum disease or periodontitis which has already been linked to oral, oesophageal, colonic, pancreatic, prostatic and breast cancer.

The researchers conclude that: "The stem cells which divide to replenish the lining of the breast ducts are influenced by the microflora, and certain components of the microflora have been shown in other organs, such as the colon and stomach, to increase the risk of cancer development.

"Therefore a similar scenario is likely to be occurring in the breast, whereby resident microflora impact on stem cell division and influence cancer risk."

Credit: 
Lancaster University

A megalibrary of nanoparticles

image: A simple, modular chemical approach could produce over 65,000 different types of complex nanorods. Electron microscope images are shown for 32 of these nanorods, which form with various combinations of materials. Each color represents a different material.

Image: 
Schaak Laboratory, Penn State

Using straightforward chemistry and a mix-and-match, modular strategy, researchers have developed a simple approach that could produce over 65,000 different types of complex nanoparticles, each containing up to six different materials and eight segments, with interfaces that could be exploited in electrical or optical applications. These rod-shaped nanoparticles are about 55 nanometers long and 20 nanometers wide--by comparison a human hair is about 100,000 nanometers thick--and many are considered to be among the most complex ever made.

A paper describing the research, by a team of Penn State chemists, appears January 24, 2020 in the journal Science.

"There is a lot of interest in the world of nanoscience in making nanoparticles that combine several different materials--semiconductors, catalysts, magnets, electronic materials," said Raymond E. Schaak, DuPont Professor of Materials Chemistry at Penn State and the leader of the research team. "You can think about having different semiconductors linked together to control how electrons move through a material, or arranging materials in different ways to modify their optical, catalytic, or magnetic properties. We can use computers and chemical knowledge to predict a lot of this, but the bottleneck has been in actually making the particles, especially at a large-enough scale so that you can actually use them."

The team starts with simple nanorods composed of copper and sulfur. They then sequentially replace some of the copper with other metals using a process called "cation exchange." By altering the reaction conditions, they can control where in the nanorod the copper is replaced--at one end of the rod, at both ends simultaneously, or in the middle. They can then repeat the process with other metals, which can also be placed at precise locations within the nanorods. By performing up to seven sequential reactions with several different metals, they can create a veritable rainbow of particles--over 65,000 different combinations of metal sulfide materials are possible.
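To get a feel for why the number of possibilities grows so quickly, the toy enumeration below counts reaction sequences under assumed rules: a handful of hypothetical candidate metals and the three placement options described above. It is not the counting scheme the authors used to arrive at the 65,000 figure, only an illustration of how sequential, position-selective exchanges multiply the options.

```python
# Toy count of sequential, position-selective cation-exchange possibilities.
# The candidate metals and per-step placement options are assumptions for
# illustration; this is NOT the paper's actual counting scheme.

CANDIDATE_METALS = ["Zn", "Cd", "Co", "Ni", "Mn"]    # hypothetical choices
PLACEMENTS = ["one end", "both ends", "middle"]      # placements named in the text

choices_per_step = len(CANDIDATE_METALS) * len(PLACEMENTS)

total_sequences = 0
for n_steps in range(1, 8):  # up to seven sequential exchange reactions
    sequences = choices_per_step ** n_steps
    total_sequences += sequences
    print(f"{n_steps} step(s): {sequences:,} possible reaction sequences")

# Many sequences yield chemically identical rods, so the number of distinct
# particles is far smaller -- the paper reports over 65,000.
print(f"total reaction sequences across 1-7 steps: {total_sequences:,}")
```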

"The real beauty of our method is its simplicity," said Benjamin C. Steimle, a graduate student at Penn State and the first author of the paper. "It used to take months or years to make even one type of nanoparticle that contains several different materials. Two years ago we were really excited that we could make 47 different metal sulfide nanoparticles using an earlier version of this approach. Now that we've made some significant new advances and learned more about these systems, we can go way beyond what anyone has been able to do before. We are now able to produce nanoparticles with previously unimaginable complexity simply by controlling temperature and concentration, all using standard laboratory glassware and principles covered in an Introductory Chemistry course."

"The other really exciting aspect of this work is that it is rational and scalable," said Schaak. "Because we understand how everything works, we can identify a highly complex nanoparticle, plan out a way to make it, and then go into the laboratory and actually make it quite easily. And, these particles can be made in quantities that are useful. In principle, we can now make what we want and as much as we want. There are still limitations, of course--we can't wait until we are able to do this with even more types of materials--but even with what we have now, it changes how we think about what is possible to make."

Credit: 
Penn State

Revealed: The explosive origin of superluminous supernova SN 2006gy

Providing answers about its extraordinary brightness, researchers say the superluminous supernova SN 2006gy - one of the brightest stellar explosions ever studied, and discovered in 2006 - gained its exceptional luster when a normal Type Ia explosion smashed into a surrounding shell of ejected stellar material. Superluminous supernovae (SNe) are as much as 100 times more luminous than normal SNe, far more than can be explained by standard astrophysical mechanisms. While several models have been proposed to explain these rare, brilliant transients, the origin of their energy and the nature of the stars that produce them remain unclear. Early observations of SN 2006gy - one of the first observed superluminous SNe - indicated that the transient was a Type IIn supernova. However, a little more than a year after the explosion, SN 2006gy produced an unusual spectrum with unidentified emission lines. No one had deciphered this mysterious spectrum. Through modeling of supernova spectral scenarios, Anders Jerkstrand and colleagues identified these lines as due to a large amount of iron. Using spectral and radiation hydrodynamic modeling, Jerkstrand et al. simulated various possible mechanisms that could have produced SN 2006gy's unusual spectrum and brightness, featuring these iron lines. They found only one scenario consistent with the observations: a normal Type Ia supernova interacting with a dense shell of circumstellar material, probably ejected by the progenitor star roughly a century before the supernova explosion. Other superluminous SNe share similar properties with SN 2006gy and might also be caused by the same underlying mechanism.

Credit: 
American Association for the Advancement of Science (AAAS)

Chemicals in the environment: A focus on mixtures

image: The CITEPro technology platform allows researchers at the UFZ to perform an efficient (bio)analysis and evaluation of environmental chemicals.

Image: 
Bodo Tiedemann

Chemicals have improved our quality of life. But at the same time, they pose a considerable risk for humans and the environment: pesticides, pharmaceuticals and plasticisers enter the environment and the food chain, causing unwanted effects in addition to the desired ones. Despite the legislation in place, risk assessment and monitoring remain insufficient.

This is due, among other factors, to the fact that the current approach to evaluating the potential hazards of chemicals is based on a relatively small number of individual substances. Today, there is increased awareness that humans and the environment are exposed to a cocktail of tens of thousands of chemicals. Only a fraction of these chemicals have been identified to date; their effects on biological systems and the roles of individual chemicals and degradation products in the cocktail remain largely unclear. At the same time, the number of registered chemicals is rising rapidly: from 20 million to 156 million between 2002 and 2019. All this makes it difficult to detect cause-effect relationships and necessitates new theoretical models and methodological approaches.

For this reason, the review by the group of authors led by Prof. Beate Escher of the UFZ provides an overview of the technologies suitable for identifying chemicals in complex mixtures and capturing their effects. The authors also evaluate the potential and limitations of these technologies.

The publication makes clear that this is not only a question of analytical methods - the success of analytical procedures also depends on "what" samples are taken and "how" they are processed. Using the same approaches for different types of samples - ranging from water and soil to blood or tissue - makes it possible to compare the results later. Wipes and silicone wristbands, among other tools, are highlighted as especially innovative ways of capturing individuals' personal exposure to pollutants.

The possibilities provided by chemical analysis have seen enormous improvements thanks to the growth, evolution and accessibility of high-resolution mass spectrometry (HR-MS). Often coupled with further technologies, HR-MS can detect tens of thousands of signals in biological and environmental samples. It also forms the basis for "suspect screening" to identify unknown chemicals in complex mixtures. "This enables us, among other things, to detect new problematic pollutants in the environment," says Beate Escher. "But it will never be able to capture every single substance. Even substances present below the instrumental detection limit or below the effect threshold may contribute to risk."

The group of researchers consequently recommends supplementing chemical analysis procedures with bioanalytical tools that are specifically able to capture mixture effects in the evaluation of the toxicity of waste-water effluent and sediments. Traditionally, whole-organism in vivo bioassays were used for this purpose, but such bioassays suffered from limited sample throughput, among other disadvantages. The advancement of in vitro cellular bioassays has now opened up further possibilities that not only reduce the need for animal testing but are also amenable to high-throughput robotics. "The application of high-throughput in vitro assays for environmental risk assessment of mixtures and complex environmental samples is only emerging but has great potential," says Beate Escher with conviction.

Supplementing high-resolution mass spectrometry with bioanalytical tools makes it possible to capture information on the effects of all chemicals in a sample. Prof. Escher is of the opinion that a combination of these two tools has the potential to revolutionise environmental monitoring. This is one of the reasons why the CITEPro technology platform (Chemicals in the Environment Profiler) was established at the UFZ. This platform permits the preparation and testing of samples by means of high-throughput analytical and bioanalytical procedures. But CITEPro is more than mere hardware. It is a concept designed to characterise the exposome - in other words, capture the entirety of all environmental influences to which an individual is exposed over their lifetime. This includes external factors (chemicals in the air, in water or foodstuffs) and internal chemicals produced by an organism in response to various stressors.

Conclusion:

The number of chemicals identified in environmental samples using sophisticated instrumental analysis is steadily increasing. Over recent years, better tools have been developed to investigate their combined effects and mechanisms of toxicity. It nevertheless remains difficult to elucidate the drivers of chemical stress in the environment. The links between the environment, wildlife and humans can only be made by applying an integrated approach to monitoring and evaluation.

Tracking chemicals and their transformation products in the environment and in our bodies is an immense (bio)analytical challenge: sampling, extraction, chemical detection and data analysis all need to be fine-tuned to each other to obtain robust information.

Quantifying mixture effects is one way to capture all chemicals present and their bioactive transformation products. The clear relevance of mixtures and the fact that there are thousands of chemicals present in the environment and our bodies means that a shift in the existing regulatory paradigm towards mixture effects is urgently needed.

Credit: 
Helmholtz Centre for Environmental Research - UFZ