Earth

Study provides new insight into origin of Canadian Rockies

The Canadian Rocky Mountains were formed when the North American continent was dragged westward during the closure of an ocean basin off the west coast and collided with a microcontinent over 100 million years ago, according to a new study by University of Alberta scientists.

The research, based on high resolution data of Earth's subsurface at the Alberta-British Columbia (BC) border, favours an interpretation different from the traditional theory of how the Canadian Rocky Mountains formed. The traditional theory, known as the accretion model, suggests that a gradual accumulation of additional matter eventually formed the Canadian Rockies--unlike the sudden collision event proposed by this research.

"This research provides new evidence that the Canadian section of this mountain range was formed by two continents colliding," said Jeffrey Gu, professor in the Department of Physics and co-author on the study. "The proposed mechanism for mountain building may not apply to other parts of the Rocky Mountains due to highly variable boundary geometries and characteristics from north to south."

The study involved seismic data collected from a dense network of seismic stations in western Alberta and eastern BC, combined with geodynamic calculations and geological observations. The results suggest that an ocean basin off North America's west coast descended beneath the ribbon-shaped microcontinent, dragging North America westward, where it collided with the microcontinent.

"This study highlights how deep Earth images from geophysical methods can help us to understand the evolution of mountains, one of the most magnificent processes of plate tectonics observed at the Earth's surface," said Yunfeng Chen, who conducted this research during his PhD studies under the supervision of Gu. Chen received the Faculty of Science Doctoral Dissertation Award in 2018.

"There are other mountain belts around the world where a similar model may apply," said Claire Currie, associate professor of physics and co-author on the study. "Our data could be important for understanding mountain belts elsewhere, as well as building our understanding of the evolution of western North America."

Alberta and British Columbia communities supported these research efforts by hosting seismic stations on their land. This research is also supported by the Alberta Energy Regulator.

Credit: 
University of Alberta

Understanding the (ultra-small) structure of silicon nanocrystals

image: Silicon nanocrystals glowing in front of the solid-state nuclear magnetic resonance spectrometer used to illuminate their unique layered structure.

Image: 
Haoyang (Emmett) Yu

New research provides insight into the structure of silicon nanocrystals, a substance with promising applications ranging from the efficient lithium ion batteries that power your phone to medical imaging on the nanoscale.

The research was conducted by a team of University of Alberta chemists, led by two PhD students in the Department of Chemistry, Alyx Thiessen and Michelle Ha.

"Silicon nanocrystals are important components for a lot of modern technology, including lithium ion batteries," said, Thiessen, who is studying with Professor Jonathan Veinot. "The more we know about their structure, the more we'll understand about how they work and how they can be used for various applications."

In two recently published papers, the research team characterized the structure of silicon nanocrystals more quickly and accurately than ever before, using a cutting-edge technique known as dynamic nuclear polarization (DNP).

"Using the DNP technology, we were able to show that larger silicon nanocrystals have a layered structure that is disordered on the surface, with a crystalline core that is separated by a middle layer," explained Ha, who is studying under the supervision of Assistant Professor Vladimir Michaelis. "This is the first time this has been documented in silicon nanocrystals."

Silicon nanocrystals have proliferated through the world of scientific research. From applications in developing ultra-high capacity batteries to the next generation of medical imaging at the cellular level, their potential is seemingly endless.

"Understanding the structure of silicon nanocrystals is very useful," explained Thiessen. "By thoroughly examining the structure, we build our understanding of the properties of the crystals, which can in turn be used to optimize their function."

"And this will allow us to tailor the silicon nanocrystals to whatever application or field we want to," added Ha. "This research can impact many different areas of research, including the development of more accurate medical imaging technology to new, more efficient batteries. These silicon nanocrystals are extremely versatile."

Both Thiessen and Ha are students in the Alberta/Technical University of Munich International Graduate School for Hybrid Functional Materials (ATUMS) program, which allows them to experience an international cross-disciplinary research environment and conduct aspects of their research in Munich, Germany.

Credit: 
University of Alberta

Atrial fibrillation set to affect more than 14 million over-65s in the EU by 2060

Sophia Antipolis, 6 June 2019: Urgent action is needed to prevent, detect and treat atrial fibrillation to stop a substantial rise in disabling strokes. That's the main message of a paper published today in EP Europace, a journal of the European Society of Cardiology (ESC), during World Heart Rhythm Week.

Atrial fibrillation is the most common heart rhythm disorder and accounts for 0.28% to 2.6% of healthcare spending in European countries. Patients with atrial fibrillation have a five times higher risk of stroke, and 20% to 30% of strokes are caused by atrial fibrillation. Strokes due to atrial fibrillation are more disabling and more often fatal than strokes with other causes.

The study estimates that 7.6 million people over 65 in the EU had atrial fibrillation in 2016 and this will increase by 89% to 14.4 million by 2060. Prevalence is set to rise by 22%, from 7.8% to 9.5%. The proportion of these patients who are over 80 will rise from 51% to 65%.
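The reported growth figures follow directly from the quoted case counts and prevalence rates; a few lines of Python (using only the numbers above) reproduce them:

```python
# Back-of-envelope check of the figures quoted above (inputs from the
# paper; the arithmetic itself is ours).
cases_2016 = 7.6e6   # people over 65 with atrial fibrillation, EU, 2016
cases_2060 = 14.4e6  # projected for 2060

growth = (cases_2060 - cases_2016) / cases_2016
print(f"rise in cases: {growth:.0%}")  # ~89%

prevalence_2016 = 7.8  # percent of over-65s
prevalence_2060 = 9.5
rel_rise = (prevalence_2060 - prevalence_2016) / prevalence_2016
print(f"relative rise in prevalence: {rel_rise:.0%}")  # ~22%
```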

"Atrial fibrillation patients over 80 have even greater risks of stroke so this shift in demography has enormous implications for the EU," said study author Dr Antonio Di Carlo, of the Italian National Research Council, Florence, Italy. "Older patients also have more comorbidities linked to atrial fibrillation such as heart failure and cognitive impairment."

Prevention of atrial fibrillation is the same as for other cardiovascular conditions. This includes not smoking, exercising, eating a healthy diet, drinking alcohol only in moderation, and controlling blood pressure and diabetes.

Screening for atrial fibrillation is important because oral anticoagulation effectively prevents strokes in these patients. Dr Di Carlo said GPs should opportunistically screen for atrial fibrillation by performing pulse palpation during every consultation. Patients with an irregular pulse would have an electrocardiogram (ECG) for confirmation. "The majority of older people see their GP at least once a year, so this is an efficient and effective method to diagnose atrial fibrillation and prevent complications," he said.

GPs can inform patients about symptoms of atrial fibrillation such as palpitations, a racing or irregular pulse, shortness of breath, tiredness, chest pain and dizziness. And they can teach patients how to check for an irregular pulse using the fingertips, which can be reported and followed up with an ECG. "I recommend this approach for now," said Dr Di Carlo. "In future there may be reliable devices for first line screening by the public such as smartwatch apps, but these technologies are not ready for widespread use."

To calculate the numbers of atrial fibrillation patients over 65 anticipated in the EU in the next four decades, the researchers first measured the prevalence in a representative sample of people over 65 in Italy. They then used population projections from the statistical office of the EU (Eurostat) for all 28 Member States. The study was funded by the Italian Ministry of Health, National Centre for Disease Prevention and Control.

Credit: 
European Society of Cardiology

Researchers uncover indoor pollution hazards

image: Tom Jobson.

Image: 
WSU

PULLMAN, Wash - When most people think about air pollution, they think of summertime haze, traffic or smokestack exhaust, wintertime inversions, or wildfire smoke.

They rarely think of the air that they breathe inside their own homes.

In a new study of indoor air quality, a team of WSU researchers has found surprisingly high levels of pollutants, including formaldehyde and possibly mercury, in carefully monitored homes, and that these pollutants vary through the day and increase as temperatures rise. Their study, led by Tom Jobson, professor in the Department of Civil and Environmental Engineering, and graduate student Yibo Huangfu, was published in the journal Building and Environment.

Researchers know that air pollution, whether inside or outside, has a significant impact on people's health, including their heart, lungs, brain, and neurological health. But, while the government has increased regulation of outdoor air pollution over the past 40 years, there is little regulation of the air in people's homes. Building laws generally require that homes are structurally sound and that people are comfortable -- with minimal impacts from odors and humidity.

"People think of air pollution as an outdoor problem, but they fail to recognize that they're exposing themselves to much higher emission rates inside their homes," Jobson said.

These emissions come from a variety of sources, such as building materials, furniture, household chemical products, and from people's activities like cooking.

One of the ways to clear out harmful chemicals is with ventilation to the outdoors. But, with increased concern about climate change and interest in reducing energy use, builders are trying to make homes more airtight, which may inadvertently be worsening the problem.

In their study, the researchers looked at a variety of homes, meant to reflect typical housing styles and ages in the U.S. They found that formaldehyde levels rose as indoor temperatures increased - by between three and five parts per billion for every one degree Celsius rise.

"As a home gets hotter, there is a lot more formaldehyde in the home. The materials are hotter and they off-gas at higher rates," Jobson said.

The work shows how heat waves and changing regional climate might affect indoor air quality in the future.

"As people ride out a hot summer without air conditioning, they're going to be exposed to much higher concentrations of pollutants inside," he said.

The researchers also found that pollution levels varied throughout the day - they were highest in the afternoon and lowest in the early morning. Until now, manufacturers and builders have assumed that pollutants stay the same throughout the day as they consider the emissions from their materials, so they may not be getting a true picture of how much pollution people are exposed to indoors, he said.

The researchers also were surprised to find in one home that gypsum wallboard emitted high levels of formaldehyde and possibly mercury when it's heated. That home, built in the early 1970s, had radiant heating in its ceiling, which was a popular heating system at that time.

After finding high levels of formaldehyde in the home, the researchers suspected the gypsum wallboard radiant ceiling in the home. About half of the gypsum used in homes as drywall is made from waste products of the coal industry. They pulled a piece from the home, heated it up in their laboratory, and measured high levels of formaldehyde - as much as 159 parts per billion.

Household formaldehyde exposure is not regulated in the United States, but the US Agency for Toxic Substances and Disease Registry, part of the Centers for Disease Control, has set a minimal risk level of eight parts per billion.

"Exposure to these chemicals impacts people's ability to think and learn," said Jobson. "It's important for people to be more cognizant of the risk -- Opening a window is a good thing."

The researchers plan to continue looking at ways to reduce exposure to indoor air pollutants, such as using green building materials.

"We have to balance making more energy efficient homes with protecting our health and cognitive function," he said.

Credit: 
Washington State University

The universal beauty of the mountains can be seen in graphs

image: Earth's mountain ranges share the same universal features. They become visible when the topographic map (here: the Ligurian Alps) is transformed into a ridge map. (Source: IFJ PAN)

Image: 
Source: IFJ PAN

Mountains have character. The continuous gentle, wavy hills and wide valleys of the Carpathians, Appalachians or lower parts of the Alps contrast strongly with the soaring peaks, ragged ridges and deep ravines of the high Tatra mountains and Pyrenees, which are, in turn, different from the inaccessible, snow-covered Himalayan or Andean giants, along whose slopes flow long tongues of glaciers instead of water. Beneath this great diversity, however, lies a surprisingly similar structure.

Using graphs and fractals, scientists from the Institute of Nuclear Physics of the Polish Academy of Sciences (IFJ PAN) in Cracow looked at the structure of the massifs of our planet. Such diverse ranges as the Alps, the Pyrenees, the Scandinavian Mountains, the Baetic Mountains, the Himalayas, the Andes, the Appalachians, the Atlas Mountains and the Southern Alps all went under the statistical magnifying glass. The analysis, presented in the Journal of Complex Networks, resulted in an unexpected observation. It turns out that there is a universal similarity in the structure of the Earth's massifs. It can be seen in mountain ranges on all continents, regardless of the size of the peaks, their age, or even whether they are of tectonic or volcanic origin.

"It would seem that the only thing that the various mountain ranges have in common is that when you look at them, you have to really bend your head back. The real similarity only becomes visible when we transform a simple topographic map of the mountains into a ridge map, i.e. one that shows the axes of all the ridges," says Dr. Jaroslaw Kwapien (IFJ PAN), and then adds: "The axis of the ridge is a line running along the top of the mountain ridge in such a way that on both its sides the terrain falls downwards. It is thus the opposite of the axis of a valley."

Mountain ridges are not discrete creations. They merge into a large, branched structure, resembling a tree: from the main ridge ("the trunk") there are longer or shorter side ridges of the first order ("branches"), from them arise side ridges of the second order, and from these subsequent ones again and again. The whole has a clearly hierarchical structure and the number of degrees of complexity depends on the size of the area covered with mountains and can reach even several dozen. Structures of this type are presented in the form of various graphs. For example, each ridge of a given massif can be treated as a node. Two nodes are connected by lines (edges of the graph) when the corresponding ridges are also connected. In this sort of graph, some nodes are connected to many nodes, whereas others are connected to only a few.
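As a minimal illustration of this encoding (with invented ridges, not data from the study), such a graph can be assembled with a standard graph library:

```python
# Minimal sketch of the graph encoding described above: each ridge is a
# node, and two nodes are joined by an edge when the corresponding
# ridges meet. The ridges here are invented for illustration.
import networkx as nx

G = nx.Graph()
# Main ridge "A" with two first-order side ridges; one of them ("A1")
# branches again into a second-order ridge ("A1a").
G.add_edges_from([("A", "A1"), ("A", "A2"), ("A1", "A1a")])

print(dict(G.degree()))  # {'A': 2, 'A1': 2, 'A2': 1, 'A1a': 1}
```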

Graphs constructed for different massifs have different structures (topologies). One way of studying these is the node degree distribution, which records the number of nodes of each degree. In typical distributions, the largest counts occur at low degrees, because low-degree nodes are the most numerous; nodes of high degree - hubs - are usually rare. In the case of mountains, the main hub, usually corresponding to the longest ridge of the studied mountain chain, has a degree of several thousand. Second-order hubs, i.e. side ridges of the main ridge, have degrees of several hundred. The most numerous are nodes with a degree of one; there can be several hundred thousand of these.

"The node degree distribution of the ridges turns out to be of a power-law nature. This means that the number of nodes of a certain degree and, for example, the number of nodes with a degree that is half as much are in a constant relation, regardless of the selected degree. Each fragment of the distribution increased by a certain constant factor looks like a whole, which means that no scale is distinguished," says Dr. Kwapien.

Power-law distributions are found in graphs representing systems occurring in nature (e.g. when studying the links between proteins and enzymes in a living cell), as well as in our own activities (such as citations of scientific papers, the cooperation of actors in films, the neighbourhood of words in texts, links between websites). They often describe self-similar, fractal structures. One of the model examples of natural fractals is mountains: computer models of mountains are even generated by algorithms using fractal geometry, so the power-law topology of ridge graphs should not surprise anyone. However, the value of the power exponent did turn out to be a surprise.

"Regardless of the type of mountains, the exponent of the power-law distribution took on values over a very narrow range around the number 5/3. If we take into account the accuracy of our methodology, this narrow range of values may even mean that the exponents in each examined case were the same," notes Dr Kwapien.

The observed homogeneity results from the fact that in every part of our planet the main mechanisms responsible for sculpting mountains are basically the same. Tectonic or volcanic movements are necessary to raise the terrain, but the most important sculpting factors are water and glacial erosion. Water and ice crack and crush rocks and transfer the fragmented material to the lowlands. This results in gullies, canyons and mountain valleys, and therefore also ridges. Since the watercourses that form the drainage system of a given area are dendritic by nature (outside desert areas, of course), a similar structure also occurs in the ridge systems. But why are the mutual relations between the numbers of ridges with different numbers of branches so similar for different types of mountains?

"The situation becomes clearer when we consider gravity in addition to water," explains Dr. Kwapien. "When rock material is crushed, it becomes subject to the dynamics of loose bodies regardless of its chemical composition. Loose bodies on slopes can only remain there if the angles of inclination are not too great. The slopes must not be too steep. This is why the depth of valleys in nature is limited by their own width. Narrow river canyons with almost vertical walls only exist at an early stage of sculpture formation. They are rare in mature mountain formations because their walls have already undergone slanting."

The existence of river systems draining water from a given area, erosion crushing rocks and carving valleys, as well as gravitational landslides of rock rubble mean that the ridges cannot be arbitrarily close or far from each other. There is an optimal arrangement, independent of the properties of the mountain range and giving the mountains some universal features.

The above observations are complemented by another one made by the IFJ PAN physicists, concerning the dimensions of the fractal ridge structures. The fractal dimension describes how rough the structure of an object is. The line of a single ridge has a dimension of 1. If the lines (ridges) were packed extremely densely, their fractal dimension would correspond to the dimension of the surface, and would therefore be equal to 2. The researchers showed that if the ridge structures are presented as graphs whose nodes are the intersections of the ridges (it is at these intersections that peaks are most common), and whose edges are the ridges connecting the peaks, then the fractal dimensions of such graphs are, to a good approximation, equal to the number... 5/3.
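Fractal dimensions of this kind are typically estimated by box counting; the sketch below demonstrates the idea on a synthetic set of points whose dimension is known, rather than on real ridge structures:

```python
# Box-counting estimate of a fractal dimension, the standard technique
# behind figures like the 5/3 quoted above. The test set here is a
# straight line (true dimension 1), not a real ridge structure.
import numpy as np

def box_counting_dimension(points, sizes):
    """Fit log N(s) against log(1/s), where N(s) is the number of
    grid boxes of side s containing at least one point."""
    counts = []
    for s in sizes:
        boxes = {tuple(np.floor(p / s).astype(int)) for p in points}
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope

line = np.column_stack([np.linspace(0, 1, 10_000), np.zeros(10_000)])
sizes = np.array([0.1, 0.05, 0.02, 0.01, 0.005])
print(f"estimated dimension: {box_counting_dimension(line, sizes):.2f}")
```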

"In some graphs we see the hierarchy of mountain structures, in others their fractality. In both cases, for all types of mountains we encounter the same values of the appropriate numbers. This universalism gives food for thought," states Prof. Stanislaw Drozdz (IFJ PAN, Cracow University of Technology).

If different mountain ranges are so similar in structure, where are the sources of mountain diversity? Will it be possible to study them using graph theory and fractal geometry? Will it be possible to create a model in which an evolving graph imitates the successive stages of the formation of mountain relief? Finally, will it be possible to apply the transformation of ridge maps into graphs in practice, for example in cartography? These questions - and many others - will only be answered by future research.

Credit: 
The Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of Sciences

Cancer research using mini-organs from tumors and healthy tissue

video: Hans Clevers explains the use of human organoids, or mini-organs, to study tumor biology, model tumor development, and to test existing and new therapies in a patient specific way.

Image: 
© Hubrecht Institute, Video made by Esther Pardijs, Animations made by DEMCON | nymus3D

Hans Clevers (Hubrecht Institute) and David Tuveson (Cold Spring Harbor Laboratory), experts in the field of stem cells and organoids, have written a review that summarizes the use of organoids in cancer research and shines a light on prospects for the future. These mini-organs can be used to study tumor biology, model tumor development, and test existing and new therapies in a patient-specific way. The main points of the review are explained by Hans Clevers in the attached video. The review was published in the scientific journal Science on 6 June.

Credit: 
Hubrecht Institute

Parents of depressed teens in treatment may also benefit from counseling

'Families might be putting their own issues on the back burner while their kid gets help'

Very little research has examined the effect of treating teens, with medication or psychotherapy, on family relationships

CHICAGO - Parents often seek mental health treatment for a child struggling with depression, but the treatment shouldn't stop with the depressed teen, suggests a new Northwestern Medicine study.

The study found that while depressed teens were involved in active treatment, parents' marriages and parent-child conflict remained stable. Once the teens' treatment had finished, however, parents' marital relationships slightly worsened, the study found.

"Families might be putting their own issues on the back burner while their teen gets help," said first author Kelsey Howard, a doctoral candidate in clinical psychology at Northwestern University Feinberg School of Medicine. "Once the treatment ends, they're forced to face issues in their marriage or family that might have been simmering while their depressed teen was being treated."

To address this, Howard and her co-authors recommend that parents of teens who are depressed also have a check-in for their marital relationship.

"Families are interactive, fragile ecosystems, and a shift in a teenager's mood can undoubtedly alter the family's balance -- negatively or positively," Howard said.

While adolescent depression is well known to be a stressor for parents and families, this is one of only a few studies to examine how adolescent depression impacts family relationships and, in turn, how family relationships impact adolescent depression.

The study was published in the Journal of Abnormal Child Psychology.

The study found that parents of teens who had higher depressive symptoms at the end of their treatment experienced more marital problems and more parent-child conflict at later study visits. Conversely, parents whose kids showed fewer depressive symptoms at the end of treatment saw an improvement in later parent-child conflict.

"This study is important in that very little research has examined the effect of treating teens, with medication or psychotherapy, on family relationships," said Mark A. Reinecke, chief of psychology in the department of psychiatry and behavioral sciences at Feinberg. "Findings in this area have been inconsistent, and the effects can be subtle.

"The take-home message - that teen depression can affect families, and that parents of depressed teens may need support - is entirely sensible. It's something we should all keep in mind."

The study was a secondary analysis of data from 322 clinically depressed youths who participated in the 2007 Treatment for Adolescents with Depression Study, a landmark study on treating adolescent depression. As part of this study, adolescents' depression was measured during the treatment period, which lasted 36 weeks, and for one year afterward.

Credit: 
Northwestern University

Breaking down pathological protein aggregates

image: Alpha-synuclein fibrils. Shown here in an electron microscope image are fibrils produced in the laboratory for research purposes. The diameter of the fibrils is about 10 nanometres - 5000 times less than that of a human hair.

Image: 
ETH Zurich / Juan Gerez

Aggregates of the protein alpha-synuclein in the nerve cells of the brain play a key role in Parkinson's and other neurodegenerative diseases. These protein clumps can travel from nerve cell to nerve cell, causing the disease to progress. A team of researchers led by ETH scientists has now discovered how the body can rid itself of these damaging aggregates. Their findings might open up new approaches in the treatment of neurodegenerative diseases.

Relevant for these diseases are long, yet still microscopic, fibres, or fibrils, into which large numbers of alpha-synuclein molecules can aggregate. Individual, non-aggregated alpha-synuclein molecules, by contrast, are key to the functioning of a healthy brain, as this protein plays a central role in the release of the neurotransmitter dopamine in nerve cell synapses. When the protein aggregates into fibrils in a person's nerve cells - before which it must first change its three-dimensional shape - it can no longer carry out its normal function. The fibrils are also toxic to the nerve cells. In turn, dopamine-producing cells die, leaving the brain undersupplied with dopamine, which leads to typical Parkinson's clinical symptoms such as muscle tremors.

Breakdown mechanism deciphered

Using cell culture experiments, the researchers were able to show that it is the alpha-synuclein fibrils that enter healthy cells and accumulate within them. "Once the fibrils enter a new cell, they 'recruit' other alpha-synuclein molecules there, which then change their shape and aggregate together. This is how the fibrils are thought to infect cells one by one and, over time, take over entire regions of the brain," explains Paola Picotti, Professor of the Biology of Protein Networks at ETH Zurich. She conceived the study, published in the latest issue of the journal Science Translational Medicine, which was led by Juan Gerez, a former postdoc in her group.

The team of scientists from ETH Zurich, University Hospital Zurich and the University of California San Diego was also able to decipher a cellular mechanism that breaks down alpha-synuclein fibrils naturally. A protein complex called SCF detects the alpha-synuclein fibrils specifically and targets them to a known cellular breakdown mechanism. In this way, the spread of fibrils is blocked, as the researchers demonstrated in tests on mice: when the researchers switched off SCF's function, the alpha-synuclein fibrils were no longer cleared up in the nerve cells. Instead, they accumulated in the cells and spread throughout the brain.

Stem cell or gene therapy

Picotti and Gerez see promising opportunities in how this SCF breakdown mechanism could be applied in therapy. "The more active the SCF complex, the more the alpha-synuclein fibrils are cleared, which could slow down or eventually stop the progression of such neurodegenerative diseases," Gerez says. He goes on to explain that the SCF complex is very short-lived, dissipating within minutes. Therapeutic approaches would focus on stabilising the complex and increasing its ability to interact with alpha-synuclein fibrils. For example, drugs could be developed for this purpose.

Another approach to help Parkinson's patients would be to transplant nerve stem cells into their brains, as Picotti says. Previous attempts haven't been very successful, she explains, because the alpha-synuclein fibrils in the brain infected the healthy cells. "If we can manage to modify the stem cells in such a way that they either don't let fibrils in or immediately break down any fibrils they do let in, this could advance stem cell therapy," she concludes. Finally, gene therapy could be used to stabilise the SCF complex in nerve cells and thus increase its activity. "However, when it comes to potential therapies, we're still right at the beginning," Gerez says. "Whether effective therapies can be developed is still unclear."

Credit: 
ETH Zurich

Social interactions impact climate change predictions

image: Prof. Madhur Anand.

Image: 
University of Guelph

Something as simple as chatting with your neighbours about their new energy-efficient home renovations can affect wider climate change predictions, a new University of Guelph study reveals.

Using a new model that couples human behaviour to climate systems, Canadian researchers including a U of G ecologist have discovered that including social processes can alter climate change predictions, a finding that may hold a way to stem or even reduce global warming.

Environmental sciences professor Madhur Anand worked with colleagues at the University of Waterloo to develop a new mathematical model that, for the first time, accounts for social processes such as social learning in climate predictions.

Their results appear in a paper published in PLoS Computational Biology.

Human behaviour affects natural systems including climate, and climate systems in turn affect behaviour, said Anand, head of U of G's Global Ecological Change and Sustainability Laboratory and senior author of the new paper. But social processes are often neglected in climate models, she said.

"Climate change is a human-made problem. That's very well-understood by scientists. But we're stuck in terms of uptake of that knowledge and response. We've established the science of climate change, and we understand many of the impacts. But what do we need to do to slow it down?"

The researchers believe much of the answer lies in coupling climate change models with social learning, or how learning from others affects our opinions or actions.

Referring to work during the past decade on coupled human-environment systems with University of Waterloo mathematician Chris Bauch, Anand said, "We've studied everything from pest management to forest sustainability to human disease spread and found that human behaviour is key. So we decided to apply the framework to climate science."

For this new study, they combined a common climate prediction model with a new human behaviour model to look at interactions.

They found that social learning about mitigation strategies - such as hearing that a friend has bought a new hybrid car or adopted a plant-based diet - can influence social norms in ways that ultimately affect climate outcomes.

Anand said the rate of social learning is key. If that rate is low, with only a few people attempting to mitigate carbon emissions, it will take longer to change social norms and, in turn, to alter climate change predictions.

The more people become mitigators through social learning such as attending town hall meetings, taking courses or talking with neighbours, she said, "the faster the population will switch, and that will have a direct effect on reducing CO2 emissions."
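The qualitative effect of the social learning rate can be seen in a heavily simplified toy model; the published model is far richer, and every parameter and functional form below is invented purely for illustration:

```python
# Heavily simplified toy of a coupled socio-climate feedback, in the
# spirit of the study. All parameters here are invented placeholders.
def simulate(social_learning_rate, years=100, dt=0.1):
    x = 0.05        # initial fraction of mitigators in the population
    warming = 1.0   # warming to date (arbitrary starting point)
    total_emissions = 0.0
    for _ in range(int(years / dt)):
        # Perceived payoff for mitigating grows as warming worsens.
        payoff_gap = 0.5 * warming - 0.2
        # Replicator-style social learning: imitate better-off peers.
        x += dt * social_learning_rate * x * (1 - x) * payoff_gap
        x = min(max(x, 0.0), 1.0)
        emissions = 1.0 - x          # emissions fall as mitigators spread
        total_emissions += dt * emissions
        warming += dt * 0.01 * emissions
    return x, total_emissions

for rate in (0.05, 0.5, 2.0):
    x, e = simulate(rate)
    print(f"learning rate {rate}: mitigators {x:.2f}, emissions {e:.0f}")
```

In this sketch, raising the learning rate speeds up the switch to mitigation and lowers cumulative emissions, mirroring the behaviour Anand describes.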

Using the model to simulate steps needed to hold global warming to 1.5 degrees Celsius over pre-industrial levels as called for last fall in a special report by the Intergovernmental Panel on Climate Change (IPCC), the team found that a low social learning rate would ultimately fall short of the target.

A higher rate is needed to bring this target within reach, she said.

Anand said the socio-climate model suggests the best approach combines high social learning rates with novel mitigation measures such as government regulation or technology development. For example, widespread media coverage of last year's IPCC report and subsequent climate marches was followed by the announcement of Ottawa's new carbon tax on fuels - and rebates - in provinces and territories lacking emissions pricing plans, including Ontario.

Co-author Thomas Bury, a University of Waterloo graduate student, said, "Our socio-climate model indicates that an increase in social media and other campaigns to raise awareness, such as climate marches and international reports, should ideally be followed by governmental and other incentives to reduce carbon emissions."

Anand said the team's simulation also highlights the need to consider climate actions and outcomes as far as five decades from now. "If humans only think about the impacts of their behaviour on today or even tomorrow, we will never achieve the 1.5-degree target. As a society, we need to get used to thinking 50 years into the future with climate change."

The model also found that social variables are far more important than geophysical factors -- soil or plant respiration, surface heat reflectivity -- for meeting IPCC warming limits. That result was not unexpected, said Anand, but "it was surprising to see it captured so clearly and unequivocally."

Referring to human interactions, from word of mouth to social and traditional media, she added, "By looking at unique aspects of humans, maybe we can tap into these aspects to lead to the dramatic and widespread change that is urgently needed."

Credit: 
University of Guelph

Elasmobranchs getting slammed

image: Researchers analyzed four years of catch data from Tanjung Luar -- a fishing village specifically targeting sharks -- to identify catch abundance and seasonality of vulnerable or endangered species, and found that catch per unit effort (CPUE) of sharks and rays from 2014 to 2017 fluctuated but was not significantly different.

Image: 
Benaya Simeon

Elasmobranchs - sharks, rays, and skates - are at an elevated risk of extinction due to overfishing, and Indonesia is a global hub for commercial fishing of these slow-growing, cartilaginous fishes.

Researchers analyzed four years of catch data from Tanjung Luar - a fishing village specifically targeting sharks - to identify catch abundance and seasonality of vulnerable or endangered species, and found that catch per unit effort (CPUE) of sharks and rays from 2014 to 2017 fluctuated but was not significantly different.
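CPUE is simply catch divided by fishing effort; a tiny sketch (with invented numbers and an assumed effort unit of boat-days) shows how the index is computed and why similar CPUE can coexist with different raw totals:

```python
# Catch per unit effort (CPUE), the standard index referenced above:
# catch divided by fishing effort. The figures below are invented for
# illustration; the study's data come from Tanjung Luar landings, and
# "boat-days" is an assumed effort unit.
def cpue(total_catch_kg, effort_boat_days):
    return total_catch_kg / effort_boat_days

# Hypothetical yearly figures: similar CPUE despite different totals.
for year, catch, effort in [(2014, 52_000, 1_300), (2017, 44_000, 1_100)]:
    print(f"{year}: CPUE = {cpue(catch, effort):.1f} kg per boat-day")
```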

The results suggested that management measures should focus on gear control and fishery closures, which could have significant benefits for the conservation of elasmobranch species, and may help to improve the overall sustainability of the fishery.

Credit: 
Wildlife Conservation Society

Recreating embryonic conditions at break sites can help bones heal faster

Researchers at the University of Illinois at Chicago and the University of Pennsylvania have developed a unique technique that uses stem cells and flexible implantable bone-stabilizing plates to help speed the healing of large breaks or defects.

The technique allows the stem cells applied to break sites to experience some mechanical stress, as they do in developing embryos. These forces may help encourage stem cells to differentiate into cartilage and bone, as well as encourage other cells in the bone to regenerate.

Their findings are reported in the journal Science Translational Medicine.

Stem cells need environmental cues to differentiate into cells that make up unique tissues. Stem cells that give rise to bone and cartilage are subject to mechanical forces during development and healing, explained Eben Alsberg, the Richard and Loan Hill Professor of Bioengineering and Orthopaedics at the University of Illinois at Chicago and a senior author on the paper.

When a bone heals, stem cells in the marrow near the break site first become cartilage cells and later bone cells -- ultimately knitting together the break. When there are large gaps between broken or deformed bones, applying additional stem cells to break sites can help bones heal faster by either actively participating in the regenerative process or stimulating bone formation by neighboring cells.

But to use stem cells for bone regeneration, they need to be delivered to the defect site and differentiate appropriately to stimulate repair.

Alsberg and colleagues developed a unique preparation of the cells that can be handled and manipulated easily for implantation and that supports the cellular differentiation events that occur in embryonic bone development.

In Alsberg's preparation, stem cells are cultured so that they link to each other to form either sheets or plugs. The preparation also contains gelatin microparticles loaded with growth factors that help the stem cells differentiate. These sheets or plugs can be manipulated and implanted and reduce the tendency for cells to drift away. Alsberg calls these materials "condensates."

In previous studies, Alsberg and colleagues used condensates in a rodent model to help heal bone defects in the skull. They saw that the condensates stayed in place and were able to improve the rate and extent of bone regeneration.

More recently, Alsberg teamed up with Joel Boerckel, assistant professor of orthopedic surgery and bioengineering at Penn Medicine and a senior author on the paper, to take the idea one step further.

Boerckel has developed a unique, flexible "fixator." Fixators, as they are known to orthopedic surgeons, usually are stiff metal plates or bars that are used to stabilize bones at break sites. These kinds of fixators minimize the amount of mechanical stress breaks experience as they are healing.

Boerckel's flexible fixator would allow the cells in Alsberg's condensates to experience the compressive forces that are critical for stimulating enhanced cartilage and bone formation.

The researchers used a rat model to determine how the mechanical forces present within bone defects affected the ability of condensates to contribute to bone regeneration. When the researchers used condensate sheets together with a flexible fixator in rats with a defect in their femur, they saw that there was enhanced healing and the bones had better mechanical function compared with control rats that received condensates and stiff, traditional fixators.

"Devices and techniques we develop out of this research could also influence the way we implement physical therapy after injury," Boerckel said. "Our findings support the emerging paradigm of 'regenerative rehabilitation,' a concept that marries principles from physical therapy and regenerative medicine. Our goals are to understand how mechanical stimuli influence cell behavior to better impact patient outcomes without additional drugs or devices."

Anna McDermott, Devon Mason and Joseph Collins of the University of Pennsylvania; Samuel Herberg and Rui Tang of Case Western Reserve University; Hope Pearson and James Dawahare of the University of Notre Dame; Amit Patwa and Mark Grinstaff of Boston University, and Daniel Kelly of Trinity College Dublin are co-authors on the paper.

Credit: 
University of Illinois Chicago

'Lubricating' sediments were critical in making the continents move

image: Photo of Grand Canyon, Colorado. The "Great Unconformity" indicates the major global surface erosion event. (public domain / pixabay.com)

Image: 
public domain / pixabay.com

Plate tectonics is a key geological process on Earth, shaping its surface, and making it unique among the planets in the Solar System. Yet, how plate tectonics emerged and which factors controlled its evolution remains controversial. Now, Stephan V. Sobolev from the German Research Centre for Geosciences GFZ and the University of Potsdam and Michael Brown from the University of Maryland take a new approach to solving this riddle. In a study published in the journal Nature, they propose that natural lubrication by debris from surface erosion was crucial in starting and maintaining plate tectonics.

Since the 1960s it has been known that plate tectonics is driven by so-called deep mantle convection, a process that stirs the hotter and colder matter inside Earth according to the laws of thermodynamics. Therefore, according to common thinking, plate tectonics must depend only on so-called deep-Earth processes. A likely control mechanism in this process was the cooling of the Earth's mantle. Stephan V. Sobolev and Michael Brown recognize that this process is important, but suggest that surface erosion events were at least as important for the evolution of plate tectonics. "Our hypothesis is counter-intuitive," GFZ's Stephan V. Sobolev explains. "That was the main problem for us, and we expect it will be the main problem for the community in accepting our ideas."

Based on geodynamic modelling, Sobolev and Brown suggest that the emergence and evolution of plate tectonics on Earth was controlled by the rise of the continents above sea level and the following major surface erosion events. Erosion processes produce continental sediments working as a lubricant for subduction - a key process of plate tectonics. Like engine oil that reduces friction between the moving parts of an engine, continental sediments reduce friction between the subducting plate and overriding plate, according to Sobolev and Brown.

A multidisciplinary and multi-scale approach

The researchers tested their hypothesis using geological and geochemical data that had already been published. These data show that the first clear evidence for plate tectonics dates from 2.5 to 3 billion years ago. That was also around the time when the Earth's continents rose above sea level and the first major glaciations occurred on the planet. The first supercontinent in Earth's history, called Columbia, assembled about 2.2 to 1.8 billion years ago, following the global glaciation and a major surface erosion event.

Later on, the largest surface erosion event in Earth's history followed the global 'Snowball Earth' glaciations 700 to 600 million years ago. It produced the famous global geological gap called the 'Great Unconformity'. The huge amount of continental sediment produced during this erosion event was transported to the oceans, lubricating subducting slabs and kick-starting the modern active phase of plate tectonics, Stephan V. Sobolev and Michael Brown write in their study.

More geochemical data needed

The study's key feature is a multidisciplinary and multi-scale approach. "We suggested our hypothesis based on global geodynamic models of plate tectonics and regional models of subduction in the South American Andes", Stephan V. Sobolev explains. "Then we used published geological and geochemical data to verify the hypothesis. Only the synergetic combination of all these disciplines made this study possible."

Despite the support from existing data, more geochemical data are required to conclusively test the hypothesis, Sobolev and Brown say. "It must be fully quantified, which in turn will require coupled modelling of deep mantle convection and plate tectonics, surface processes and even climate, which is an important factor controlling surface erosion. That is an exciting challenge for the Earth system modelling community," Stephan V. Sobolev says. "This will require building new types of models tightly linking deep Earth and surface processes."

Credit: 
GFZ GeoForschungsZentrum Potsdam, Helmholtz Centre

As hot as the sun's interior

image: Dr. Zhanna Samsonova and Dr. Daniil Kartashov are preparing an experiment on the JETI laser in a laboratory of the Institute of Optics and Quantum Electronics at the Friedrich Schiller University Jena.

Image: 
Jan-Peter Kasper / University Jena

The three classic physical states - solid, liquid and gaseous - can be observed in any normal kitchen, for example when you bring an ice cube to the boil. But if you heat material even further, so that the atoms of a substance collide and the electrons separate from them, then another state is reached: plasma. More than 99 per cent of material in space is present in this form, inside stars for instance. It is therefore no wonder that physicists are keen to study such material. Unfortunately, creating and studying plasmas on Earth using the high temperature and pressure that exist inside stars is extremely challenging for various reasons. Physicists at Friedrich Schiller University in Jena have now managed to solve some of these problems, and they have reported on their results in the renowned research journal Physical Review X.

Nanowires let light through

"To heat material in such a way that plasma is formed, we need correspondingly high energy. We generally use light in the form of a large laser to do this," explains Christian Spielmann of the University of Jena. "However, this light has to be very short-pulsed, so that the material does not immediately expand when it has reached the appropriate temperature, but holds together as dense plasma for a brief period." There is a problem with this experimental setup, though: "When the laser beam hits the sample, plasma is created. However, it almost immediately starts to act like a mirror and reflects a large part of the incoming energy, which therefore fails to penetrate the matter fully. The longer the wavelength of the laser pulse, the more critical the problem," says Zhanna Samsonova, who played a leading role in the project.

To avoid this mirror effect, the researchers in Jena used samples made of silicon wires. The diameter of such wires - a few hundred nanometres - is smaller than the wavelength of around four micrometres of the incoming light. "We were the first to use a laser with such a long wavelength for the creation of plasma," says Spielmann. "The light penetrates between the wires in the sample and heats them from all sides, so that for a few picoseconds, a significantly larger volume of plasma is created than if the laser is reflected. Around 70 per cent of the energy manages to penetrate the sample." Furthermore, thanks to the short laser pulses, the heated material exists slightly longer before it expands. Finally, using X-ray spectroscopy, researchers can retrieve valuable information about the state of the material.

Maximum values for temperature and density

"With our method, it is possible to achieve new maximum values for temperature and density in a laboratory," says Spielmann. With a temperature of around 10 million Kelvin, the plasma is far hotter than material on the surface of the Sun, for example. Spielmann also mentions the cooperation partners in the project. For the laser experiments, the Jena scientists used a facility at the Vienna University of Technology; the samples come from the National Metrology Institute of Germany in Braunschweig; and computer simulations for confirming the findings come from colleagues in Darmstadt and Düsseldorf.

The Jena team's results are a ground-breaking success, offering a completely new approach to plasma research. Theories on the state of plasma can be verified through experiments and subsequent computer simulations. This will enable researchers to better understand cosmological processes. In addition, the scientists are carrying out valuable preparatory work for the installation of large-scale apparatus. For example, the international particle accelerator 'Facility for Antiproton and Ion Research' (FAIR) is currently being set up in Darmstadt and should become operational around 2025. Thanks to the new information, it will be possible to select specific areas that merit closer examination.

Credit: 
Friedrich-Schiller-Universitaet Jena

Study: Cholesterol in eggs tied to cardiac disease, death

LOWELL, Mass. - The risk of heart disease and death increases with the number of eggs an individual consumes, according to a UMass Lowell nutrition expert who has studied the issue.

Research that tracked the diets, health and lifestyle habits of nearly 30,000 adults across the country for as long as 31 years has found that the cholesterol in eggs, when consumed in large quantities, is associated with adverse health effects, according to Katherine Tucker, a biomedical and nutritional sciences professor in UMass Lowell's Zuckerberg College of Health Sciences, who co-authored the analysis. The study was published in the Journal of the American Medical Association.

The study results come as egg consumption in the country continues to rise. In 2017, people ate an average of 279 eggs per year, compared with 254 eggs in 2012, according to the U.S. Department of Agriculture.

Current U.S. Dietary Guidelines for Americans do not offer advice on the number of eggs individuals should eat each day. The guidelines, which are updated every five years, do not include this because nutrition experts had begun to believe saturated fats were the driving factor behind high cholesterol levels, rather than eggs, according to Tucker. However, prior to 2015, the guidelines did recommend individuals consume no more than 300 milligrams of cholesterol a day, she said.

One large egg contains nearly 200 milligrams of cholesterol, roughly the same amount as an 8-ounce steak, according to the USDA. Other foods that contain high levels of cholesterol include processed meats, cheese and high-fat dairy products.

While the new research does not offer specific recommendations on egg or cholesterol consumption, it found that each additional 300 milligrams of cholesterol consumed beyond a baseline of 300 milligrams per day was associated with a 17 percent higher risk of cardiovascular disease and an 18 percent higher risk of death.
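To get a feel for the scale of that association, the increments can be translated into a relative risk; the 17% per 300 mg figure is from the study, while compounding across increments is our own simplifying assumption:

```python
# Illustrative translation of the reported dose-response into a
# relative risk. The 17% figure per 300 mg/day increment comes from
# the study; compounding increments multiplicatively is our own
# simplifying assumption.
def relative_cvd_risk(cholesterol_mg_per_day, baseline_mg=300.0,
                      step_mg=300.0, risk_per_step=0.17):
    extra_steps = max(cholesterol_mg_per_day - baseline_mg, 0.0) / step_mg
    return (1.0 + risk_per_step) ** extra_steps

# A daily three-egg omelet adds roughly 600 mg on top of the baseline
# (about 200 mg of cholesterol per large egg, per the USDA):
print(f"relative risk: ~{relative_cvd_risk(900):.2f}x")  # ~1.37x
```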

Eating several eggs a week "is reasonable," said Tucker, who noted they include nutrients beneficial to eye and bone health. "But I recommend people avoid eating three-egg omelets every day. Nutrition is all about moderation and balance."

Research results also determined that study participants' exercise regimen and overall diet quality, including the amount and type of fat they consumed, did not change the link between cholesterol in one's diet and risk of cardiovascular disease and death.

"This is a strong study because the modeling adjusted for factors such as the quality of the diet," Tucker said. "Even for people on healthy diets, the harmful effect of higher intake of eggs and cholesterol was consistent."

Credit: 
University of Massachusetts Lowell

UV light may illuminate improvements for next generation electronic devices

image: Determining the graphene-GaN heterojunction interface under ultraviolet illumination. The image shows the fabrication process of a vertical Schottky junction with monolayer graphene on free-standing GaN.

Image: 
Golap Kalita, Ph.D., Nagoya Institute of Technology, Japan

Adding one more layer of atoms to already infinitesimal semiconductors could enable a new generation of electronic devices. This work to build better and faster electronics is well underway, but little was known about how to test the ingredients of these devices to ensure performance. Now, researchers from the Nagoya Institute of Technology (NITech) in Japan have developed a method to make sure the connections between the two-dimensional layer of atoms and the semiconductor are as perfect as possible.

The researchers published their results on April 15 in Applied Physics Letters.

They applied a layer of graphene to gallium nitride, a commonly used semiconductor. The graphene is a single layer of atoms, while the gallium nitride is a three-dimensional structure. Together, graphene and gallium nitride form what is known as a heterojunction device, whose behavior is highly sensitive to the properties of the metal-semiconductor interface.

According to Golap Kalita, Ph.D., an associate professor at NITech, understanding GaN heterojunction devices and how to improve them is critical for better device performance.

"Our team found a way to determine the interface properties of the graphene and gallium nitride heterojunction by characterizing the device under ultraviolet illumination," Kalita said.

The interface between the graphene and the gallium nitride should be free of impurities, especially ones that gain energy from light. When the researchers shined ultraviolet (UV) light on the heterojunction device, they found photo-excited electrons (excitons) trapped at the interface and interfering with the transfer of information.

The gallium nitride contains surface-level defects and other imperfections that allow such photo-excited electrons to become trapped at the interface.

"We found that the interface states of graphene and gallium nitride have a significant influence on the junction behavior and device properties," Kalita said.

One such property is called electrical hysteresis, a phenomenon in which electrons trapped at the interface shift the behavior of the device. This trapping of electrons is extremely sensitive to UV light: once UV light is shined on the heterojunction, the excited electrons accumulate at the interface and remain trapped, creating a large hysteresis window.

However, when the researchers applied a more refined layer of graphene to gallium nitride, they didn't see any hysteresis effect without light illumination, implying a cleaner match at the interface. But it wasn't perfect -- under UV illumination, photo-excited electrons were still trapped because of inherent defects in the gallium nitride.

"This finding showed that the graphene/GaN heterojunction interface can be evaluated by the ultraviolet illumination process," Kalita said.

The ability to evaluate the purity of the interface is invaluable in the development of high-performance devices, according to the researchers.

"This study will open up new possibilities to characterize other heterojunction interfaces by an ultraviolet light illumination process," Kalita said. "Ultimately, our goal is to understand interface of various two- and three-dimensional heterostructures to develop novel optoelectronic devices with graphene."

Credit: 
Nagoya Institute of Technology