
A hint of new physics in polarized radiation from the early universe

image: As the light of the cosmic microwave background emitted 13.8 billion years ago (left image) travels through the Universe until observed on Earth (right image), the direction in which the electromagnetic wave oscillates (orange line) is rotated by an angle β. The rotation could be caused by dark matter or dark energy interacting with the light of the cosmic microwave background, which changes the patterns of polarization (black lines inside the images). The red and blue regions in the images show hot and cold regions of the cosmic microwave background, respectively.

Image: 
Y. Minami / KEK

Using Planck data from the cosmic microwave background radiation, an international team of researchers has observed a hint of new physics. The team developed a new method to measure the polarization angle of the ancient light by calibrating it with dust emission from our own Milky Way.

While the signal is not detected with enough precision to draw definite conclusions, it may suggest that dark matter or dark energy causes a violation of the so-called "parity symmetry."

The laws of physics governing the Universe are thought not to change when flipped around in a mirror. For example, electromagnetism works the same regardless of whether you are in the original system, or in a mirrored system in which all spatial coordinates have been flipped.

If this symmetry, called "parity," is violated, it may hold the key to understanding the elusive nature of dark matter and dark energy, which occupy 25 and 70 percent of the energy budget of the Universe today, respectively. While both dark, these two components have opposite effects on the evolution of the Universe: dark matter attracts, while dark energy causes the Universe to expand ever faster.

A new study, including researchers from the Institute of Particle and Nuclear Studies (IPNS) at the High Energy Accelerator Research Organization (KEK), the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU) of the University of Tokyo, and the Max Planck Institute for Astrophysics (MPA), reports on a tantalizing hint of new physics--with 99.2 percent confidence level--which violates parity symmetry.

Their findings were published in the journal Physical Review Letters on November 23, 2020; the paper was selected as the "Editors' Suggestion," judged by editors of the journal to be important, interesting, and well written.

The hint of a violation of parity symmetry was found in the cosmic microwave background radiation, the remnant light of the Big Bang. The key is the polarized light of the cosmic microwave background. Light is a propagating electromagnetic wave. When it consists of waves oscillating in a preferred direction, physicists call it "polarized." The polarization arises when the light is scattered.

Sunlight, for instance, consists of waves with all possible oscillating directions; thus, it is not polarized. The light of a rainbow, meanwhile, is polarized because the sunlight is scattered by water droplets in the atmosphere. Similarly, the light of the cosmic microwave background initially became polarized when scattered by electrons 400,000 years after the Big Bang. As this light traveled through the Universe for 13.8 billion years, the interaction of the cosmic microwave background with dark matter or dark energy could cause the plane of polarization to rotate by an angle (see figure).

"If dark matter or dark energy interact with the light of the cosmic microwave background in a way that violates parity symmetry, we can find its signature in the polarization data," points out Yuto Minami, a postdoctoral fellow at IPNS, KEK.

To measure the rotation angle, the scientists needed polarization-sensitive detectors, such as those onboard the Planck satellite of the European Space Agency (ESA). And they needed to know how the polarization-sensitive detectors are oriented relative to the sky. If this information was not known with sufficient precision, the measured polarization plane would appear to be rotated artificially, creating a false signal.

In the past, uncertainties over the artificial rotation introduced by the detectors themselves limited the measurement accuracy of the cosmic polarization angle (see figure).

"We developed a new method to determine the artificial rotation using the polarized light emitted by dust in our Milky Way," said Minami. "With this method, we have achieved a precision that is twice that of the previous work, and are finally able to measure the (polarization angle)."

The distance traveled by the light from dust within the Milky Way is much shorter than that of the cosmic microwave background. This means that the dust emission is not affected by dark matter or dark energy, i.e. the polarization angle β is present only in the light of the cosmic microwave background, while the artificial rotation affects both. The difference in the measured polarization angle between the two sources of light can thus be used to measure β.
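In essence, each source of polarized light is observed with the same unknown instrumental rotation, while only the cosmic microwave background additionally carries the cosmic rotation β. Below is a minimal numerical sketch of this differencing idea in Python; the angle values are invented for illustration and are not the Planck measurements.

```python
# Sketch of the differencing idea behind the new method (illustrative values only).
alpha = 0.30   # degrees: artificial rotation from detector miscalibration (unknown)
beta = 0.35    # degrees: cosmic rotation affecting only the CMB (the quantity sought)

measured_dust = alpha          # Galactic dust: too nearby to acquire the cosmic rotation
measured_cmb = alpha + beta    # CMB: carries both the miscalibration and beta

# Subtracting the two measurements cancels the unknown miscalibration:
beta_estimate = measured_cmb - measured_dust
print(f"recovered beta = {beta_estimate:.2f} degrees")
```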

The research team applied the new method to measure β from the polarization data taken by the Planck satellite. They found a hint of a violation of parity symmetry at the 99.2 percent confidence level. To claim a discovery of new physics, much greater statistical significance, or a confidence level of 99.99995 percent, is required.
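For reference, confidence levels of this kind are often translated into the "sigma" language of particle physics: 99.2 percent corresponds to roughly 2.4 to 2.7 standard deviations depending on the convention, while the discovery threshold quoted above corresponds to about 5. A quick standalone check of the conversion (not part of the study's analysis) can be done with scipy:

```python
from scipy.stats import norm

def confidence_to_sigma(confidence, two_sided=True):
    """Convert a confidence level (as a fraction) into Gaussian standard deviations."""
    if two_sided:
        return norm.ppf(0.5 + confidence / 2.0)
    return norm.ppf(confidence)

print(confidence_to_sigma(0.992))      # about 2.7 sigma (two-sided convention)
print(confidence_to_sigma(0.9999995))  # about 5 sigma (two-sided convention)
```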

Eiichiro Komatsu, director at the MPA and Principal Investigator at the Kavli IPMU, said: "It is clear that we have not found definitive evidence for new physics yet; higher statistical significance is needed to confirm this signal. But we are excited because our new method finally allowed us to make this 'impossible' measurement, which may point to new physics."

To confirm this signal, the new method can be applied to any of the existing--and future--experiments measuring polarization of the cosmic microwave background, such as Simons Array and LiteBIRD, in which both KEK and the Kavli IPMU are involved.

Credit: 
Kavli Institute for the Physics and Mathematics of the Universe

New microscope technique reveals details of droplet nucleation

Nucleation is a ubiquitous phenomenon that governs the formation of both droplets and bubbles in systems used for condensation, desalination, water splitting, crystal growth, and many other important industrial processes. Now, for the first time, a new microscopy technique developed at MIT and elsewhere allows the process to be observed directly in detail, which could facilitate the design of improved, more efficient surfaces for a variety of such processes.

The innovation uses conventional scanning electron microscope equipment but adds a new processing technique that can increase the overall sensitivity by as much as tenfold and also improves contrast and resolution. Using this approach, the researchers were able to directly observe the spatial distribution of nucleation sites on a surface and track how that changed over time. The team then used this information to derive a precise mathematical description of the process and the variables controlling it.

The new technique could potentially be applied to a wide variety of research areas. It is described today in the journal Cell Reports Physical Science, in a paper by MIT graduate student Lenan Zhang; visiting research scientist Ryuichi Iwata; professor of mechanical engineering and department head Evelyn Wang; and nine others at MIT, the University of Illinois at Urbana-Champaign, and Shanghai Jiao Tong University.

"A really powerful opportunity"

When droplets condense on a flat surface, such as on the condensers that cycle the steam in electric power plants back into water, each droplet requires an initial nucleation site, from which it builds up. The formation of those nucleation sites is random and unpredictable, so the design of such systems relies on statistical estimates of their distribution. According to the new findings, however, the statistical method that's been used for these calculations for decades is incorrect, and a different one should be used instead.

The high-resolution images of the nucleation process, along with mathematical models the team developed, make it possible to describe the distribution of nucleation sites in strict quantitative terms. "The reason this is so important," Wang says, "is because nucleation pretty much happens in everything, in a lot of physical processes, whether it's natural or in engineered materials and systems. Because of that, I think understanding this more fundamentally is a really powerful opportunity."

The process they used, called phase-enhanced environmental scanning electron microscopy (p-ESEM), makes it possible to peer through the electronic fog caused by a cloud of electrons scattering from moving gas molecules over the surface being imaged. Conventional ESEM "can image a very wide sample of material, which is very unique compared to a typical electron microscope, but the resolution is poor" because of this electron scattering, which generates random noise, Zhang says.

Taking advantage of the fact that electrons can be described as either particles or waves, the researchers found a way to use the phase of the electron waves, and the delays in that phase generated when the electron strikes something. This phase-delay information is extremely sensitive to the slightest perturbations, down to the nanometer scale, Zhang says, and the technique they developed makes it possible to use these electron-wave phase relationships to reconstruct a more detailed image.

By using this method, he says, "we can get much better enhancement for the imaging contrast, and then we are capable of reconstructing or directly imaging the electrons at a few microns or even a submicron scale. This allows us to see the nucleation process and the distribution of the huge number of nucleation sites."

The advance enabled the team to study fundamental problems about the nucleation process, such as the relationship between the site density and the closest distance between sites. It turns out the estimates of that relationship that engineers have used for over half a century are incorrect. They have been based on a Poisson distribution for both the site density and the nearest-neighbor function, when in fact the new work shows that a different distribution, the Rayleigh distribution, more accurately describes the nearest-neighbor relationship.
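The connection between the two can be illustrated directly: if nucleation sites scatter over a surface as a two-dimensional Poisson process with density λ, the distance from a site to its nearest neighbor follows a Rayleigh-type law with cumulative distribution 1 - exp(-λπr²). The short simulation below, an independent sketch rather than the authors' analysis code, checks this against the predicted median distance:

```python
import numpy as np

# Illustration: scatter nucleation sites uniformly at random (a 2D Poisson process)
# and compare the empirical median nearest-neighbor distance with the Rayleigh-type
# prediction sqrt(ln 2 / (pi * density)).
rng = np.random.default_rng(0)
density = 200.0                  # expected sites per unit area
n_sites = rng.poisson(density)   # number of sites landing in a unit square
sites = rng.random((n_sites, 2))

# Nearest-neighbor distance for every site (brute force; edge effects ignored).
dists = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=-1)
np.fill_diagonal(dists, np.inf)
nearest = dists.min(axis=1)

print(f"empirical median nearest-neighbor distance: {np.median(nearest):.4f}")
print(f"predicted median (Rayleigh-type law):       {np.sqrt(np.log(2) / (np.pi * density)):.4f}")
```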

Zhang explains that this is important, because "nucleation is a very microscopic behavior, but the distribution of nucleation sites on this microscopic scale actually determines the macroscopic behavior of the system." For example, in condensation and boiling it determines the heat transfer coefficient, and in boiling even the critical heat flux, the measure that determines how hot a boiling-water system can get before triggering a catastrophic failure.

The findings also relate to far more than just water condensation. "Our finding about the nucleation site distribution is universal," Iwata says. "It can be applied to a variety of systems involving a nucleation process, such as water splitting and material growth." For example, he says, water splitting systems can be used to generate fuel in the form of hydrogen from renewable electricity. The dynamics of bubble formation in such systems is key to their overall performance and is determined in large part by the nucleation process.

Iwata adds that "it sounds like water splitting and condensation are very different phenomena, but we found a universal law amongst them. So we are so excited about that."

Diverse applications

Many other phenomena also rely on nucleation, including such processes as the growth of crystalline films, including diamond, across surfaces. Such processes are increasingly important in a wide variety of high-tech applications.

In addition to nucleation, the new p-ESEM technique the team developed can also be used to probe a variety of different physical processes, the researchers say. Zhang says it could be applied also to "electrochemical processes, polymer physics, and biomaterials, because all these kinds of material are widely studied using the conventional ESEM. Yet, by using the p-ESEM, we can definitely get a much better performance due to the intrinsic high sensitivity" of this system.

The p-ESEM system, Zhang says, by improving contrast and sensitivity, can improve the intensity of the signal in relation to background noise by up to 10 times.

Credit: 
Massachusetts Institute of Technology

Carbon dioxide converted to ethylene -- the 'rice of the industry'

image: Real-time analysis of the catalyst surface during electrochemical conversion of carbon dioxide to ethylene.

Image: 
Korea Institute of Science and Technology (KIST)

In recent times, "electrochemical conversion (e-chemical)" technology--which converts carbon dioxide into high-value-added compounds using renewable electricity--has gained research attention as a carbon capture and utilization (CCU) technology. This green carbon resource technology employs electrochemical reactions, with carbon dioxide and water as the only feedstocks instead of conventional fossil fuels, to synthesize various compounds. Electrochemical CO2 conversion can produce molecules important to the petrochemical industry, such as carbon monoxide and ethylene. Ethylene, referred to as the "rice of the industry", is widely used to produce various chemical products and polymers, but it is more challenging to obtain from electrochemical CO2 reduction. The limited understanding of the reaction pathway by which carbon dioxide is converted to ethylene has hampered the development of high-performance catalyst systems and their extension to the production of more valuable chemicals.

To overcome this limitation, a domestic research team in South Korea has made a breakthrough in unveiling a key path-triggering intermediate in the ethylene production reaction. Dr. Yun-Jeong Hwang and her team at the Clean Energy Research Center of the Korea Institute of Science and Technology (KIST) have announced that they successfully observed the key intermediates adsorbed on the surface of a copper-based catalyst during electrochemical CO2 reduction to ethylene and analyzed their behavior in real time. This research was conducted in collaboration with Professor Woo-Yul Kim and his team at the Department of Chemical and Biological Engineering, Sookmyung Women's University (President: Yoon-Geum Jang), with the support of the climate change response technology development project (Next Generation Carbon Upcycling Project Group, led by Ki-Won Jun).

It has been reported that copper-based catalysts can promote carbon dioxide conversion to synthesize not only relatively simple carbon monoxide or formic acid but also multi-carbon compounds such as ethylene and ethanol. Nevertheless, the development of control technology for selectively synthesizing high-value-added compounds has been limited because of the absence of information on major intermediates and pathways of the carbon-carbon bond forming reaction.

Through infrared spectroscopy, the research team observed the intermediate responsible for ethylene formation (*OCCO) as well as the one responsible for methane production (*CHO). The ethylene intermediate is a dimer of carbon monoxide formed during the carbon dioxide conversion reaction on the surface of the copper nanoparticle catalyst. Carbon monoxide and the ethylene intermediate (*OCCO) were produced at the same time, whereas the methane intermediate (*CHO) appeared more slowly than the other two, suggesting the possibility of further improving the selectivity of compound formation on the catalyst surface by controlling the reaction pathway.

In addition, copper hydroxide (Cu(OH)2) nanowire was proposed as a promising catalyst that exhibits excellent performance toward ethylene production by accelerating carbon-carbon bond formation. The research team found that there were multiple catalytic sites on which carbon monoxide can be adsorbed on the surface of the catalyst derived from copper hydroxide and that carbon monoxide adsorbed on a specific site quickly forms an intermediate through carbon-carbon bond formation. Further research on this intermediate is expected to contribute significantly to the identification of the active sites for the carbon-carbon bond forming reaction, which has been a subject of debate.

"The success of this study is significant in that it has presented a key direction for basic research related to artificial photosynthesis that has been unexplored in Korea through a joint investigation by the research institute as well as the university," said Dr. Yun-Jeong Hwang of KIST. "Based on this, we will be able to contribute significantly to the growth of next-generation carbon resource conversion technology based on sustainable energy in response to climate change."

Credit: 
National Research Council of Science & Technology

Research reveals how a fungal infection activates inflammation

image: Benoit Briard, Ph.D. and Thirumala-Devi Kanneganti, Ph.D.

Image: 
St. Jude

Scientists at St. Jude Children's Research Hospital have identified the mechanisms behind inflammasome activation driven by infection with the fungal pathogen Aspergillus fumigatus. Fungal infection, especially with A. fumigatus, is a leading cause of infection-associated deaths in people with compromised immune systems. The work provides clues to a potential therapeutic approach for treating infectious and inflammatory disorders. The findings were published online today in Nature.

"Inflammasomes are important sentinels of an organism's innate immune defense system," said corresponding author and founding member of the inflammasome field Thirumala-Devi Kanneganti, Ph.D., of the St. Jude Immunology department. "Our prior work showed that fungal pathogens activate the inflammasome, but the exact mechanism of action for inflammasome engagement was unknown."

To understand these mechanisms for A. fumigatus, the scientists looked for pathogen-associated molecular patterns, which can stimulate the innate immune response by activating the inflammasome. The scientists focused on NLRP3, the most-studied inflammasome sensor.

The research identified galactosaminogalactan (GAG), a novel fungal pathogen-associated molecular pattern. GAG is essential for A. fumigatus-induced NLRP3 inflammasome activation. The scientists showed that A. fumigatus deficient in GAG fails to induce inflammasome activation. Conversely, over-production of GAG by A. fumigatus increases inflammasome activation.

Additionally, inflammasome activation is critical for clearing A. fumigatus infections in animals. The A. fumigatus fungal strain that failed to produce GAG was more virulent in mice, while the strain that over-produced GAG was less virulent.

Similarly, inflammasome activation is protective during gut inflammation in a mouse model of colitis, an inflammatory disease. Treatment with purified GAG provided protection against colitis.

"We showed that protection against this inflammatory disease was dependent on the ability of GAG to induce inflammasome activation," said first author Benoit Briard, Ph.D., formerly of St. Jude Immunology. "These findings demonstrate the mechanism for the therapeutic potential of GAG in inflammatory diseases."

Credit: 
St. Jude Children's Research Hospital

Academic dishonesty: Fear and justifications

Why do some students cheat by looking over someone's shoulder, furtively searching for test answers on the internet, using cheat sheets during exams or paying others to complete their coursework? How do they rationalise their behaviour to continue to think of themselves as decent people? A study (https://monitoringjournal.ru/index.php/monitoring/article/view/972) conducted by the HSE Centre for Sociology of Higher Education offers some answers.

Cheating Is Contagious

According to studies performed in many countries, the vast majority of students have at least once committed academic fraud such as plagiarism, using cheat sheets during exams, 'outsourcing' one's homework, sharing information between peers regarding test answers, etc. There are many reasons why academic dishonesty is so widespread.

Often students' perception of their peers' behaviour has an effect on the likelihood of cheating. Students who believe that most of their classmates do it are more inclined to cheat.

A recent study by the Centre for Sociology of Higher Education of the HSE Institute of Education suggests that cheating students use various mental strategies to rationalise and justify their dishonesty, indicating their awareness that cheating is wrong and their attempts to resolve an internal conflict.

Dremova (https://www.hse.ru/en/org/persons/211430993), Maloshonok (https://www.hse.ru/en/staff/maloshonok) and Terentiev (https://www.hse.ru/en/org/persons/13869964) interviewed a number of undergraduates in Russia and the U.K. at both highly selective (two in each country) and medium-level universities (one in each, in different regions), most of them large and multidisciplinary ones. The students interviewed were predominantly economics and business undergraduates whom other studies found to be more prone to academic fraud.

Cheating such as copying from other students' papers was found in both countries. But the study did not involve a cross-country comparison; rather, its main purpose was to provide a generalised classification of reasons why undergraduates may either judge or justify cheating and to suggest appropriate measures against academic fraud for various national contexts.

The researchers identified six main types of logic - or 'modes' - of dishonest conduct, based on Laurent Thévenot and Luc Boltanski's sociology of critical capacity.

Modes of Justification

In their seminal work De la justification: Les économies de la grandeur, Boltanski and Thévenot identify six 'modes' - or 'regimes' - of criticism and/or justification, listed below with examples from the sphere of academic dishonesty:

the inspiration mode, involving an emotional aspect, e. g. the study content evokes either interest or boredom;

the domestic (traditional) mode, instilled by family or school, e. g. cheating is considered unacceptable (or okay) in the family;

the opinion (reputation) mode, based on external assessment of one's actions, e. g. successful cheating is admired but being caught causes a student to lose points with peers;

the civic mode, which is community-driven, e. g. peer cover-up, (un)willingness to share assignments;

the market mode, seeking to obtain results at a relatively small cost;

the industrial (functional) regime, e. g. is there any benefit in taking a course? If none is expected, cheating is okay.

In a more recent paper, Boltanski and Eve Chiapello added a project-oriented mode, in which the equivalency principle is based on whether one is active and likely to initiate projects. This mode, however, can hardly be applied to academic dishonesty, because cheating and plagiarism are associated with precisely the opposite: an unwillingness to be active at school.

Interesting vs Boring

Being in the inspiration mode often means that the student is interested in the subject and finds it easy to engage with the teaching and learning materials and the teacher's presentation. Students who are motivated and passionate about a subject are not likely to cheat. 'Writing it on your own is better, because you are starting to really understand [the subject]', according to a respondent in Russia.

On the opposite end are negative feelings, such as extreme anxiety at the exam, fear of failure, boredom and aversion to the subject or to the teacher. Students experiencing such feelings are more likely to cheat and often rationalise their dishonesty by being too nervous, finding the subject too complicated and the teacher overly demanding, and saying that 'you cannot retain such a huge amount of information in your head anyway'.

According to a study participant, 'Some teachers give lectures in a monotonous manner, so following them is virtually impossible. Also, some teachers are not really involved in the process during seminars, and their students answer by reading out papers downloaded from the internet and no one cares'.

But sometimes people are motivated to be honest because they want to avoid negative feelings. 'I almost bought [an essay] once', says a Russian university undergraduate. 'But then I felt it was kind of shameful, humiliating. I do not consider myself too stupid to write an essay'.

As far as the inspiration mode is concerned, teachers need to know how to engage students in their subject, in particular by soliciting feedback from students about the content and delivery of the courses they take.

The researchers also advise teachers to consider using close supervision and strict sanctions to discourage students from cheating by creating negative emotional associations with dishonest behaviour.

Cheating Habits

In the traditional mode, dishonest conduct is either justified or rejected based on students' pre-existing attitudes. Thus, some undergraduates justify their cheating by saying that it was tolerated in their family or secondary school. 'When we come to university, we are already prepared to cheat, just like we had been doing for the 11 years before that', according to a Russian student.

Another respondent argues, '[We learn] all of this from adults, from our older sisters and brothers, from our parents, who tell us stories about getting stuff without paying or about outwitting someone. So you come to university and cheat to avoid studying hard, just as you did before'.

For other students, integrity is a value instilled by their family. 'I was raised to be honest', says a U.K. respondent. 'I want to be proud of the work I do and to be able to say that I did this myself'.

Since most undergraduates' attitudes are already well-established, there is not much a university can do to combat cheating justified by 'tradition'.

Success at Any Cost

In the reputation mode, other people's opinion is the main consideration for those who cheat to get a good grade or avoid a bad one. 'Given a chance to look up the right answer, I don't think anyone would miss out on it by saying that they never cheat on principle', according to a Russian undergraduate. Another reason to cheat is to avoid upsetting one's family.

A student from the U.K. explains, 'If our parents are only concerned about our academic performance and pressure gets too high, cheating looks like an increasingly interesting option'.

On the other hand, the fear of damaging one's reputation by being caught can discourage academic fraud. 'If your school finds out that you have been cheating, you will be punished', a U.K. respondent explains.

In order to respond to this type of justification, any academic success gained by cheating should be declared unacceptable and damaging to the cheater's reputation.

Common Good and Punishment

The civic mode is based on collectivism and informal rules established among peers. 'It's a matter of mutual help,' says a respondent from Russia. 'I allow you to copy from my paper and you allow me to copy from yours. Either all of us should avoid cheating or we all agree to cheat'.

A few other studies found that students often interpret cheating behaviour as acceptable peer support. 'If you peek at someone's paper just a little bit to compare your answers, I don't think of it as something bad', a respondent says. 'We need to help each other'.

Those who are against cheating often refer to broader responsibility before society. 'I believe that someone [who completes an assignment on behalf of someone else] harms society by enabling that person to get through university without gaining the knowledge'.

Sanctions imposed on the entire group rather than the individual cheater may be effective in dealing with this type of dishonest conduct. 'It's like in the army - one person messes up, the entire team is made to do push-ups together or mop the floor', according to one respondent.

Big Gain with Least Effort

In the market mode, students hope to achieve their goals with the least possible investment of time and effort. They rationalise dishonesty by saying that it is okay to save one's resources while still getting the desired result. 'The main thing [for some people] is to get a degree, so they choose to pay [for a term paper, a thesis or an essay]', a Russian respondent explains.

He is echoed by a U.K. undergraduate who says, 'If there were a really big difference between passing and failing, I would cheat because the cost of failure would be too high,' and summarises, 'Do whatever it takes to pass the exam'.

Another argument may be that cheating is tolerated in university. 'I am not aware of anyone getting kicked out [for cheating]. Everyone does it and everyone gets away with it, so why not me?', a Russian respondent says.

The attitude of the faculty can also play a role. According to some students, teachers prefer to look the other way because they will be worse off by exposing cheating. 'If they catch someone with a cheat sheet, they will need to reschedule the exam at the cost of their personal time - which perhaps will not be compensated'.

According to another respondent, 'If all your teacher needs is some kind of paper from you, I don't consider [cheating in this situation] to be academic fraud'.

But cheating can be risky. 'I was afraid to pull out and use my cheat sheet', a student admits. 'I knew that I could be kicked out of the exam if caught'.

Justifications of this type rely on the perceived balance between cost and benefit. Codes of conduct, group discussions and similar approaches to changing students' minds are not likely to work in this situation. 'Stricter supervision to make it more difficult for students to achieve their goals by cheating may be more effective', according to the researchers.

Career Benefits

In the industrial mode, students make decisions based on whether or not a course is likely to contribute to their future career. When cheating is perceived as an obstacle to useful learning, it is avoided, because cheaters make incompetent employees, as potential employers will quickly discover.

'Those who thoughtlessly download papers from the internet deny themselves the opportunity to think independently, to learn how to express their ideas, search for information and reorganise it', according to a Russian undergraduate. 'As a result, their degrees do not reflect their actual abilities'. Another respondent agrees that someone who refuses to complete assignments 'will simply fail to learn anything of value before their graduation', stressing that 'when studying is a tick-box exercise, one is not getting any real value out of it'.

On the other hand, some students justify academic dishonesty by claiming that the content of certain university courses has little relevance to their future occupation. 'A lot of words but not much meat - no practice, only theory', a respondent from Russia complains. Another undergraduate argues that 'academic fraud is okay' in a course which 'has no effect on your future' but is only needed for the degree.

Making courses interesting and relevant is a universal response to this and other rationalisations of dishonesty, because student engagement is the best cure for cheating. This may not work, however, in schools where academic fraud has acquired epidemic proportions to become the new norm.

According to the authors, more research could determine the prevalence of different cheating justification modes across universities and help design effective responses for each case.

Credit: 
National Research University Higher School of Economics

To increase organs available for transplant, reassess organ procurement organizations' metrics

ANN ARBOR, Mich. - Organ procurement organizations are a critical component of organ transplantation in the United States. But what makes an organ procurement organization high performing and able to get much-needed organs to those awaiting transplantation?

In a new paper, published in JAMA Surgery, researchers found the metrics used to rank organ procurement organizations don't create an even playing field for organizations, and lead to inaccuracies.

"The 58 organ procurement organizations throughout the U.S. coordinate placing organs from deceased donors with recipients," says Neehar Parikh, M.D., senior author of the paper and an assistant professor in the divisions of gastroenterology and hepatology at Michigan Medicine. "The organizations are evaluated by their efficiency in procuring organs and getting them to recipients by a variety of metrics, including eligible donors, donor demographics, donors per death and more."

Analyzing metrics

Parikh says the research team analyzed national organ transplant data from 2008 to 2017 and found a wide variation in organ procurement organizations' efficiency.

"High performing sites were pretty variable depending on which metric you chose," Parikh says. "Donors that do not meet eligibility criteria explained a large part of this variance. There was also variance in donation by race and ethnicity, which highlights the disparities in donation that may be due to lack of consent or other biases that may underutilize these organs."

The team proposed a combination of three metrics that would give a more accurate and equitable representation of organ procurement organization performance.

"The three metrics we are suggesting are donors per death, eligible donors per eligible death and eligible donors per donor, or the donor eligibility rate," says Luke DeRoos, lead author of the study and a Ph.D. student in the University of Michigan College of Engineering.

The researchers defined donors per death as the percentage of the population that donates an organ when deceased, while eligible donors per eligible death is a similar rate that only considers individuals that meet a list of broad health criteria.

"Since many organ transplants successfully use 'ineligible' donors outside of these health criteria, the donors per death and eligible donors per eligible death metrics can appear very different," DeRoos says.

The research team also noted that transplant centers use ineligible donors at vastly different rates, as demonstrated in the donor eligibility rate, which measures the percentage of donors that meet all health criteria. The researchers say this rate can highlight which centers have consistently identified traditionally ineligible donors as good candidates for donation, which could lead to best practice sharing.
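To make the relationship among the three proposed metrics concrete, here is a small illustrative calculation; the counts are hypothetical and are not drawn from the study's data.

```python
# Hypothetical counts for a single organ procurement organization (OPO),
# invented purely to show how the three proposed metrics are computed.
deaths = 10_000          # in-area deaths
eligible_deaths = 400    # deaths meeting the broad health criteria
donors = 260             # actual deceased donors
eligible_donors = 180    # donors who met the eligibility criteria

donors_per_death = donors / deaths
eligible_donors_per_eligible_death = eligible_donors / eligible_deaths
donor_eligibility_rate = eligible_donors / donors   # eligible donors per donor

print(f"donors per death:                   {donors_per_death:.2%}")
print(f"eligible donors per eligible death: {eligible_donors_per_eligible_death:.1%}")
print(f"donor eligibility rate:             {donor_eligibility_rate:.1%}")
```

In this made-up example, a low donor eligibility rate would indicate an organization that frequently recovers organs from donors outside the standard criteria, which is exactly the behavior the researchers suggest could be identified and shared as best practice.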

"By using a consistent definition of eligibility and comparing these metrics against each other, you gain more advanced insight into how eligibility definition greatly impacts the procurement organizations' overall ratings," DeRoos says.

Parikh adds, "These complementary metrics would give a better picture of how the organizations are performing, and could allow for quality improvement with the goal of increasing the number of organs available for transplantation."

Future evaluation of organ donation system

The research team hopes this study will help with overall evaluations of the national organ donation system.

"We have a collaborative research group with the U-M College of Engineering and the U-M School of Public Health that is interested in exploring the national organ donation system and ways to improve it," Parikh says.

The Centers for Medicare & Medicaid Services announced new organ procurement metrics in November 2020. They will now use the number of transplanted organs from an organ procurement organization's donor service area as a percentage of inpatient deaths among patients 75 years old or younger with a primary cause of death consistent with organ donation.

"There have been efforts by the federal government to evaluate and overhaul the organ donation system, as evidenced by the recently announced change in the way organ procurement organizations will be evaluated," Parikh says. "Our findings give a snapshot in the variance in performance depending on which metrics you choose."

Credit: 
Michigan Medicine - University of Michigan

Incredible vision in ancient marine creatures drove an evolutionary arms race

image: An artist's reconstruction of 'Anomalocaris' briggsi swimming within the twilight zone.

Image: 
Katrina Kenny

Ancient deep sea creatures called radiodonts had incredible vision that likely drove an evolutionary arms race according to new research published today.

The international study, led by Professor John Paterson from the University of New England's Palaeoscience Research Centre, in collaboration with the University of Adelaide, the South Australian Museum and The Natural History Museum (UK), found that radiodonts developed sophisticated eyes over 500 million years ago, with some adapted to the dim light of deep water.

"Our study provides critical new information about the evolution of the earliest marine animal ecosystems," Professor Paterson said. "In particular, it supports the idea that vision played a crucial role during the Cambrian Explosion, a pivotal phase in history when most major animal groups first appeared during a rapid burst of evolution over half a billion years ago."

Radiodonts, meaning "radiating teeth", are a group of arthropods that dominated the oceans around 500 million years ago. The many species share a similar body layout comprising a head with a pair of large, segmented appendages for capturing prey, a circular mouth with serrated teeth, and a squid-like body. It now seems likely that some lived at depths down to 1000 metres and had developed large, complex eyes to compensate for the lack of light in this extreme environment.

"When complex visual systems arose, animals could better sense their surroundings," Professor Paterson explained. "That may have fuelled an evolutionary arms race between predators and prey. Once established, vision became a driving force in evolution and helped shape the biodiversity and ecological interactions we see today."

Some of the first radiodont fossils discovered over a century ago were isolated body parts, and initial attempts at reconstructions resulted in "Frankenstein's monsters".

But over the past few decades many new discoveries -- including whole radiodont bodies -- have given a clearer picture of their anatomy, diversity and possible lifestyles.

Co-author, Associate Professor Diego García-Bellido from the University of Adelaide and South Australian Museum, said the rich treasure trove of fossils at Emu Bay Shale on South Australia's Kangaroo Island in particular has helped to build a clearer picture of Earth's earliest animals.

"The Emu Bay Shale is the only place in the world that preserves eyes with lenses of Cambrian radiodonts. The more than thirty specimens of eyes we now have, has shed new light on the ecology, behaviour and evolution of these, the largest animals alive half-a-billion years ago," A/Prof. García-Bellido said.

In 2011, the team published two papers in the journal Nature on fossil compound eyes from the 513-million-year-old Emu Bay Shale on Kangaroo Island.

The first paper on this subject documented isolated eye specimens of up to one centimetre in diameter, but the team were unable to assign them to a known arthropod species. The second paper reported the stalked eyes of Anomalocaris, a top predator up to one metre in length, in great detail.

"Our new study identifies the owner of the eyes from our first 2011 paper: 'Anomalocaris' briggsi --representing a new genus that is yet to be formally named," Prof. Paterson said.

"We discovered much larger specimens of these eyes of up to four centimetres in diameter that possess a distinctive 'acute zone', which is a region of enlarged lenses in the centre of the eye's surface that enhances light capture and resolution."

The large lenses of 'Anomalocaris' briggsi suggest that it could see in very dim light at depth, similar to amphipod crustaceans, a type of prawn-like creature that exists today. The frilly spines on its appendages filtered plankton that it detected by looking upwards.

Dr Greg Edgecombe, a researcher at The Natural History Museum, London and co-author of the study, added that the South Australian radiodonts show the different feeding strategies previously indicated by the appendages - either for capturing or filtering prey - are paralleled by differences in the eyes.

"The predator has the eyes attached to the head on stalks but the filter feeder has them at the surface of the head. The more we learn about these animals the more diverse their body plan and ecology is turning out to be," Dr Edgecombe said.

"The new samples also show how the eyes changed as the animal grew. The lenses formed at the margin of the eyes, growing bigger and increasing in numbers in large specimens - just as in many living arthropods. The way compound eyes grow has been consistent for more than 500 million years."

Credit: 
University of Adelaide

RUDN University physicists described a new type of amorphous solid bodies

image: Amorphous solid carbon has no fixed crystal structure and is built from nanosized graphene particles; physicists from RUDN University propose classifying it as a separate type of amorphous solid body: a molecular amorphic with enforced fragmentation.

Image: 
RUDN University

Many substances with different chemical and physical properties, from diamonds to graphite, are made up of carbon atoms. Amorphous forms of solid carbon do not have a fixed crystal structure and consist of structural units--nanosized graphene particles. A team of physicists from RUDN University studied the structure of amorphous carbon and suggested classifying it as a separate type of amorphous solid bodies: a molecular amorphic with enforced fragmentation. The results of the study were published in the Fullerenes, Nanotubes and Carbon Nanostructures journal.

Solid carbon has many allotropic modifications. This means that substances with different chemical and physical properties can be built from one and the same atoms arranged in different structures. The variety of carbon allotropes is due to the special properties of its atoms, namely their unique ability to form single, double, and triple valence bonds. If, due to certain reaction conditions, only single bonds are formed (i.e. the so-called sp3-hybridization takes place), solid carbon has the shape of a three-dimensional grid of tetrahedrons, i.e. a diamond. If the conditions are favorable for the formation of double bonds (sp2-hybridization), solid carbon takes the form of graphite--a structure of flat layers made of honeycomb-like hexagonal cells. Individual layers of this solid are called graphene. These two types of solid carbon structures are observed both in ordered crystals and in non-ordered amorphous bodies. Solid carbon is widespread in nature, both as crystalline rock (graphite or diamond) deposits and in amorphous form (brown and black coal, shungite, anthraxolite, and other minerals).

Unlike its crystalline form, natural amorphous carbon belongs to the sp2 type. A major study of the structure and elemental composition of sp2 amorphous carbon was conducted at the initiative and with the participation of a team of physicists from RUDN University. In the course of the study, the team also took spectral measurements using photoelectron spectroscopy, inelastic neutron scattering, infrared absorption, and Raman scattering. Based on the results of the study, the team concluded that sp2 amorphous carbon is a fractal structure based on nanosized graphene domains that are surrounded by atoms of other elements (hydrogen, oxygen, nitrogen, sulfur, and so on). With this hypothesis, the team virtually rewrote the history of amorphous carbon, a material that has been known to humanity since the first-ever man-made fire.

"The discovery and experimental confirmation of the graphene nature of the 'black gold' will completely change the theory, modeling, and interpretation of experiments with this class of substances. However, some questions remain unanswered. What does solid-state physics make of this amorphous state of solid carbon? What role does amorphous carbon with sp2-hybridization play in the bigger picture? We tried to find our own answers," said Elena Sheka, a Ph.D. in Physics and Mathematics, and a Consulting Professor at the Faculty of Physics and Mathematics and Natural Sciences, RUDN University.

The team spent two years thoroughly studying the nature of amorphous carbon. Other results of this ambitious project were published in Fullerenes, Nanotubes and Carbon Nanostructures; the Journal of Physical Chemistry C; the Journal of Non-Crystalline Solids; and Nanomaterials. Together, these works confirm a breakthrough achieved by the physicists of RUDN University in this complex field of physics.

"We have analyzed many studies on amorphous sp2 carbon from the point of view of our general understanding of amorphous solid bodies. Based on our research, we can confirm that it belongs to a new type of amorphous substances," added Elena Sheka from RUDN University.

Credit: 
RUDN University

Cost of planting, protecting trees to fight climate change could jump

Planting trees and preventing deforestation are considered key climate change mitigation strategies, but a new analysis finds the cost of preserving and planting trees to hit certain global emissions reductions targets could accelerate quickly.

In the analysis, researchers from RTI International (RTI), North Carolina State University and Ohio State University report costs will rise steeply under more ambitious emissions reductions plans. By 2055, they project it would cost as much as $393 billion per year to pay landowners to plant and protect enough trees to achieve more than 10 percent of total emissions reductions that international policy experts say are needed to restrict climate change to 1.5 degrees Celsius. The findings were published today in the journal Nature Communications.

"The global forestry sector can provide a really substantial chunk of the mitigation needed to hit global climate targets," said Justin Baker, co-author of the study and associate professor of forest resource economics at NC State. "The physical potential is there, but when we look at the economic costs, they are nonlinear. That means that the more we reduce emissions - the more carbon we're sequestering - we're paying higher and higher costs for it."

The researchers note that the Intergovernmental Panel on Climate Change expects forestry to play a critical role in mitigating climate change. To analyze the cost of preserving forest, preventing harvest and deforestation, and planting trees, the researchers used a price model called the Global Timber Model. That model estimates the costs of preserving trees in private forests owned and managed by companies for harvesting for pulp and paper products, as well as on publicly owned land, such as U.S. national parks.

"Protecting, managing and restoring the world's forests will be necessary for avoiding dangerous impacts of climate change, and have important co-benefits such as biodiversity conservation, ecosystem service enhancement and protection of livelihoods," said Kemen Austin, lead author of the study and senior policy analyst at RTI. "Until now, there has been limited research investigating the costs of climate change mitigation from forests. Better understanding the costs of mitigation from global forests will help us to prioritize resources and inform the design of more efficient mitigation policies."

The researchers estimated it would cost $2 billion per year to prevent 0.6 gigatons of carbon dioxide from being released by 2055. Comparatively, $393 billion annually would sequester 6 gigatons, or the equivalent of emissions from nearly 1.3 billion passenger vehicles driven for one year, according to the U.S. Environmental Protection Agency's Greenhouse Gas Equivalencies Calculator.
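The nonlinearity is visible in the implied price per ton of carbon dioxide, a back-of-the-envelope calculation using only the figures quoted above:

```python
# Back-of-the-envelope cost per ton of CO2 from the figures quoted in this article.
low_cost, low_gt = 2e9, 0.6       # $2 billion per year sequesters 0.6 gigatons
high_cost, high_gt = 393e9, 6.0   # $393 billion per year sequesters 6 gigatons

print(f"low-ambition scenario:  ${low_cost / (low_gt * 1e9):.2f} per ton of CO2")
print(f"high-ambition scenario: ${high_cost / (high_gt * 1e9):.2f} per ton of CO2")
# Roughly $3 versus $65 per ton: sequestering ten times more carbon costs about
# twenty times more per ton, i.e. the cost curve is strongly convex.
```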

"It's not clear from these results that you'll have consistent low-cost mitigation from the global forest sector as other studies have indicated," Baker said.

The tropics are expected to play the biggest role in reducing emissions, with Brazil - the country that contains the largest share of the Amazon rainforest - the Democratic Republic of Congo, and Indonesia contributing the most. The tropics are projected to provide between 72 and 82 percent of total global mitigation from forestry in 2055.

The researchers also found that forest management in temperate regions, such as forestland in the southern United States, will play a significant role, especially under higher price scenarios. They expect that afforestation, which is introducing trees to areas that are not actively in forest, and managing existing forestland will be important strategies in the United States.

Credit: 
North Carolina State University

National Autism Indicators Report: health and health care of individuals with autism

People on the autism spectrum face barriers to comprehensive care that may cause their health and quality of life to be worse than that of their peers. While some people may be predisposed to worse health, preventive services and comprehensive health care can go a long way in improving the trajectory of health throughout their lives.

In the recently published sixth report in the National Autism Indicators Report series, researchers from Drexel University's A.J. Drexel Autism Institute highlight a holistic picture of what health and health care look like across the life course for people on the autism spectrum.

"Health and health care are critical issues for many children and adults on the autism spectrum," said Lindsay Shea, DrPH, director of the Policy and Analytics Center at the Autism Institute and interim leader of the Life Course Outcomes Research Program, an associate professor and co-author of the report. "They may experience more frequent use of services and medications. They may need more types of routine and specialty healthcare. And their overall health and mental health care tends to be more complex than people with other types of disabilities and special health care needs."

The complexity of a person's health needs depends, in part, on the number of issues they need help with and whether they have access to the types and quality of care they need. The health care journey may be less complicated if people are able to access care that is integrated across health and mental health, if they have fewer unmet needs, and if their care is coordinated when needed.

Unfortunately, the current health care system often fails to adequately address the needs of people on the spectrum. As a result, there is more frequent need for emergency health care and hospitalization. The gaps in health care for people with autism, versus those with other disabilities, are important to address.

"This is one of our most ambitious reports to date. We combined data from several national and regional sources to understand the health and health care of people on the autism spectrum of all ages," said Jessica Rast, a research associate in the Autism Institute and lead author of the report. "We need to understand health and health care needs across the life course so that recommendations can be made about how to improve health and health care at critical points across a person's life."

Children and adults with ASD have a lot of health concerns and complex health care needs

Researchers found that when parents were asked whether their child had certain health conditions, children with ASD had higher rates of every single condition except asthma, compared to other children with special health care needs. Conditions included learning disability, behavior or conduct problems, attention deficit hyperactivity disorder (ADHD), speech or language disorder and anxiety.

Similarly, in another set of data, adults with ASD had higher rates of many conditions than a random sample of other adults. Adults with ASD were two to three times as likely to have depression or anxiety, compared to adults without ASD and were also far more likely to have hypertension, epilepsy, ADHD, bipolar disorder or schizophrenia.

When it comes to paying for health care, almost half (46%) of parents of children with ASD reported that their child's insurance did not always cover the services they needed. One-fifth of parents of children with ASD reported avoiding changing jobs because of concerns about maintaining health insurance for their child - five times the rate of parents of children with no special health care needs.

Because of the complexity of care, parents either sought help or felt they needed more help with coordinating health care. Of parents whose child with ASD had more than one health care appointment in the past 12 months, 28% reported someone helped coordinate or arrange care among different providers, similar to parents of children with other special health care needs. Another 30% of parents of children with ASD reported they could use more help coordinating care - more than parents of other children with special health care needs.

Health and health care varied by race and ethnicity in children with ASD

Researchers discovered that certain conditions varied in prevalence by race and ethnicity in children with ASD. ADHD and anxiety were more commonly reported in white, non-Hispanic children than in children of any other race or ethnicity. Asthma and developmental delays were the most common in black, non-Hispanic children.

Health care use also varied by race and ethnicity. Hispanic children with ASD were the least likely to have regular appointments for health and dental health. Hispanic children and black, non-Hispanic children were the least likely to have a usual source of care, compared to white, non-Hispanic children and non-Hispanic children of another or multiple races. Black, non-Hispanic children with ASD were the least likely to have family-centered care, an important component of effective, comprehensive health care. And they had the lowest rates of effective care coordination.

Researchers say acknowledging these disparities is only the first step to combating them. Future research must work to reduce them. The reliance on public health insurance (Medicaid), especially among underserved and underrepresented groups, suggests that research focused in these systems should be an area of emphasis. Economic disparities are also of particular concern in this population, where costly services and insurance inadequacies are common.

"Our current health care approaches are not up to task, we need systems-wide improvement focused on holistic care," said Shea.

More research is needed to determine how to improve systems and services to increase levels of health and decrease health care burden, but researchers provided a few suggestions for moving forward.

Care coordination is a key to improving health in people with complex conditions like autism. Interdisciplinary strategies should be explored and evaluated.

"For example, the reasons that children with ASD have worse dental health than their peers could include lack of insurance coverage, a need for behavioral supports for successful office visits and daily hygiene, dietary issues linked to dental health, or lack of accommodations at the dental office," said Rast. "If these all impact dental health, they must all be included in a plan to improve dental health."

The latest report in the series, "National Autism Indicators Report: Health and Health Care," combines data from two national surveys about health, The National Survey of Children's Health (NSCH) and the Medical Expenditure Panel Survey, to examine health and health care in children on the autism spectrum; one national sample of hospital inpatient stays, The National Inpatient Sample (NIS), to examine hospital inpatient stays in all ages of people on the autism spectrum; and previously published findings from Kaiser Permanente Northern California (KPNC) patient records, which add a vital source of information on adult health and health care.

Credit: 
Drexel University

Football-loving states slow to enact youth concussion laws

PULLMAN, Wash. - States with college teams in strong conferences, in particular the Southeastern Conference (SEC), were among the last to take up regulations on youth concussions, according to a recent study. The study, which investigated the association between youth sport participation and passage of concussion legislation, uncovered the importance of SEC affiliation, and found a similar connection in states with high rates of high school football participation.

In contrast, states with higher gender equality, measured by the number of women in the labor force, were early adopters.

Washington State University sociologists Thomas Rotolo and Michael Lengefeld, a recent WSU Ph.D. now at Goucher College, analyzed the wave of youth concussion laws from 2007 to 2014, specifically looking at return-to-play guidelines: a mandated 24-hour wait period before sending a player with a possible concussion back onto the field.

"We explored a lot of different ways of measuring college football presence, and the thing that just kept standing out was SEC membership," said Rotolo, the lead author on the study published in the journal Social Science & Medicine. "Every college town thinks they have a strong college football presence, but the SEC is a very unique conference."

Co-author Lengefeld, a former high school football player from Texas, knows first-hand how important the sport is throughout the South, but the data showed that resistance to youth concussion regulations was concentrated among SEC states in particular.

"This SEC variable was similar to the South effect, but not all southern states have an SEC school--and in SEC states the resistance to concussion laws was a bit stronger," he said.

Lengefeld added that the SEC also stands out since it has the largest number of viewers and brings in more profits than any other conference.

Scientists have known for more than a century that youth concussions are a serious health issue, but the movement to create concussion health policies for youth sports did not gain any ground until a Washington state middle school player was badly injured. In 2006, Zackery Lystedt was permanently disabled after being sent back onto the field following a concussion. The Seattle Seahawks took up the cause in the state, followed by the NFL, which took the issue nationwide.

Even though the NFL advocated for youth concussion policy changes, the states responded differently. Washington state, Oregon and New Mexico were among the first to adopt the new return-to-play guidelines, while states like Georgia and Mississippi were among the last.

"There's clearly something culturally going on that was different in those states," said Lengefeld.

The researchers also investigated the role of gender equity in the adoption of concussion laws, since football is often viewed as hyper-masculine. Using women's participation in the labor market as a rough indicator of a state's gender-egalitarian views, they found a statistically significant pattern: states with higher levels of women's labor market participation enacted the concussion legislation more quickly.
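To make this kind of analysis concrete, below is a minimal sketch of how a time-to-adoption (event-history) model for state policy could be set up in Python with the lifelines library. The state data, variable names, and model specification are hypothetical illustrations, not the authors' actual dataset or method.

```python
# Sketch of an event-history (survival) analysis of policy adoption.
# Data and covariates below are invented for illustration only.
import pandas as pd
from lifelines import CoxPHFitter

# One row per state: years until a return-to-play law was enacted (2007 = year 0),
# whether it was enacted by 2014, and two illustrative covariates.
states = pd.DataFrame({
    "years_to_adoption": [2, 3, 5, 7, 7, 4, 6, 7],
    "adopted_by_2014":   [1, 1, 1, 1, 0, 1, 1, 0],
    "sec_state":         [0, 0, 0, 1, 1, 0, 0, 1],   # 1 if home to an SEC school
    "women_labor_pct":   [61.0, 59.5, 63.2, 54.1, 52.8, 60.4, 58.7, 53.5],
})

# Small ridge penalty stabilizes the fit on this tiny toy dataset.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(states, duration_col="years_to_adoption", event_col="adopted_by_2014")
cph.print_summary()
# A negative coefficient on sec_state indicates a lower "hazard" of adopting,
# i.e. slower adoption; a positive coefficient on women_labor_pct indicates
# faster adoption in states with higher women's labor force participation.
```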

Lengefeld said the methodology used in this study can also be applied to analyze how other health policies spread across states.

"As we were submitting this research for publication, COVID-19 was just starting, and we noticed all the differences in the way states are behaving," Lengefeld said. "It's not new for sociologists to study the diffusion of laws at the state level, but this is another way of doing that that incorporates a set of ideas about culture."

Credit: 
Washington State University

Transportation of water into the deep Earth by Al-phase D

image: (a) Shear velocity contrast between the Al-rich hydrous layer (including Al-phase D) and the dry mantle for two model compositions: hydrous pyrolite (h-pyrolite) and hydrous harzburgite (h-Harzburgite). (b) Hypothetical mechanisms of water transportation in the subduction zone from the shallower lithosphere to the uppermost lower mantle by hydrogen transfer between hydrous phases and melts (modified from Pamato et al., 2014).

Image: 
Ehime University

Since the discovery of a water-bearing ringwoodite specimen trapped in a superdeep diamond from Brazil by Pearson et al. in 2014 (published in Nature), there has been renewed interest in finding and characterizing the potential carrier and host minerals of water in the deep Earth's interior. Among the candidate minerals, Dense Hydrous Magnesium Silicates (DHMSs) are considered primary water carriers from the shallow lithosphere down to the mantle transition zone (MTZ; 410-660 km in depth), but because of their relative instability against pressure (P) and temperature (T), DHMSs were generally associated with the presence of water only down to the middle part of the MTZ.

However, an experimental study also published in 2014, in the journal Nature Geoscience, showed that when aluminum is incorporated into DHMSs, their stability against P and T improves drastically, allowing these minerals to transport and host water down to depths of 1200 km in the lower mantle (Pamato et al., 2014). Their experiments showed that the aluminum-bearing DHMS mineral called Al-phase D is likely to form at uppermost lower-mantle P and T conditions, from the recrystallization of hydrous melt at the boundary between the mantle and the subducted slab. Although this reaction was established by laboratory experiments, there were no direct measurements of the sound velocities of Al-phase D, and it was therefore difficult to associate the presence of Al-rich hydrated rocks with seismic observations at the bottom of the MTZ and in the uppermost lower mantle.

The researchers at Ehime successfully measured the longitudinal (VP) and shear (VS) velocities, as well as the density, of Al-phase D up to 22 GPa and 1300 K by means of synchrotron X-ray techniques combined with in situ ultrasonic measurements at high P and T, using the multi-anvil apparatus at beamline BL04B1 of SPring-8 (Hyogo, Japan). The results provide a clear picture of the sound velocities of Al-phase D over a wide P and T range, allowing the seismic velocities of hydrous rocks in the inner and outer parts of the subducted slab to be modelled (Image 1). From these models they showed that the presence of an Al-rich hydrous layer including Al-phase D in the uppermost lower mantle would be associated with negative VS perturbations (-1.5%), while the corresponding VP variations (-0.5%) would remain below the detection limit of seismological techniques. These new data should greatly contribute to tracing the existence and recycling of formerly subducted lithospheric crust, and eventually the presence of water, in the Earth's lower mantle.
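As a rough illustration of what a reported velocity perturbation means, the short sketch below converts an assumed reference shear velocity into the -1.5% contrast quoted above; the numerical velocity value is a placeholder, not a result from the study.

```python
# Back-of-the-envelope illustration of a fractional velocity perturbation.
def perturbation_pct(v_layer, v_reference):
    """Velocity contrast of a layer relative to a reference mantle, in percent."""
    return 100.0 * (v_layer - v_reference) / v_reference

v_s_dry = 6.30                        # km/s, assumed dry-mantle shear velocity (placeholder)
v_s_hydrous = v_s_dry * (1 - 0.015)   # a -1.5% VS reduction, as reported for the hydrous layer

print(f"VS contrast: {perturbation_pct(v_s_hydrous, v_s_dry):+.1f}%")  # prints -1.5%
```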

Credit: 
Ehime University

Seismic guidelines underestimate impact of 'The Big One' on metro Vancouver buildings

Scientists examining the effects of a megathrust earthquake in the Pacific Northwest say tall buildings across Metro Vancouver will experience greater shaking than currently accounted for by Canada's national seismic hazard model.

The region lies above the Georgia sedimentary basin, which is made up of layers of glacial and river sediments sitting on top of sedimentary rock. In the event of an earthquake, it would jiggle and amplify the seismic waves, causing more intense and longer-lasting tremors. However, the amplification caused by the sedimentary basin is not explicitly accounted for in the 2015 seismic hazard model, which informs Canada's national building code.

The latest U.S. national seismic hazard model now explicitly accounts for sedimentary basin amplification, but Canada's latest seismic hazard model, released this October, still doesn't, says lead researcher Carlos Molina Hutt, a structural and earthquake engineering professor at UBC.

"As a result, we're underestimating the seismic hazard of a magnitude-9 earthquake in Metro Vancouver, particularly at long periods. This means we're under-predicting the shaking that our tall buildings will experience," he warned. "Fortunately, Natural Resources Canada, responsible for the development of our national seismic hazard model, recognizes the potential importance of basin effects in certain parts of Vancouver and is actively reviewing and participating in research on the topic. They intend to address basin effects in the next seismic hazard model."

Using physics-based computer simulations, the researchers found that regions where the Georgia Basin is deepest will have the greatest seismic amplification. Delta and Richmond will experience the most amplification, followed by Surrey, New Westminster, Burnaby, Vancouver and North Vancouver. West Vancouver, which sits just outside the basin, will have the least.

Older, tall buildings at greater risk

The researchers also evaluated the impact of the magnitude-9 simulations on tall reinforced concrete shear wall buildings, of which there are more than 3,000 located in the Lower Mainland. They found that those built to building codes from the 1980s and earlier are at the greatest risk of severe damage or even collapse, with buildings in the 10- to 20-storey range experiencing the worst impacts.

"We have these pockets of tall buildings within the Georgia Basin--in Vancouver, Burnaby, Surrey and New Westminster. In general, based on a comparison of the code requirements in the past versus the code requirements now, many of our older buildings are vulnerable to these large earthquakes, particularly if we consider the amplification effect of the Georgia Basin," said Molina Hutt. The differences in expected performance between new buildings and older constructions reflects continuous improvements in seismic hazard estimates and engineering design provisions.

"When we build a structure, it only needs to meet the code of the time when it was built. If there is a future change in the code, you don't have to go back and upgrade your building. To address vulnerable existing buildings, jurisdictions must explore different seismic risk reduction policy options and adopt the most effective mitigation strategies," Molina Hutt added.

The study, published recently in Earthquake Engineering & Structural Dynamics, notes that concrete is the predominant construction material for buildings taller than eight storeys in the city of Vancouver, constituting 90 per cent of a total of 752 buildings identified. Of these, more than 300 are reinforced concrete shear wall constructions that pre-date 1980.

"Typically, people think that, if we have a magnitude-9 Cascadia subduction zone earthquake, it will be worse in Victoria, because they're closer to the seismic source. But the reality is that, for tall buildings, we're going to be worse off in Vancouver, because this basin amplifies the shaking in taller structures," Molina Hutt noted. The probability of a magnitude 8 or 9 Cascadia earthquake is estimated to be 14 per cent in the next 50 years.

"We're collaborating closely with our neighbours to the south, who are taking active steps to account for these basin amplification effects," said Molina Hutt. "Our work attempts to assess the impacts of neglecting these effects so we can appreciate their significance and take action."

Credit: 
University of British Columbia

An escape route for seafloor methane

Methane, the main component of natural gas, is the cleanest-burning of all the fossil fuels, but when emitted into the atmosphere it is a much more potent greenhouse gas than carbon dioxide. By some estimates, seafloor methane contained in frozen formations along the continental margins may equal or exceed the total amount of coal, oil, and gas in all other reservoirs worldwide. Yet, the way methane escapes from these deep formations is poorly understood.

In particular, scientists have been faced with a puzzle. Observations at sites around the world have shown vigorous columns of methane gas bubbling up from these formations in some places, yet the high pressure and low temperature of these deep-sea environments should create a solid frozen layer that would be expected to act as a kind of capstone, preventing gas from escaping. So how does the gas get out?

A new study helps explain how and why columns of the gas can stream out of these formations, known as methane hydrates. Using a combination of deep-sea observations, laboratory experiments, and computer modeling, researchers have found phenomena that explain and predict the way the gas breaks free from the icy grip of a frozen mix of water and methane. The findings are reported today in the journal PNAS, in a paper by Xiaojing (Ruby) Fu SM '15, PhD '17, now at the University of California at Berkeley; Professor Ruben Juanes at MIT; and five others in Switzerland, Spain, New Mexico, and California.

Surprisingly, not only does the frozen hydrate formation fail to prevent methane gas from escaping into the ocean column, but in some cases it actually facilitates that escape.

Early on, Fu saw photos and videos showing plumes of methane, taken from a NOAA research ship in the Gulf of Mexico, revealing the process of bubble formation right at the seafloor. It was clear that the bubbles themselves often formed with a frozen crust around them, and would float upward with their icy shells like tiny helium balloons.

Later, Fu used sonar to detect similar bubble plumes from a research ship off the coast of Virginia. "This cruise alone detected thousands of these plumes," says Fu, who led the research project while a graduate student and postdoc at MIT. "We could follow these methane bubbles encrusted by hydrate shells into the water column," she says. "That's when we first knew that hydrate forming on these gas interfaces can be a very common occurrence."

But exactly what was going on beneath the seafloor to trigger the release of these bubbles remained unknown. Through a series of lab experiments and simulations, the researchers gradually pieced together the mechanisms at work.

Seismic studies of the subsurface of the seafloor in these vent regions show a series of relatively narrow conduits, or chimneys, through which the gas escapes. But the presence of chunks of gas hydrate from these same formations made it clear that the solid hydrate and the gaseous methane could co-exist, Fu explains. To simulate the conditions in the lab, the researchers used a small two-dimensional setup, sandwiching a gas bubble in a layer of water between two plates of glass under high pressure.

As a gas tries to rise through the seafloor, Fu says, if it's forming a hydrate layer when it hits the cold seawater, that should block its progress: "It's running into a wall. So how would that wall not be preventing it from continuous migration?" Using the microfluidic experiments, they found a previously unknown phenomenon at work, which they dubbed crustal fingering.

If the gas bubble starts to expand, "what we saw is that the expansion of the gas was able to create enough pressure to essentially rupture the hydrate shell. And it's almost like it's hatching out of its own shell," Fu says. But instead of each rupture freezing back over with the reforming hydrate, the hydrate formation takes place along the sides of the rising bubble, creating a kind of tube around the bubble as it moves upward. "It's almost like the gas bubble is able to chisel out its own path, and that path is walled by the hydrate solid," she says. This phenomenon, observed at small scale in the lab, is also what their analysis suggests would happen at much larger scale in the seafloor.

That observation, she said, "was really the first time we've been aware of a phenomenon like this that could explain how hydrate formation will not inhibit gas flow, but rather in this case, it would facilitate it," by providing a conduit and directing the flow. Without that focusing, the flow of gas would be much more diffuse and spread out.

As the crust of hydrate forms, it slows down the formation of more hydrate because it forms a barrier between the gas and the seawater. The methane below the barrier can therefore persist in its unfrozen, gaseous form for a long time. The combination of these two phenomena -- the focusing effect of the hydrate-walled channels and the segregation of the methane gas from the water by a hydrate layer -- "goes a long way toward explaining why you can have some of this vigorous venting, thanks to the hydrate formation, rather than being prevented by it," says Juanes.

A better understanding of the process could help in predicting where and when such methane seeps will be found, and how changes in environmental conditions could affect the distribution and output of these seeps. While there have been suggestions that a warming climate could increase the rate of such venting, Fu says there is little evidence of that so far. She notes that temperatures at the depths where these formations occur -- 600 meters (about 2,000 feet) deep or more -- are expected to experience a smaller temperature increase than would be needed to trigger a widespread release of the frozen gas.

Some researchers have suggested that these vast undersea methane formations might someday be harnessed for energy production. Though there would be great technical hurdles to such use, Juanes says, these findings might help in assessing the possibilities.

"The problem of how gas can move through the hydrate stability zone, where we would expect the gas to be immobilized by being converted to hydrate, and instead escape at the seafloor, is still not fully understood," says Hugh Daigle, an associate professor of petroleum and geosystems engineering at the University of Texas at Austin, who was not associated with this research. "This work presents a probable new mechanism that could plausibly allow this process to occur, and nicely integrates previous laboratory observations with modeling at a larger scale."

"In a practical sense, the work here takes a phenomenon at a small scale and allows us to use it in a model that only considers larger scales, and will be very useful for implementing in future work," Daigle says.

Credit: 
Massachusetts Institute of Technology

Sustainable regenerated isotropic wood

image: Schematic of the bottom-up approach to regenerate isotropic wood.

Image: 
Science China Press

Since plastic was invented in the late 19th century, it has been changing human lives. Plastic gave us a lightweight, strong and inexpensive material that greatly facilitates daily life. Petroleum-based plastics play a critical role in human lives, but the flip side of the coin is that they have an increasingly negative impact on the environment and human health. It is unclear how long it takes for plastics to completely biodegrade into their constituent molecules; estimates range from 450 years to never. Because plastics are so hard to break down, waves and sunlight instead wear them into tiny bits, so-called microplastics, which may be floating around the world's oceans, sponging up toxins, waiting to be eaten by some hapless fish or oyster, and ultimately perhaps by one of us. To alleviate the hazards of plastic pollution, constructing sustainable plastic substitutes from all-green (i.e., 100% bio-based) basic building blocks is a promising alternative.

Now, a team led by Prof. Shu-Hong Yu from the University of Science and Technology of China (USTC) reports a high-performance sustainable regenerated isotropic wood (RGI-wood), constructed from surface-nanocrystallized wood particles (SNWP) by an efficient bottom-up strategy with micro/nanoscale structure design (Figure 1). Through surface nanocrystallization, numerous cellulose nanofibers extend from the surface of the wood sawdust, significantly improving the properties of the wood particles. The resulting RGI-wood overcomes the anisotropy, inconsistent mechanical properties, and flammability of natural wood, making it a strong competitor to petroleum-based plastics. Mass production of large-sized RGI-wood can be achieved, overcoming the scarcity of large-sized natural wood. Moreover, through this bottom-up strategy, a series of functional RGI-wood nanocomposites can also be prepared, which show great potential in diverse applications.

Thanks to the strong interactions between SNWP, high-performance RGI-wood can be obtained by direct pressing. Through surface nanocrystallization of the wood particles, RGI-wood overcomes the anisotropic and inconsistent mechanical properties of natural wood, achieving an isotropic flexural strength of ~170 MPa and a flexural modulus of ~10 GPa. Compared to natural wood, RGI-wood shows much higher flexural strength and modulus in both directions, owing to the large surface area of the cellulose nanofibers and long-range hydrogen-bonding interactions between SNWP. RGI-wood also shows fracture toughness, ultimate compressive strength, hardness, impact resistance, dimensional stability, and fire retardancy superior to those of natural wood (Figure 2). As an all-green biopolymer material, RGI-wood outperforms petroleum-based plastics in mechanical properties, which makes it a strong competitor to them in many application fields.
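As a point of reference for what a flexural strength of ~170 MPa implies at the specimen level, the sketch below applies the standard three-point bending formula sigma = 3FL / (2bd^2); the load and specimen dimensions are hypothetical, not values from the paper.

```python
# Standard three-point bending formula for the peak flexural stress of a rectangular beam.
def flexural_strength_mpa(force_n, span_mm, width_mm, thickness_mm):
    """Peak flexural stress (MPa); N and mm give N/mm^2 = MPa directly."""
    return 3.0 * force_n * span_mm / (2.0 * width_mm * thickness_mm**2)

# Hypothetical specimen: a 10 mm x 4 mm bar over a 64 mm span failing at ~283 N.
sigma = flexural_strength_mpa(force_n=283, span_mm=64, width_mm=10, thickness_mm=4)
print(f"Flexural strength: {sigma:.0f} MPa")  # ~170 MPa
```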

Moreover, because SNWP, with their three-dimensional nano-network, act as an excellent structural binder, this versatile bottom-up strategy holds promise for constructing a series of bulk functional composites. For example, by mixing SNWP with carbon nanotubes (CNTs) before pressing, a conductive smart RGI-wood can be prepared. The conductive smart RGI-wood has a lower percolation threshold and a lower exponent than polymer/CNT composites, indicating that the CNTs form a better conductive network. Owing to its high conductivity, this conductive smart RGI-wood shows excellent electromagnetic shielding performance (exceeding 90 dB in the X-band), which meets the shielding standards required for precision electronic instruments. Its excellent electrical conductivity also allows it to self-heat through Joule heating at low voltages; a low heating voltage helps ensure the safety of self-heating devices while reducing energy consumption. Thus, conductive smart RGI-wood can be used as an electromagnetic shielding material and as self-heating wallboard for smart buildings.
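To put the shielding figure in perspective, the short calculation below uses the standard definition of shielding effectiveness in decibels, SE = 10*log10(P_incident / P_transmitted); it is illustrative arithmetic rather than data from the study.

```python
# Convert shielding effectiveness in dB to the fraction of incident power transmitted.
def transmitted_fraction(se_db):
    """Fraction of incident power passing through a shield of effectiveness se_db (dB)."""
    return 10 ** (-se_db / 10.0)

for se in (30, 60, 90):
    print(f"SE = {se} dB -> {transmitted_fraction(se):.1e} of incident power transmitted")
# 90 dB corresponds to only about one part in a billion of the incident power getting through.
```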

In short, this bottom-up strategy with micro/nanoscale structure design overcomes the limitations imposed by the anisotropic, inconsistent mechanical properties and limited sizes of natural wood by introducing surface nanocrystallization of biomass particles. The micro/nanoscale structure design strategy can also be extended to other biomass (e.g., leaves, rape straw, and grass), from which a series of all-green sustainable structural materials can be made. With better mechanical properties than plastics, RGI-wood can become a strong competitor to petroleum-based plastics. Moreover, RGI-wood and its nanocomposites show outstanding performance across various areas, including mechanical properties, flame retardancy, smart materials, and building materials.

Credit: 
Science China Press