Tech

Complement genes add to sex-based vulnerability in lupus and schizophrenia

image: Robert Kimberly, MD, professor of medicine at the University of Alabama at Birmingham and director of the UAB Center for Clinical and Translational Science, is a co-author of the research.

Image: 
UAB

Variants in a gene of the human immune system cause men and women to have different vulnerabilities to the autoimmune diseases lupus and Sjögren's syndrome, according to findings published in the journal Nature. This extends recent work that showed the gene variants could increase risk for schizophrenia.

The gene belongs to the complement system, a cascade of proteins that helps antibodies and phagocytic cells remove damaged cells from a person's own body and that also defends against infection by promoting inflammation and attacking pathogens. Normally the complement system keeps a person healthy in the face of pathogens; it also helps cart away the debris of damaged human cells before the body can mount an autoimmune attack. Variants of the complement genes now appear to play a contributing role in systemic lupus erythematosus, Sjögren's syndrome and schizophrenia.

It had been known that all three illnesses had common genetic associations with a section of the human chromosome called the major histocompatibility complex, or MHC. This region on chromosome 6 includes many genes that regulate the immune system. However, making an association with a specific gene -- or with the mutational variants of a specific gene that are called alleles -- has been difficult, partly because the MHC on human chromosome 6 spans three million base-pairs of DNA.

The Nature paper is a collaboration of 22 authors at 10 institutions in the United States and one in England, along with many members of a schizophrenia working group. Robert Kimberly, M.D., professor of medicine at the University of Alabama at Birmingham and director of the UAB Center for Clinical and Translational Science, is a co-author of the research, which was led by corresponding author Steven McCarroll, Ph.D., assistant professor of genetics at Harvard Medical School.

The identified alleles are complement component 4A and 4B, known as C4A and C4B.

The research showed that different combinations of C4A and C4B copy numbers generate a sevenfold variation in risk for lupus and 16-fold variation in risk for Sjögren's syndrome among people with common C4 genotypes. Paradoxically, the same C4 alleles that previously were shown to increase risk for schizophrenia had a different impact for lupus and Sjögren's syndrome -- they greatly reduced risk in those diseases. In all three illnesses, the C4 alleles acted more strongly in men than in women.

The researchers also measured the proteins encoded by C4 and by complement component 3, or C3. Both C4 protein and its downstream effector, C3 protein, were present at greater levels in men than in women in cerebrospinal fluid and blood plasma among adults ages 20-50. Intriguingly, that is the age range in which the three diseases differentially affect men and women for unknown reasons. Lupus and Sjögren's syndrome affect women of childbearing age nine times more often than they do men of similar age. In contrast, in schizophrenia, women exhibit less severe symptoms, more frequent remission of symptoms, lower relapse rates and lower overall incidence than men, who are affected more frequently and more severely.

Both men and women have an age-dependent elevation of C4 and C3 protein levels in blood plasma. In men, this occurs early in adulthood, ages 20-30. In women, the elevation is closer to menopause, ages 40-50. Thus, differences in complement protein levels in men and women occur mostly during the reproductive years, ages 20-50.

The researchers say sex differences in complement protein levels may help explain the larger effects of C4 alleles in men, the greater risk of women for lupus and Sjögren's, and the greater vulnerability of men for schizophrenia.

The ages of pronounced sex differences in complement levels correspond to the ages when men and women differ in disease incidence. In schizophrenia cases, men outnumber women in early adulthood; but that disparity of onset lessens after age 40. In lupus, female cases greatly outnumber male cases during childbearing years; but that difference is much less for disease onset after age 50 or during childhood. In Sjögren's syndrome, women are more vulnerable than are men before age 50.

The researchers say the differing effect of C4 alleles in schizophrenia versus lupus and Sjögren's syndrome will be important to consider in any therapeutic effort to engage the complement system. They also said, "Why and how biology has come to create this sexual dimorphism in the complement system in humans presents interesting questions for immune and evolutionary biology."

Credit: 
University of Alabama at Birmingham

When a spinning toy meets hydrodynamics: Point-of-care technology is set in motion

image: (A) Photographs of a fidget spinner toy (left) and a diagnostic-fidget spinner (right). (B) Bacterial load detection by color changes. Orange color shows high bacterial cell load. (C) Fluid Assisted Separation Technology (FAST) enables rapid and robust enrichment of bacteria. (C-a) Cross-sectional view of the device depicting sample bacterial enrichment through the filter membrane in the conventional (upper) and FAST (lower) setups. (C-b) Images showing effective filtration during rotation and fluorescence images of bacterial cells enriched on the membrane for (1 and 3) conventional and (2 and 4) FAST filtration. Scale bar: 2 mm / 0.1 mm. (C-c) Flow volume of Dx-FS estimated at ω_max. (C-d) Number of spins by 10 independent subjects who spun Dx-FS. The angular velocities were measured, and the number of spins to elute 1 mL of liquid was estimated. (C-e) Measurements from (d) plotted for each subject. Red lines, dashed red lines, and gray and red boxes over the measurements (gray dots) denote the mean, median, standard error of mean, and standard deviation. The gray areas indicate the number of spins in (d).

Image: 
IBS

About 60% of women will experience a urinary tract infection (UTI) at least once in their lifetime. As antibiotic-resistant organisms become more common, UTIs are likely to impose an even greater health and economic burden. Point-of-care testing (POCT) has been identified as a way to turn things around in diagnosing suspected UTI patients: it lets staff provide real-time, lab-quality care when and where it is needed. Yet despite recent advances in POCT, millions of people in developing parts of the world still die every year of treatable illnesses such as UTI for lack of a diagnosis. Technologies that bridge this gap are a pressing need.

Researchers at the Center for Soft and Living Matter, within the Institute for Basic Science (IBS, South Korea), have reported a diagnostic fidget spinner (Dx-FS) that allows highly sensitive and rapid diagnosis and prescription using hand power alone. Fidget spinners are boomerang-shaped toys whose ball bearings reduce friction and let them rotate freely for a long time (Figure 1A); one flick of a finger sets the gadget in motion. By exploiting the centrifugal force inherent in the fidget-spinner design together with their novel mechanism, called fluid-assisted separation technology (FAST), the research team optimized the fluid dynamics in the Dx-FS. As a result, the Dx-FS works with just one or two spins by hand and concentrates pathogens about 100-fold, enough for them to be seen with the naked eye without culturing bacteria (Figure 1B).

Conventional approaches to diagnosing infectious diseases require time-consuming cell culture as well as modern laboratory facilities. Worse yet, typical bacterial cell enrichment requires a large force and is prone to membrane fouling or clogging because of the pressure imbalance in the filtration chamber. "Though the centrifugal force serves as the 'engine' of the device, the force is felt more strongly along the outer path, as it acts outward, away from the center of rotation. This imbalance leaves some of the sample stuck on the membrane. We utilized hydrodynamic forces that act perpendicular to the centrifugal force by filling the filter membrane with liquid before spinning. This minimized the pressure drop and produced a uniform pressure balance across the entire membrane, maximizing bacterial enrichment efficiency while minimizing the force needed for filtration. As a result, one or two spins were enough to filter 1 mL of sample, despite large variations in spin speed among operators with different hand power (Figure 1C)," explains Professor CHO Yoon-Kyoung, the corresponding author of the study.

In FAST-based particle separation, the fluid flow caused by centrifugal force is in a direction perpendicular to the filtration flow through the membrane. In addition, the drainage chamber underneath the membrane remains fully filled with the liquid during the entire filtration process. This is achieved by placing a buffer solution in the bottom chamber of the membrane prior to the spinning process, which ensures uniform filtration across the entire area of the membrane and significantly reduces the hydrodynamic resistance.
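
To make the role of rotation concrete, here is a back-of-the-envelope sketch (not taken from the paper) of how spin rate translates into a driving pressure on the liquid in a rotating device, using the standard expression for the pressure built up in a spinning liquid column. The density, radii and spin rates are illustrative assumptions, not Dx-FS specifications.

```python
# Rough estimate of the centrifugal driving pressure in a spinning fluidic
# device. All geometry and spin rates below are assumed for illustration only.
import math

rho   = 1000.0    # density of the sample (~water), kg/m^3
r_in  = 0.005     # inner radius of the liquid column, m (assumed)
r_out = 0.025     # outer radius at the membrane, m (assumed)

def centrifugal_pressure(rpm):
    """Pressure (Pa) generated at r_out by spinning the liquid column at `rpm`."""
    omega = 2 * math.pi * rpm / 60                      # angular velocity, rad/s
    return rho * omega**2 * (r_out**2 - r_in**2) / 2

for rpm in (500, 1000, 2000):                           # illustrative spin rates
    print(f"{rpm:5d} rpm -> {centrifugal_pressure(rpm) / 1000:5.1f} kPa at the membrane")
```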

The research team verified that the Dx-FS can perform "sample-in-answer-out" analyses by testing urine samples from 39 suspected UTI patients in Tiruchirappalli, India. Compared with the gold-standard culture method, which has a relatively long turnaround time, the Dx-FS gave a comparable answer on site within 50 minutes. The experiment showed that 59% of the suspected UTI patients would have been over- or under-treated with antibiotics, outcomes that could be avoided by using the Dx-FS (Figure 2). The team also performed a rapid antimicrobial susceptibility test (AST) for two antimicrobial drugs on 30 UTI patients using the Dx-FS; the test produced 100% accurate results within 120 minutes (Figure 3).

Overall, this simple, hand-powered, portable device allows rapid enrichment of pathogens from human urine samples, showing high potential for future low-cost POCT diagnostic applications. A simple tool like the Dx-FS could support UTI management and help prevent antimicrobial resistance in low-resource settings.

Credit: 
Institute for Basic Science

Not all multiple sclerosis-like diseases are alike

image: The white knob on the left shows demyelination of a nerve fiber end. The right image shows phagocytic macrophages infiltrating the periphery of a demyelinating lesion in a multiple sclerosis patient, suggesting a chronic solitary expanding lesion.

Image: 
Tatsuro Misu

An antibody appears to make a big difference between multiple sclerosis and other disorders affecting the protective myelin sheath around nerve fibres, report Tohoku University scientists and colleagues in the journal Brain. The finding suggests that some of these 'inflammatory demyelinating diseases' belong to a different category than multiple sclerosis, and should be treated according to their disease mechanism.

Multiple sclerosis is a well-known demyelinating disease of the central nervous system, but it is far from the only one. In inflammatory demyelinating diseases, the targeted myelin sheaths - the protective layers surrounding nerve fibres in the central nervous system - become damaged, slowing or even stopping the transmission of nerve impulses. This leads to various neurological problems.

Scientists have found that some, but not all, patients with inflammatory demyelinating diseases have auto-immune antibodies against myelin oligodendrocyte glycoprotein (MOG), which is thought to be important in maintaining the myelin sheath's structural integrity. This antibody is rarely detected in patients with typical multiple sclerosis, but is found in patients diagnosed with optic neuritis, myelitis, and acute disseminated encephalomyelitis (ADEM), for example. Scientists had not yet been able to show that high levels of this antibody mean it is specifically targeting and damaging MOG.

Tohoku University neurologist Tatsuro Misu and colleagues in Japan analysed the brain lesions of inflammatory demyelinating disease patients with and without detectable MOG antibodies, and found the two groups were quite different.

Autopsy samples were taken from brain lesions of people diagnosed with multiple sclerosis and with neuromyelitis optica spectrum disorder (NMOSD), which predominantly targets the optic nerve and spinal cord. These patients did not have detectable MOG antibodies. Typical multiple sclerosis lesions showed solitary, slowly expanding demyelination with a profound loss of myelin sheath proteins and the presence of activated debris-clearing macrophages at their periphery. NMOSD lesions showed reductions in star-shaped support cells called astrocytes and in myelin-producing cells called oligodendrocytes, as well as loss of the innermost layers of myelin sheath proteins.

Biopsies taken from patients with other inflammatory demyelinating diseases with detectable MOG antibodies told a different story. Demyelination in these lesions was rapid, disseminated, and characteristically occurred around small veins. Importantly, the MOG protein was missing from the myelin sheaths from the earliest stage, indicating it was attacked and damaged by the MOG antibodies. In contrast to multiple sclerosis and NMOSD lesions, oligodendrocytes were relatively preserved. A specific sub-group of T cells also infiltrated the lesions, and macrophages carrying MOG were found around blood vessels during the acute phase of diseases such as ADEM.

"Our findings suggest that MOG antibody-associated diseases belong to a different autoimmune demyelinating disease entity than multiple sclerosis and NMOSD," says Misu. The findings also suggest that therapeutic strategies need to be individualized for patients with inflammatory demyelinating diseases, depending on the demyelinating mechanism involved, he explains.

Credit: 
Tohoku University

A theoretical boost to nano-scale devices

image: The newly developed formalism and QFL splitting analysis led to new ways of characterizing extremely scaled-down semiconductor devices and the technology computer-aided design (TCAD) of next- generation nano-electronic/energy/bio devices.

Image: 
Yong-Hoon Kim, KAIST

Semiconductor companies are struggling to develop devices that are mere nanometers in size, and much of the challenge lies in being able to more accurately describe the underlying physics at that nano-scale. But a new computational approach that has been in the works for a decade could break down these barriers.

Devices using semiconductors, from computers to solar cells, have enjoyed tremendous efficiency improvements in the last few decades. Famously, one of the co-founders of Intel, Gordon Moore, observed that the number of transistors in an integrated circuit doubles about every two years--and this 'Moore's law' held true for some time.

In recent years, however, such gains have slowed as firms that attempt to engineer nano-scale transistors hit the limits of miniaturization at the atomic level.

Researchers with the School of Electrical Engineering at KAIST have developed a new approach to the underlying physics of semiconductors.

"With open quantum systems as the main research target of our lab, we were revisiting concepts that had been taken for granted and even appear in standard semiconductor physics textbooks such as the voltage drop in operating semiconductor devices," said the lead researcher Professor Yong-Hoon Kim. "Questioning how all these concepts could be understood and possibly revised at the nano-scale, it was clear that there was something incomplete about our current understanding."

"And as the semiconductor chips are being scaled down to the atomic level, coming up with a better theory to describe semiconductor devices has become an urgent task."

The current understanding states that semiconductors are materials that act like half-way houses between conductors, like copper or steel, and insulators, like rubber or Styrofoam. They sometimes conduct electricity, but not always. This makes them a great material for intentionally controlling the flow of current, which in turn is useful for constructing the simple on/off switches--transistors--that are the foundation of memory and logic devices in computers.

In order to 'switch on' a semiconductor, a current or light source is applied, exciting an electron in an atom to jump from what is called a 'valence band,' which is filled with electrons, up to the 'conduction band,' which is originally unfilled or only partially filled with electrons. Electrons that have jumped up to the conduction band thanks to external stimuli and the remaining 'holes' are now able to move about and act as charge carriers to flow electric current.

The physical concept that describes the populations of the electrons in the conduction band and the holes in the valence band and the energy required to make this jump is formulated in terms of the so-called 'Fermi level.' For example, you need to know the Fermi levels of the electrons and holes in order to know what amount of energy you are going to get out of a solar cell, including losses.

But the Fermi level concept is only straightforwardly defined so long as a semiconductor device is at equilibrium--sitting on a shelf doing nothing--and the whole point of semiconductor devices is not to leave them on the shelf.

Some 70 years ago, William Shockley, the Nobel Prize-winning co-inventor of the transistor at Bell Labs, came up with a bit of a theoretical fudge, the 'quasi-Fermi level,' or QFL, enabling rough prediction and measurement of the interaction between valence band holes and conduction band electrons, and this has worked pretty well until now.
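
As a rough illustration of that textbook idea (and not of the KAIST group's new formalism), the sketch below evaluates the standard non-degenerate relation between the electron and hole densities and the quasi-Fermi level splitting. The silicon parameters and carrier densities are assumptions chosen only for the example.

```python
# Minimal sketch of the textbook quasi-Fermi level (QFL) picture: out of
# equilibrium, electrons and holes each get their own Fermi level, and the
# splitting measures how strongly the device is driven. Numbers are illustrative.
import math

k_B_T = 0.02585     # thermal energy at 300 K, in eV
n_i   = 1.0e10      # intrinsic carrier density of silicon, cm^-3 (approximate)

def qfl_splitting(n, p):
    """QFL splitting E_Fn - E_Fp (eV) for electron/hole densities n, p (cm^-3),
    using the non-degenerate relation n*p = n_i**2 * exp(dE_F / kT)."""
    return k_B_T * math.log(n * p / n_i**2)

# Example: a slab with 1e16 cm^-3 of both excess electrons and holes.
print(f"QFL splitting: {qfl_splitting(1e16, 1e16):.2f} eV")   # about 0.71 eV
```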

"But when you are working at the scale of just a few nanometers, the methods to theoretically calculate or experimentally measure the splitting of QFLs were just not available," said Professor Kim.

This means that at this scale, issues such as errors relating to voltage drop take on much greater significance.

Kim's team worked for nearly ten years on developing a novel theoretical description of nano-scale quantum electron transport that can replace the standard method--and the software that allows them to put it to use. This involved the further development of a bit of math known as density functional theory, which simplifies the equations describing the interactions of electrons and which has been very useful in other fields such as high-throughput computational materials discovery.

For the first time, they were able to calculate the QFL splitting, offering a new understanding of the relationship between voltage drop and quantum electron transport in atomic scale devices.

In addition to looking into various interesting non-equilibrium quantum phenomena with their novel methodology, the team is now further developing their software into a computer-aided design tool to be used by semiconductor companies for developing and fabricating advanced semiconductor devices.

Credit: 
The Korea Advanced Institute of Science and Technology (KAIST)

Eavesdropping crickets drop from the sky to evade capture by bats

video: A short video showing a small rainforest cricket performing the same flight stop in response to a bat call and two katydid calls.

Image: 
University of Graz, Austria

Researchers have uncovered the highly efficient strategy used by a group of crickets to distinguish the calls of predatory bats from the incessant noises of the nocturnal jungle. The findings, led by scientists at the Universities of Bristol and Graz in Austria and published in Philosophical Transactions of the Royal Society B, reveal the crickets eavesdrop on the vocalisations of bats to help them escape their grasp when hunted.

Sword-tailed crickets of Barro Colorado Island, Panama, are quite unlike many of their nocturnal, flying-insect neighbours. Instead of employing a variety of responses to bat calls of varying amplitudes, these crickets simply stop in mid-air, effectively dive-bombing out of harm's way. The higher the bat call amplitude, the longer they cease flight and the further they fall. Biologists from Bristol's School of Biological Sciences and Graz's Institute of Zoology discovered why these crickets evolved significantly higher response thresholds than other eared insects.

Within the plethora of jungle sounds, it is important to distinguish possible threats. This is complicated by the cacophony of katydid (bush-cricket) calls, which are acoustically similar to bat calls and form 98 per cent of high-frequency background noise in a nocturnal rainforest. Consequently, sword-tailed crickets need to employ a reliable method to distinguish between calls of predatory bats and harmless katydids.

Responding only to ultrasonic calls above a high-amplitude threshold is their solution to this evolutionary challenge. Firstly, it allows the crickets to avoid accidentally responding to katydids altogether. Secondly, they do not respond to all bat calls but only to sufficiently loud ones, which indicate the bat is within seven metres of the insect. That is the exact distance at which a bat can detect the echo of the crickets, ensuring that, when trying to evade capture, the crickets respond only to bats that have already detected them.

This type of approach is rare in nature; most other eavesdropping insects live in less noisy environments and can rely on differences in call patterns to distinguish bat predators.

Dr Marc Holderied, senior author on the study from Bristol's School of Biological Sciences, explained: "The beauty of this simple avoidance rule is how the crickets respond at call amplitudes that exactly match the distance over which bats would detect them anyway -- in their noisy world it pays to only respond when it really counts."

Credit: 
University of Bristol

New ECU research finds 'Dr. Google' is almost always wrong

image: New Edith Cowan University research has found online symptom checkers are almost always wrong.

Image: 
Unsplash

Many people turn to 'Dr Google' to self-diagnose their health symptoms and seek medical advice, but online symptom checkers are only accurate about a third of the time, according to new Edith Cowan University (ECU) research published in the Medical Journal of Australia today.

The study analysed 36 international mobile and web-based symptom checkers and found they produced the correct diagnosis as the first result just 36 per cent of the time, and within the top three results 52 per cent of the time.

The research also found that the advice provided on when and where to seek health care was accurate 49 per cent of the time.

It has been estimated that health-related Google searches amount to approximately 70,000 every minute. Close to 40 per cent of Australians look for online health information to self-treat.

Lead author and ECU Masters student Michella Hill said the findings should give people pause for thought.

"While it may be tempting to use these tools to find out what may be causing your symptoms, most of the time they are unreliable at best and can be dangerous at worst," she said.

Online symptom checkers ask users to list their symptoms before presenting possible diagnoses. Triage advice is about whether - or how quickly - the user should see a doctor or go to hospital.

The 'cyberchondria' effect

According to Ms Hill, online symptom checkers may be providing a false sense of security.

"We've all been guilty of being 'cyberchondriacs' and googling at the first sign of a niggle or headache," she said.

"But the reality is these websites and apps should be viewed very cautiously as they do not look at the whole picture - they don't know your medical history or other symptoms.

"For people who lack health knowledge, they may think the advice they're given is accurate or that their condition is not serious when it may be."

When to see a doctor

The research found that triage advice - that is, when and where to seek healthcare - was more accurate than the diagnoses.

"We found the advice for seeking medical attention for emergency and urgent care cases was appropriate around 60 per cent of the time, but for non-emergencies that dropped to 30 to 40 per cent," Ms Hill said.

"Generally the triage advice erred on the side of caution, which in some ways is good but can lead to people going to an emergency department when they really don't need to."

A balance

According to Ms Hill, online symptom checkers can have a place in the modern health system.

"These sites are not a replacement for going to the doctor, but they can be useful in providing more information once you do have an official diagnosis," she said.

"We're also seeing symptom checkers being used to good effect with the current COVID-19 pandemic. For example, the UK's National Health Service is using these tools to monitor symptoms and potential 'hot spot' locations for this disease on a national basis."

Lack of quality control

Ms Hill points to the lack of government regulation and data assurance as being major issues behind the quality of online symptom checkers.

"There is no real transparency or validation around how these sites are acquiring their data," she said.

"We also found many of the international sites didn't include some illnesses that exist in Australia, such as Ross River fever and Hendra virus, and they don't list services relevant to Australia."

'The quality of diagnosis and triage advice provided by free online symptom checkers and apps in Australia' was published in the Medical Journal of Australia.

Credit: 
Edith Cowan University

Commercial airliners monitoring CO2 emissions from cities worldwide

image: (left) Aircraft of Japan Airlines with a special project logo (CONTRAIL) taking off at Tokyo International Airport and (right) Continuous CO2 Measuring Equipment onboard the aircraft

Image: 
CONTRAIL Team (Photo: Japan Airlines)

Cities are responsible for more than 70% of global greenhouse gas (GHG) emissions. The ability to monitor GHG emissions from cities is therefore an important capability to develop in order to support climate mitigation activities in response to the Paris Agreement. The science community has examined data collected from different platforms - ground-based stations, aircraft and satellites - to establish a science-based monitoring capability. A study by an international team, published in Scientific Reports, examined data collected by commercial airliners and showed the potential of such aircraft data to contribute to global GHG emission monitoring.

The CONTRAIL (Comprehensive Observation Network for TRace gases by AIrLiners) program is Japan's unique aircraft observation project. Since 2005, the CONTRAIL team has achieved high-precision atmospheric CO2 measurements using instruments onboard Japan Airlines' (JAL) commercial airliners. "Following the aircraft measurements conducted between Tokyo and Australia that I initiated in 1993, and had maintained during my entire career, the CONTRAIL program continuously expanded its global network and has provided numerous data to understand the carbon budget of this planet," stated Hidekazu Matsueda, co-author of the study and researcher at the Meteorological Research Institute, Japan.

Recently, the team analyzed thousands of vertical ascent and descent measurements over airports and characterized CO2 variations over 34 major cities worldwide for the first time. Airports are often located close to large cities to ensure convenient access. The CONTRAIL aircraft fly up and down over Narita International Airport nearly every day (7,692 times in total during 2005-2016) and can capture the atmospheric chemical signature of the Greater Tokyo Area (several tens of km away). Because major airports sit in similar positions relative to large urban centers, the research team examined the data collected around airports worldwide in order to retrieve urban CO2 emission signatures. "We analyzed millions of observational data points collected at and around Tokyo Narita Airport and found clear CO2 enhancements when the wind comes from the Greater Tokyo Area," said Taku Umezawa, lead author of the study and researcher at the National Institute for Environmental Studies, Japan. "That was also the case globally for other airports, such as Moscow, Paris, Beijing, Osaka, Shanghai, Mexico City, Sydney, and others."

The team also examined the magnitude of CO2 variability in the lowermost atmosphere over these airports. "Short-term changes in the CO2 concentration in the lower atmosphere are associated with various factors such as the upwind pattern of CO2 emissions and uptakes, flight path and its geographical position relative to the locations of emissions and uptakes, and meteorological conditions during each landing and takeoff," said Kaz Higuchi, co-author of the study and adjunct professor of Environmental Studies, York University, Canada. "Despite these complex conditions under which the measurements are made, it was very interesting that we found a relationship between the magnitude of CO2 variability and CO2 emissions from a nearby city." The results show that the commercial airliner-based CO2 dataset can consistently provide urban emission estimates when combined with atmospheric modeling framework.

"But still there are missing pieces to examine the physical link to city emissions to establish urban monitoring," said Tomohiro Oda, scientist of the Universities Space Research Association, Maryland, USA, who collaborated with the team as a PI of a NASA-funded emission modeling project. "Cities are considered to be responsible for more than 70% of the global manmade greenhouse gas emissions. Accurate estimation of CO2 emissions from urban areas is thus important for effective emission reduction strategies." This study suggests that commercial airliner measurements can collect useful urban CO2 data that are complementary to the data collected from other observational platforms, such as ground stations and satellites, in order to monitor CO2 emissions from cities. The advantage of commercial airliners is the great global spatial coverage of the measurements even in regions where we only have sparse greenhouse gas measurement networks, especially places where securing ground-site measurements is challenging, such as in developing countries. "A further implementation of similar CO2 instruments into other domestic and international flights will significantly extend our global monitoring capability of cities," said Toshinobu Machida, project leader of the CONTRAIL program and head of the Office for Atmospheric and Oceanic Monitoring at the National Institute for Environmental Studies.

Credit: 
National Institute for Environmental Studies

Quantifying the impact of interventions

image: Due to the relaxation of restrictions as of May 11, a further change in the infection rate is expected. Three possible scenarios for the development of new infections are illustrated in the figure.

Optimistic scenario
In the optimistic scenario (green), it is assumed that no increase in the infection rate takes place despite loosened restrictions. This scenario is based on the consideration that contact tracing and the detection of new outbreaks of infection could be so successful that they would reduce the spread of the infection, even though the measures have been relaxed.

Neutral scenario

In the neutral scenario (orange), it is assumed that the reproduction number is about R=1. This scenario could illustrate that although contacts are increased, hygiene and precautionary measures, as well as contact tracing, ensure that there are not too many transmissions. The number of new infections could then remain approximately constant. However, with every change in contact behavior, there is a risk of a new wave.

Pessimistic scenario

In the pessimistic scenario (red), it is assumed that the infection rate roughly doubles. This could happen if contacts at work, in public places, and among friends are doubled; less caution in individual contacts could also contribute. If the infection rate doubles, there will be another exponential increase, and by July there would again be around 6,000 new infections per day.

Image: 
(c) MPIDS

Researchers from the Max Planck Institute for Dynamics and Self-Organization (MPIDS) and the University of Göttingen have now succeeded in analyzing the German COVID-19 case numbers with respect to past containment measures and deriving scenarios for the coming weeks. Their computer models could also provide insights into the effectiveness of interventions in other countries. Their results have been published today online in the journal Science.

Simulations since mid-March

Many people are currently concerned about how well the measures to contain the pandemic have worked in recent weeks and how things will continue in the coming weeks. Scientists at the MPIDS have been investigating these questions. The team has been simulating the course of the corona epidemic in Germany together with scientists from the Göttingen Campus since mid-March. In their model calculations, the researchers relate the gradually increasing restrictions of public life in March to the development of COVID-19 case numbers. In particular, they examined the effect of the three packages of interventions in March: the cancellation of major public events around March 8, the closure of educational institutions and many shops on March 16, and the extensive contact ban on March 22. To this end, the researchers combined data on the temporal course of the COVID-19 new infections with an epidemiological dynamics model that allows the analysis of the course of the pandemic to date and the investigation of scenarios for the future. According to the computer models, the packages of measures initially slowed down the spread of COVID-19 and finally broke the dreaded exponential growth. "Our analysis clearly shows the effect of the various interventions, which together ultimately brought about a strong trend reversal," says Viola Priesemann, research group leader at the Max Planck Institute. Michael Wilczek, research group leader and co-author of the study, adds: "Our model calculations thus show us the overall effect of the change in people's behavior that goes hand in hand with the interventions".

A computer model also for other countries and regions

In their work, the Göttingen researchers did not only have Germany in mind. "From the very beginning, we designed our computer model so that it could be transferred to other countries and regions. Our analysis tools are freely available on GitHub and are already being used and developed further by researchers around the world," says Jonas Dehning, lead author of the study. The Göttingen research team is currently working on applying the model to European countries. It is particularly important to work out the different points in time at which measures were taken in different countries, which could make it possible to draw conclusions about the effectiveness of the individual measures.

Concerns about the second wave

The Göttingen researchers' analysis of Germany on the basis of case numbers up to April 21 indicated an overall positive development of case numbers for the coming weeks. However, their analysis also reveals a central challenge in assessing the epidemic dynamics: changes in the spread of the coronavirus are only reflected in the COVID-19 case numbers with considerable delays. "We have only recently seen the first effects of the relaxation of restrictions of April 20 in the case numbers. And until we can evaluate the relaxations of May 11, we also have to wait two to three weeks," says Michael Wilczek. The researchers are therefore continuing to monitor the situation very closely. Every day they evaluate the new case numbers to assess whether a second wave is to be expected.

Using three different model scenarios (see figure and explanation below), the Göttingen team also shows how the number of new cases might develop further. If the relaxations of May 11 double the infection rate, a second wave can be expected. If instead the infection rate merely balances the recovery rate, new infections stay approximately constant. However, it is also possible that the number of new infections will continue to decrease, says Viola Priesemann: "If everyone continues to be very careful and contact tracing by the health authorities is effective, and at the same time all new outbreaks of infection are detected and contained early, then the number of cases can continue to decrease. How exactly the numbers will develop in the future therefore depends decisively on our behavior, the observance of distance recommendations and hygiene measures," says the Göttingen physicist.
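
For a feel for how such scenarios are generated, here is a minimal sketch of an SIR-type simulation in which the spreading rate changes after the relaxation date. It is far simpler than the team's calibrated epidemiological model, and the population size, rates, initial case numbers and change day are illustrative assumptions only.

```python
# Toy daily-step SIR model: "optimistic" keeps the spreading rate below the
# recovery rate, "neutral" sets them equal (R ~ 1), "pessimistic" doubles the
# spreading rate. All parameters are made up for illustration.
N    = 83_000_000        # population of Germany (approximate)
mu   = 1 / 8             # recovery rate, i.e. ~8-day infectious period (assumed)
days = 60

def simulate(lmbda_after, lmbda_before=0.10, change_day=14, I0=20_000):
    S, I = N - I0, I0
    daily_new = []
    for t in range(days):
        lmbda = lmbda_before if t < change_day else lmbda_after
        new_infections = lmbda * S * I / N       # new cases on day t
        S -= new_infections
        I += new_infections - mu * I
        daily_new.append(new_infections)
    return daily_new

for name, lam in [("optimistic", 0.10), ("neutral", 1 / 8), ("pessimistic", 0.20)]:
    print(f"{name:11s} -> ~{simulate(lam)[-1]:,.0f} new infections on day {days}")
```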

Credit: 
Max Planck Institute for Dynamics and Self-Organization

Chromium speciation in marine carbonates and implications on atmospheric oxygenation

image: The curves show the normalized XANES spectra of the Cr(VI) standard K2CrO4, the Cr(III) standard Cr(OH)3, the Cr(III) standard CuCr2O4, Ediacaran carbonate WJ705.6 (~0.63 Ga) and 12JLW44 (~0.56 Ga), Triassic carbonate XK127 (~0.25 Ga), Carboniferous carbonate BCS-CRM513 (~0.35 Ga), Quaternary carbonate 1460A26F1W110/116 (~0.83 Ma), and Mid-Proterozoic carbonate J1-52.6 (~1.44 Ga). There is no prominent peak at pre-edge region of the spectra of all the carbonate samples, suggesting that Cr(III) dominates in the natural carbonates.

Image: 
©Science China Press

The oxidation of Earth's early atmosphere and ocean played an important role in the evolution of life, and reconstructing paleo-redox conditions is crucial for understanding the coevolution of life and environment. The Cr isotopic composition of sedimentary rocks has been increasingly used as an emerging paleo-redox indicator. Its use is largely based on the assumption that, when the atmospheric oxygen level is relatively high, oxidation of Cr(III) in terrestrial rocks produces soluble Cr(VI). This process can leave the Cr(VI) with a positively fractionated Cr isotopic composition, which is ultimately transported to the ocean and preserved in sedimentary rocks. Carbonates are widely distributed sedimentary rocks and an important geological archive. Previous studies suggested that Cr(VI) in seawater can be directly incorporated into the carbonate crystal lattice, and that marine non-skeletal carbonate could therefore directly record the Cr isotopic composition of the contemporaneous seawater. However, there has hitherto been no direct evidence for the presence of Cr(VI) in sedimentary carbonates.

To address whether Cr(VI) is present in sedimentary carbonates, Ziyao Fang and colleagues applied synchrotron-based X-ray absorption near-edge structure (XANES) spectroscopy to these geochemical questions. They examined the Cr valence states and Cr isotopic compositions of sedimentary carbonates formed in different geological periods. Their results showed that Cr(III) dominates in all of the sedimentary marine carbonates examined, even those with positively fractionated Cr isotopic compositions, in apparent contrast with the previous assumption.

Fang and colleagues proposed two possible mechanisms for the absence of Cr(VI) in carbonates. One is that Cr(VI) in seawater might have been reduced by microbes or porewater during carbonate precipitation or early diagenesis. In this case, the reduction of Cr(VI) is likely not quantitative in the relatively oxic environments where carbonates are deposited, and as a result the isotopic composition of the carbonate would be lighter than that of the contemporaneous seawater. The other possible mechanism is that carbonates preferentially take up Cr(III) directly from seawater, especially those formed in early geological times when the oxygen level is widely regarded as low. Trivalent Cr can come from non-redox Cr cycling, and recent studies suggest that dissolution and adsorption of Cr(III) can also cause positive Cr isotopic fractionation. Therefore, the slightly positively fractionated Cr isotopic values recorded in some sedimentary carbonates do not necessarily correspond to oxidative weathering of Cr(III), as previously suggested.

Credit: 
Science China Press

Applying the analogy method to improve the forecasting of strong convection

image: Diagram of an analogy forecast of strong convection.

Image: 
Na Li

Strong convective weather, including thunderstorms, severe winds, hail, tornados, and short-term heavy rainfall, is a type of weather phenomenon that is extremely difficult to predict owing to its small spatial scales and short-term durations. In recent years, high-resolution numerical models have become the focus for weather forecasters to predict strong convective weather. They output simulated radar reflectivities and many other physical parameters that directly display the locations of strong convective weather for forecasters. However, these models have errors, which greatly limit their accuracy in convective weather forecasting.

In a paper recently published in Atmospheric and Oceanic Science Letters, Dr. Na Li and her coauthors from the Key Laboratory of Cloud-Precipitation Physics and Severe Storms (LACS) at the Institute of Atmospheric Physics, Chinese Academy of Sciences, introduce an analogy-based method for forecasting strong convection using numerical model output.

"The method is very interesting. It assumes that the occurrence of the atmospheric environment for similar weather phenomena is also similar in the model. So, using historical model forecasts, we can estimate the occurrence environment for current convective weather. It calculates many diagnostic parameters as predictors to detect the favorable-occurrence environment of strong convections. Historical forecasts that are most similar to current forecasts are identified by searching a large historical numerical dataset. The observed strong convective situations corresponding to those most similar times are then used to form strong convection forecasts for the current time," explains Dr Li. An application of this method as a postprocess of the NCEP Global Forecast System (GFS) model shows that the method performs well in predicting the potential for strong convection in different regions of China.

"At present, the method only forecasts whether or not strong convective weather will happen. The specific phenomena of the weather, such as thunderstorms, surface strong winds, hail, and short-term heavy rainfall, are not considered. In future work, we hope to provide more specific forecasts regarding the kind of strong convective weather that will happen by improving the analogy metric and the corresponding observations," says Dr Li.

Credit: 
Institute of Atmospheric Physics, Chinese Academy of Sciences

Pine martens like to have neighbors -- but not too near

image: A female pine marten with a radio collar

Image: 
Nick Upton

Pine martens need neighbours but like to keep their distance, according to new research.

Over three years, the cat-like predators were caught in Scotland and moved to mid-Wales by Vincent Wildlife Trust.

By attaching miniaturised radio-transmitter collars to 39 of the released animals, a tracking team followed them for a year as they explored their new home in Welsh forests.

The research, published today in the journal Ecology and Evolution, was carried out by Dr Cat McNicol from the Environment and Sustainability Institute of the University of Exeter with staff from Vincent Wildlife Trust.

Dr McNicol's analyses have shown that the pine martens spent some time exploring their new habitats before settling into solitary territories, but that having pine marten neighbours helped them settle more quickly.

In the first release, when there were no other pine martens nearby, the new arrivals roamed long distances over two weeks before settling into their chosen territory - often close to the point where they were released.

The following year, when more pine martens were released in the same area, the new cohort established territories within a week, but further away from the release point.

Dr McNicol said: "Although they defend solitary territories vigorously, pine martens depend on their neighbours when deciding where to set up home. Releasing martens near to others promoted rapid settlement. Using scent-marking as their main way of communication, newly released martens can figure out which bits of woodland are occupied by other individuals and then set up home nearby. This behaviour results in a patchwork-quilt of new territories spreading across the Welsh countryside."

Although smaller than a domestic cat, pine martens are highly mobile, and the tracked animals had an average range of 9.5 km2 (about 2350 acres).

The Scottish martens and their descendants are now thriving in Wales, where they are living and breeding in woodland habitats, mainly eating voles and mice.

The new arrivals are also tucking into grey squirrels.

In a separate study published in the Journal of Applied Ecology and funded by The Forestry Commission, Dr McNicol attached similar tracking collars to squirrels as the pine martens were introduced.

She found that resident grey squirrels increased their ranging behaviour significantly in the presence of pine martens.

"The martens created a 'landscape of fear' for the grey squirrels, changing their behaviour to avoid predation," Dr McNicol said.

The finding adds to a growing body of research showing that pine martens could have a negative impact on grey squirrels - which is good news for foresters and woodland owners.

Credit: 
University of Exeter

New bone-graft biomaterial gives patients a nicer smile and less pain

image: OCP/collagen sponge.

Image: 
Shinji Kamakura and Hitoshi Inada, Tohoku University

A new recipe for a bone-graft biomaterial that is supercooled before application should make it easier to meet dental patients' expectation of a good-looking smile while eliminating the pain associated with harvesting bone from elsewhere in their body.

The findings were published in the Journal of Biomedical Materials Research Part B: Applied Biomaterials on Apr 2, 2020.

Patients missing teeth don't just want a restoration of function. Above all, they want a tooth replacement that gives them a nice smile.

"The exacting aesthetic demands of the patient make a procedure that is already difficult for clinicians--due to the tricky anatomy of these parts of the mouth--even more challenging," says Shinji Kamakura, a professor from the Bone Regenerative Engineering Laboratory at Tohoku University.

To overcome these challenges, clinicians have tended to employ bone grafts using non-essential bits of bone taken from other parts of the same patient such as the chin or pelvis, a process called autologous grafting. Unfortunately, this requires additional surgical sites, increases pain for the patient at that site, and there is also just a very limited amount of non-essential bone!

Synthetic alternatives, made of minerals that already occur in the body and with mechanical properties similar to bone, are sometimes used for other types of bone grafts. But these biomaterial substitutes have low bone regenerative properties compared to the gold standard of autologous grafts.

In the last fifteen years, a new alternative bone substitute has emerged as a likely candidate: octacalcium phosphate, which forms the basis of the mineral crystals that make up bone, combined with collagen (OCP/Col). It has bone regenerative properties superior to earlier substitutes, and following years of clinical trials, commercialization of OCP/Col for oral surgery was recently approved in Japan.

The conventional recipe for an OCP/Col preparation still runs into problems with 'appositional bone growth,' or increase in width, due to compressional stress on the material from the bone itself, so OCP/Col has been supported by a Teflon ring structure in order to retain its shape while the bone is forming. But Teflon is not a material that can be absorbed by the body and ultimately needs to be removed.

To get around this, Professor Kamakura and his colleagues developed a recipe for OCP/Col that increases its density, and supercooled it with liquid nitrogen down to -196°C before application on their rodent test subjects, producing the bone shape retention dental surgeons are looking for.

The researchers now intend to test the recipe on larger animals and then engage in clinical trials in humans.

Credit: 
Tohoku University

Written WhatsApps work like a spontaneous informal conversation

image: Traditional text classification proposal

Image: 
UPF

The emergence of new means of communication via the Internet has brought about new genres of discourse, understood as socially situated communication practices that did not previously exist, and which require studying from the linguistic standpoint.

Forty years after the description of the 'oral-written' medium proposed by Gregory and Carroll (1978), and in the light of some recent studies, these experts consider the WhatsApp message genre a "fuzzy category": a genre that, though initially written, does not have a clear line separating speaking and writing.

Carme Bach and Joan Costa Carreras, researchers at the UPF Department of Translation and Language Sciences, analysed a corpus of 68 two-way conversations on WhatsApp in Catalan, between young people aged 18 to 22 years, with a total of 500,000 words, in order to describe its proximity to the colloquial conversation genre and define the WhatsApp message genre from the point of view of assignment to the 'oral-written' medium, according to the model by Gregory and Carroll (1978). Their study has just been published in Revista Signos. Estudios de Lingüística.
"Our hypothesis is that there will be many similarities between colloquial conversation and WhatsApp and, therefore, it might be appropriate to reconsider Gregory and Carroll's classification of 1978", the authors suggest.

Thus, in their study Bach and Costa Carreras set themselves a dual goal: "to describe a corpus of written WhatsApps to see the extent to which this genre of discourse can be described in terms of a colloquial conversation, and to study how this genre is defined from the standpoint of assignment to the oral-written medium and how it could be placed in Gregory and Carroll's (1978) proposal so that the framework accommodates the WhatsApp message genre", they point out.

A new linguistic category for the WhatsApp message genre

The authors extracted from the data the traits of the genre that came closest to a colloquial conversation using the Atlas.ti tool, and performed a qualitative analysis. The results of the paper indicate the proximity of WhatsApp messages to the informal oral medium, i.e., close to a colloquial conversation.

"We propose to a new category for addition to the one proposed Gregory and Carroll (1978), of a written genre conceived as if it were a spontaneous conversation: a new configuration of the proposed oral and written genres that accommodates the WhatsApp message genre", the researchers conclude.

Credit: 
Universitat Pompeu Fabra - Barcelona

Physicists offer a new 'spin' on memory

image: Using focused X-rays, researchers can peek inside a sample of magnetic tunnel junctions and resolve the arrangement of atoms in the thin layers.

Image: 
Weigang Wang, University of Arizona

Imagine biting into a peanut butter sandwich and discovering a slice of cheese tucked between the bread and the butter. In a way, this is what happened to a team of physicists at the University of Arizona, except the "cheese" was a layer of iron oxide, less than one atomic layer thick, and the "sandwich" was a magnetic tunnel junction - a tiny, layered structure of exotic materials that someday may replace current silicon-based computer transistors and revolutionize computing. Iron oxide - a material related to what is commonly known as rust - exhibits exotic properties when its thickness approaches that of single atoms.

A team led by Weigang Wang, professor in the UArizona Department of Physics, suggests in a new study that the previously unknown layer is responsible for certain behaviors of magnetic tunnel junctions that had puzzled physicists for many years. The discovery, published in the journal Physical Review Letters, opens up unexpected possibilities for developing the technology further.

Unlike conventional micro-transistors, magnetic tunnel junctions don't use the electrical charge of electrons to store information, but take advantage of a quantum-mechanical property that electrons have, which is referred to as "spin." Known as spintronics, computing technology based on magnetic tunnel junctions is still very much in the experimental phase, and applications are extremely limited. For example, the technology is used in aircraft and slot machines to protect stored data from sudden power outages.

This is possible because magnetic tunnel junctions process and store information by switching the orientation of nano-scale magnets instead of moving electrons around as regular transistors do.

"When you flip the direction of the magnetization, a magnetic tunnel junction behaves like a transistor in that it either is 'on' or 'off'," said Meng Xu, a doctoral student in Wang's lab and first author on the paper. "One of its advantages is that as you keep it in that state, it doesn't consume any energy to maintain the stored information."

Although high-performance magnetic tunnel junctions have been around for about 20 years, scientists have been perplexed by the fact that whenever they measured the difference between the "on" and "off" state, the values were much lower than what the physical properties of these nano-sized switches would predict, limiting the potential of magnetic tunnel junctions as the building blocks of spintronic computing.
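
The "difference between the 'on' and 'off' state" is usually quantified as the tunneling magnetoresistance (TMR) ratio. As a rough illustration (not drawn from this study), the sketch below compares the textbook Julliere estimate with a hypothetical measurement; every number in it is an assumption.

```python
# TMR ratio: (R_antiparallel - R_parallel) / R_parallel. The Julliere model
# predicts it from the spin polarizations P1, P2 of the two magnetic layers.
def julliere_tmr(p1, p2):
    """Predicted TMR ratio from electrode spin polarizations (Julliere model)."""
    return 2 * p1 * p2 / (1 - p1 * p2)

def measured_tmr(r_parallel, r_antiparallel):
    """TMR ratio from measured 'on' (parallel) and 'off' (antiparallel) resistances."""
    return (r_antiparallel - r_parallel) / r_parallel

print(f"idealized prediction: {julliere_tmr(0.7, 0.7):.0%}")          # ~192%
print(f"hypothetical measurement: {measured_tmr(1.0e3, 2.2e3):.0%}")  # 120%, i.e. lower
```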

That mystery may be explained by the thin layer of iron oxide that Wang and his colleagues discovered at the interface between the two magnetic layers in their magnetic tunnel junction samples - the "cheese slice" in the sandwich analogy.

"We think that this layer acts as a contaminant, preventing our sample from achieving the performance we want to see from a magnetic tunnel junction," Wang said.

However, Wang says the findings cut both ways: while the unanticipated layer dims the prospects of magnetic tunnel junctions by lowering the resistance change between their "on" and "off" states, it is good news in that it opens unexpected opportunities in another area of spintronics.

Wang's group discovered that the layer behaves as a so-called antiferromagnet when they tested the tunnel junctions at extremely cold temperatures below negative 400 degrees Fahrenheit, or negative 245 degrees Celsius.

Antiferromagnets are under intensive research because they can be potentially manipulated at Terahertz frequencies, about 1,000 times faster than existing, silicon-based technology, which typically operates in the Gigahertz region. Until now, however, researchers have struggled with finding ways to manipulate the promising devices, a crucial first step in applying the technology to data storage.

"In a few cases, researchers did successfully manage to control antiferromagnetic materials in isolation," Wang said, "but as soon as you try to incorporate an antiferromagnetic layer into a magnetic tunnel junction - and that's what you have to do in order to use them for spintronics - it kills the whole thing."

However, the layer reported in this study doesn't, Wang's team found. For the first time, this may allow researchers to marry the advantages of antiferromagnets - unprecedented read and write speed - with the controllability of magnetic tunnel junctions, Wang said.

"With this study, we demonstrated for the first time that we can change the antiferromagnetic property of a magnetic tunnel junction using an electrical field, which brings us one step closer toward using antiferromagnetic spintronics for memory storage," Wang said.

Here's why: While using the spins in antiferromagnets to process information vastly increases computational speed, eventually that information has to be converted back into an electrical charge, Wang said.

"Any information that we encode in spin, no matter if antiferromagnetic or magnetic, we eventually want to read out as an electrical signal because the electron is really the best thing we have and the most popular medium to process, read and write information," he said. "That conversion is normally done by magnetic tunnel junctions."

Incorporating antiferromagnetic layers into magnetic tunnel junctions may someday allow engineers to design computers in which the processing of information occurs in the same place as storing information, similar to the human brain.

Spintronic devices offer another advantage over conventional transistors, according to Wang: They don't require energy just to maintain the information stored in the memory.

"With spintronics, you need the electrical field only to write the information, but once that's done, you can switch it off to reduce energy consumption," he said.

Silicon-based transistors, on the other hand, suffer from an effect known as electron leakage, Wang said. As manufacturers cram more and more transistors into smaller areas of microprocessors, more and more electrons are lost, requiring the device to perform extra work and consume extra energy just to counteract this process.

Electron leakage is one of the reasons why Moore's Law - which states the number of transistors on a chip doubles every two years - is projected to end soon, Wang said.

With spintronic devices, leakage is not an issue; they can store information virtually indefinitely without consuming power.

"It's the same reason your fridge magnets can stay in place for a really long time," he said. "Once the quantum mechanical exchange interaction has been made, no energy input is needed to maintain the magnetization direction."

Credit: 
University of Arizona

Using big data to design gas separation membranes

image: Schematic representation of the proposed materials design process: start with knowledge of the structure of the polymer building blocks, use that to develop a machine learning algorithm that finds the best material for a given application.

Image: 
Connor Bilchak/Columbia Engineering

New York, NY--May 15, 2020--Researchers at Columbia Engineering and the University of South Carolina have developed a method that combines big data and machine learning to selectively design gas-filtering polymer membranes to reduce greenhouse gas emissions.

Their study, published today in Science Advances, is the first to apply an experimentally validated machine learning method to rapidly design and develop advanced gas separation membranes.

"Our work points to a new way of materials design and we expect it to revolutionize the field," says the study's PI Sanat Kumar, Bykhovsky Professor of Chemical Engineering and a pioneer in developing polymer nanocomposites with improved properties.

Plastic films or membranes are often used to separate mixtures of simple gases, like carbon dioxide (CO2), nitrogen (N2), and methane (CH4). Scientists have proposed using membrane technology to separate CO2 from other gases for natural gas purification and carbon capture, but there are potentially hundreds of thousands of plastics that can be produced with our current synthetic toolbox, all of which vary in their chemical structure. Manufacturing and testing all of these materials is an expensive and time-consuming process, and to date, only about 1,000 have been evaluated as gas separation membranes.

Kumar and his collaborators at Columbia Engineering, the University of South Carolina, and the Max Planck Society in Mainz (Germany) created a machine learning algorithm that correlates the chemical structure of the 1,000 tested polymers with their gas transport properties, to investigate what structure makes the best membrane. They then applied the algorithm to more than 10,000 known polymers to predict which would produce the best material in this context.
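
The sketch below illustrates this screen-then-rank workflow in general terms. It is not the authors' code: the fingerprint featurization, the random-forest model and the stand-in data are assumptions made purely to show the shape of the approach.

```python
# Train on the ~1,000 polymers with measured gas transport, then rank ~10,000
# untested polymers by predicted CO2/CH4 selectivity. Data here are random
# placeholders; a real run would use fingerprints of actual repeat units and
# measured permeabilities.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

X_known = rng.integers(0, 2, size=(1000, 256)).astype(float)        # tested polymers
y_known = rng.normal(size=(1000, 2))   # e.g. log CO2 permeability, log CO2/CH4 selectivity
X_candidates = rng.integers(0, 2, size=(10000, 256)).astype(float)  # untested polymers

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_known, y_known)

pred = model.predict(X_candidates)         # predicted properties, shape (10000, 2)
top = np.argsort(pred[:, 1])[::-1][:100]   # 100 highest predicted selectivities
print("indices of the most promising candidates:", top[:10])
```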

This procedure identified some 100 polymers that had never been tested for gas transport but were predicted to surpass the current membrane performance limits for CO2/CH4 separations.

"Rather than experimentally test all the materials that exist for a particular application, you instead test a smaller subset of materials which have the most promise. You then find the materials that combine the very best ingredients and that gives you a shot at designing a better material, just like Netflix finding you the next movie to watch", says the study's co-author Connor Bilchak, formerly a PhD student with Kumar and currently a post-doctoral fellow at the University of Pennsylvania.

To test the algorithm's accuracy, a group led by Brian Benicewicz, SmartState Professor of Chemistry and Biochemistry at the University of South Carolina, synthesized two of the most promising polymer membranes predicted by this approach and found that the membranes exceeded the upper bound for CO2/CH4 separation performance.

"Their performance was very good ? much better than what had been previously made," says the study's co-author Laura Murdock, a graduate student of Benicewicz's. "And it was pretty easy. This methodology has significant potential for commercial use."

Benicewicz added, "Looking beyond this one context, this method is easily extendable to other membrane materials which could profoundly affect the development of next generation batteries and technologies for water purification."

Credit: 
Columbia University School of Engineering and Applied Science