
The hidden machinery of a photosynthetic giant revealed

image: Marc Nowaczyk and Anna Frank (right) explore photosynthesis.

Image: 
© RUB, Marquard

The collaborative work was published online in the journal Communications Biology on March 8, 2021.

The power of photosynthesis

Photosynthesis is the only biological process that converts the energy of sunlight into chemically stored energy. At the molecular level, the photosynthetic key enzymes, called photosystems, are responsible for this conversion. Photosystem I (PSI), one of the two photosystems, is a large membrane protein complex that can exist in different forms - as monomers, dimers, trimers or even tetramers.
New isolation technique helps reveal the structure of monomeric PSI

Although the structure of trimeric PSI from the thermophilic cyanobacterium Thermosynechococcus elongatus was solved 20 years ago, it had not been possible to obtain the corresponding structure of monomeric PSI. The major bottleneck was the low natural abundance of this specific PSI form. The researchers at RUB therefore developed a new extraction method that enabled the selective isolation of PSI monomers in high yield. The isolated protein complex was characterized in detail at RUB by mass spectrometry, spectroscopy and biochemical methods, while the research team at Osaka University solved its structure by cryo-electron microscopy.
Teamwork between chlorophylls and lipids might enable uphill energy transfer

The atomic structure of monomeric PSI provides novel insights into the energy transfer within the protein complex as well as into the localization of so-called red chlorophylls - specially arranged chlorophylls that interact closely with one another and thus enable the absorption of low-energy far-red light, which normally cannot be used for photosynthesis. Interestingly, the structure revealed that the red chlorophylls appear to interact with lipids of the surrounding membrane. This structural arrangement might indicate that additional thermal energy is used to make far-red light accessible for photosynthesis.
Long-standing cooperation bears further fruit

Credit: 
Ruhr-University Bochum

Image methods tested on a SARS-CoV-2 protein improve the 3D reconstruction of macromolecules

image: 3D Reconstruction of the SARS-CoV-2 spike protein.

Image: 
Javier Vargas

An international study led by the Complutense University of Madrid (UCM) has proposed new computational image processing methods that improve the analysis and three-dimensional reconstruction of biological macromolecules.

Currently, determining the composition (i.e., the sequence of amino acids) of macromolecules such as proteins is relatively simple; determining how they are arranged into a three-dimensional structure is not. The new methodology, published in Nature Communications, improves both the visualization and the quality of the 3D reconstructions obtained through cryogenic electron microscopy.

"This study helps us broaden our understanding of proteins and other macromolecules that support essential life processes, providing new tools for structural biologists to interpret more with greater reliability", explained Javier Vargas, Ramón y Cajal researcher at the Department of Optics of the UCM.

These methods are applied to diverse biological macromolecules with biomedical relevance, including 3D reconstructions of the SARS-CoV-2 spike S protein.

"This protein is essential for the entry of the virus into human cells. The processing of this protein with these new methods helped analyze regions that previously could not be interpreted", says the physicist.

Utility in designing drugs

The study began when Vargas was working as a professor at McGill University (Canada) and was conducted and concluded after he returned to the UCM in mid-2020. In addition to these institutions, the study also involved the National Centre for Biotechnology of the Spanish National Research Council (Spain) and the University of Texas at Austin (US).

The researchers predict that this study will be used to improve the construction of atomic models of macromolecules, without prior structural information, from 3D reconstructions obtained using cryogenic electron microscopy.

"This information is essential for understanding and characterizing macromolecules from the biochemical standpoint and useful for designing new drugs such as those for blocking SARS-CoV-2 from accessing the interior of cells", highlights Vargas.

Credit: 
Universidad Complutense de Madrid

Scientists examine more than 60 teeth of stegosaurs from Yakutia

video: Three-dimensional reconstruction of a stegosaurian tooth found at the excavation site near the Teete stream (the Republic of Sakha)

Image: 
SPbU

Powerful, squat stegosaurs are among the most recognisable dinosaurs: they are easily identified by the spines on the tail and the bony plates - osteoderms - on the back. The representatives of this group lived about 165-125 million years ago, during the Jurassic and early Cretaceous periods. They were five to seven metres long and had a disproportionately small head. Their teeth were accordingly quite small - about a centimetre in height and about the same in width.

Palaeontologists from St Petersburg University worked together with colleagues from the Zoological Institute of the Russian Academy of Sciences; the Borissiak Paleontological Institute of the Russian Academy of Sciences; the University of Bonn; and the Diamond and Precious Metal Geology Institute of the Siberian Branch of the Russian Academy of Sciences. The research materials were collected during a series of expeditions to the Republic of Sakha in 2012 and 2017-2019. On the banks of the Teete stream, not far from the small Yakut town of Suntar, there is a large but not yet fully examined dinosaur locality. In the Cretaceous, these territories lay close to the North Pole, which means they can shed light on the life of polar dinosaurs. Was the local fauna different from that of more southern regions? What was the climate like here? How were the animals affected by the polar day and polar night? The scientists are trying to answer these questions, in part by studying the teeth of the ancient creatures.

'We have found teeth of animals of different ages - both adults and juveniles,' said Pavel Skutschas. 'This suggests that the polar stegosaurs were most likely sedentary: they reproduced and raised offspring in the same territory all year round. Additionally, almost all of the finds are extremely worn down: many of them have two or three facets - surfaces worn from contact with adjacent teeth.'

This feature prompted the researchers to believe that tooth replacement in polar stegosaurs occurred rather quickly. The scientists therefore examined the incremental growth lines of the teeth - the so-called von Ebner lines - which can be used to calculate the number of days required for a tooth to form. It took the Yakut stegosaurs only about 95 days to grow a tooth, whereas in other dinosaur species the process usually lasted 200 days or longer. These Yakut inhabitants are also unlikely to have suffered from caries, since caries takes much longer to develop.

'The fact that the teeth formed quickly, wore down quickly and were replaced quickly very likely indicates that the stegosaurs from Yakutia ate some kind of tough food. We cannot yet say with 100% certainty that we have found a polar adaptation, since there is, in principle, very little information about the teeth of stegosaurs. However, their teeth found in more southern areas usually have only one wear surface. In a word, this is a new question for palaeobotanists - what was the tough plant growing in the polar regions that the Yakut stegosaurs ate?' noted Pavel Skutschas.

Another remarkable detail made it possible to take a different view of the jaw structure of these animals: on the worn surfaces of the teeth, the scientists were able to spot curved micro-gouges. Palaeontologists used to assume that stegosaurs had very simple jaw movements - up and down, like scissors. Now, however, thanks to the patterns of gouges on the facets, it has become clear that the jaw movements were more complex and included a longitudinal phase.

Another conclusion concerns the wavy structure of the enamel. It used to be thought that this was unique to younger, Late Cretaceous dinosaurs with complex dentition, such as the hadrosaurids. However, the palaeontologists saw this feature in the stegosaurs from Yakutia and decided to examine the teeth of another Early Cretaceous dinosaur, Psittacosaurus, a primitive relative of Triceratops. The feature turned out to be widespread among dinosaurs in general.

'Stegosaurs are one of the most recognisable and popular dinosaurs that are often seen on T-shirts and various pictures. However, we still know little about them. This research has raised many new questions that can be solved without setting out on an expedition, but by studying materials that have been stored in museums for hundreds of years. We have managed to show what features the polar stegosaurs had. But what is an "ordinary", "benchmark" stegosaurus? This has yet to be investigated,' stressed Pavel Skutschas.

Credit: 
St. Petersburg State University

Leading the blue energy revolution

video: Multi-mode operation of WT-TENG

Image: 
CUHK

The ocean covers about 70% of the Earth's surface and is the largest reservoir of energy. Researchers have been exploring ways to harness ocean energy to address the world energy crisis and the pollution caused by thermal power generation. Nanogenerators - piezoelectric, triboelectric and pyroelectric - are among the key technologies for converting mechanical energy. The triboelectric nanogenerator (TENG) makes use of the triboelectric effect and electrostatic induction to harvest mechanical energy through contact or sliding electrification.

However, conventional TENG devices are usually based on solid/solid contact, and it is hard to ensure intimate contact between the two tribo-materials. In addition, the material surfaces wear out or become damaged after long-term friction. Solid/solid-based TENGs also need shell structures and/or mechanical components such as springs, holders and rotors to harvest random vibration energy. The complex structure reduces the efficiency of energy harvesting.

The research team led by Prof. Zi Yunlong, Assistant Professor in the Department of Mechanical and Automation Engineering at CUHK, has recently overcome these technical limitations and developed a water-tube-based TENG (WT-TENG) for harvesting irregular, low-frequency environmental energy, such as water waves. The researchers encapsulated water in a finger-sized fluorinated ethylene propylene (FEP) tube. When the water moves in the tube between the regions of the two electrodes, triboelectrification occurs and electric current is generated. Taking advantage of the flexibility of water, the WT-TENG can be operated in various modes - including rotation, swing, seesaw and horizontal linear modes - to harvest energy from diverse mechanical movements in the environment, such as ocean waves, wind, and body and vehicle movements. Owing to the high contact intimacy between the water and the tube surface, the output volumetric charge density of the WT-TENG is significantly enhanced, reaching 9 mC/m3 at a frequency as low as 0.25 Hz, exceeding all previous reports.

Moreover, just like toy building bricks, multiple small WT-TENG units can easily be combined into one larger unit to multiply the electrical output. The researchers designed two power generation units. One is a box of 34 WT-TENG units, which was placed in the sea to collect ocean wave energy. The other is a wristband composed of 10 WT-TENG units, which a researcher wore while swinging her arms to harvest body motion energy. The peak power output in both tests was enough to drive 150 LED light bulbs.

Prof. Zi Yunlong stated, "Previous designs of ocean energy harvesters have been equipped with electromagnetic-based generators, which are large and heavy, and which only generate power if the frequency of ocean waves reaches a sufficiently high level. Our latest research has overcome these technical hurdles and will promote the use of nanogenerators, especially in 'blue energy' harvesting, offering a new direction for the development of renewable energy to achieve carbon neutrality."

Related research results were recently published in the internationally renowned journal Advanced Energy Materials. The first author of the article is Postdoctoral Fellow Dr. Wu Hao, and Professor Zi Yunlong is the only corresponding author. Professor Wang Zuankai from the City University of Hong Kong participated in the guidance of this work.

Credit: 
The Chinese University of Hong Kong

How gamblers plan their actions to maximize rewards

In their pursuit of maximum reward, people suffering from gambling disorder rely less on exploring new but potentially better strategies, and more on proven courses of action that have already led to success in the past. The neurotransmitter dopamine in the brain may play an important role in this, a study in biological psychology conducted at the University of Cologne's Faculty of Human Sciences by Professor Dr Jan Peters and Dr Antonius Wiehler suggests. The article 'Attenuated directed exploration during reinforcement learning in gambling disorder' has appeared in the latest edition of the Journal of Neuroscience, published by the Society for Neuroscience.

Gambling disorder affects slightly less than one percent of the population - often men - and is in some ways similar to substance abuse disorders. Scientists suspect that this disorder, like other addiction disorders, is associated with changes in the dopamine system. The brain's reward system releases the neurotransmitter dopamine during gambling. Since dopamine is important for the planning and control of actions, among other things, it could also affect strategic learning processes.

'Gambling disorder is of scientific interest among other things because it is an addiction disorder that is not tied to a specific substance', remarked Professor Dr Jan Peters, one of the authors. The psychologists examined how gamblers plan their actions to maximize rewards - how their so-called reinforcement learning works. In the study, participants had to decide between already proven options and new ones in order to win as much as possible. At the same time, the scientists used functional magnetic resonance imaging to measure activity in regions of the brain that are important for processing reward stimuli and planning actions.

Twenty-three habitual gamblers and twenty-three control subjects (all male) performed what is known as a 'four-armed bandit task'. The name of this type of decision-making task refers to slot machines, known colloquially as 'one-armed bandits'. In each run, the participants had to choose between four options ('four-armed bandit', in this case four coloured squares), whose winnings slowly changed. Different strategies can be employed here. For example, one can choose the option that yielded the highest profit last time. However, it is also possible to choose the option where the chance of winning is most uncertain - the option promising maximum information gain. The latter is also called directed (or uncertainty-based) exploration.
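To make the distinction concrete, the following is a minimal Python sketch of an uncertainty-bonus ("directed exploration") strategy in such a bandit. It is not the authors' model: the value-tracking rule, the noise settings and the bonus weight phi are illustrative assumptions, and attenuated directed exploration would correspond to a smaller phi.

    # Minimal sketch of directed (uncertainty-based) exploration in a four-armed bandit.
    # Each arm's payout is tracked with a mean and an uncertainty; the choice score adds
    # an "information bonus" proportional to that uncertainty. All parameters are invented.
    import numpy as np

    rng = np.random.default_rng(0)

    n_arms, n_trials = 4, 300
    true_means = rng.normal(50, 10, n_arms)   # drifting payouts, simplified as fixed here
    mean_est = np.full(n_arms, 50.0)          # estimated payout of each arm
    var_est = np.full(n_arms, 100.0)          # uncertainty about each estimate
    obs_var = 16.0                            # assumed observation noise
    phi = 2.0                                 # weight of the directed-exploration bonus

    for t in range(n_trials):
        # Directed exploration: score = estimated value + uncertainty bonus
        score = mean_est + phi * np.sqrt(var_est)
        choice = int(np.argmax(score))
        reward = rng.normal(true_means[choice], np.sqrt(obs_var))

        # Kalman-style update of the chosen arm's mean and uncertainty
        gain = var_est[choice] / (var_est[choice] + obs_var)
        mean_est[choice] += gain * (reward - mean_est[choice])
        var_est[choice] *= (1 - gain)

        # Unchosen arms become more uncertain again, since payouts drift over time
        var_est[np.arange(n_arms) != choice] += 4.0

    print("final value estimates:", np.round(mean_est, 1))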

Both groups won about the same amount of money and exhibited directed exploration. However, this was significantly less pronounced in the group of gamblers than in the control group. These results indicate that gamblers are less adaptive to changing environments during reinforcement learning. At the neural level, gamblers showed changes in a network of brain regions that has been associated with directed exploration in previous studies. In one previous study by the two biological psychologists, pharmacologically raising the dopamine level in healthy participants had shown a very similar effect on behaviour. 'Although this indicates that dopamine might also play an important role in the reduction of directed exploration in gamblers, more research would have to be conducted to prove such a correlation,' said Dr Antonius Wiehler.

Further research also needs to clarify whether the observed changes in decision-making behaviour in gamblers are a risk factor for, or a consequence of, regular gambling.

Credit: 
University of Cologne

A leap forward in research on CAR T cell therapy

In cancer immunotherapy, cells in the patient's own immune system are activated to attack cancer cells. CAR T cell therapy has been one of the most significant recent advances in immunotherapies targeted at cancer.

In CAR T cell therapy, T cells are extracted from the patient for genetic modification: a chimeric antigen receptor (CAR) is introduced into the cells using a viral vector, helping the T cells better identify and kill cancer cells. When the antigen receptors recognise the target surface structure on the patient's cells, the T cells start multiplying and killing those target cells.

CAR T cell therapy was introduced in Finland in 2018, and it has since been used to treat patients suffering from leukaemia and lymphomas.

So far, applying CAR T cell therapy to solid tumours has been difficult: targeting the therapy at just the tumour is hard when the cancer type is not associated with any cancer-specific surface structure. In many cancer types there is an abundance of a particular protein on the tumour's surface, but because the protein also occurs at low levels in normal tissue, conventional CAR T cells cannot discriminate between target protein levels. As a result, the genetically modified cells quickly attack healthy cells and organs as well, which can cause fatal adverse effects.

A study recently published in the journal Science has found a solution to applying CAR T cell therapy to solid tumours as well: in a collaboration, American and Finnish researchers identified a new way of programming CAR T cells so that they kill only cancer cells, leaving alone healthy cells that carry the same marker protein as the cancer cells.

New technique based on ultrasensitive identification of HER2 cells, further investigation underway

HER2 is a protein characteristic of, among others, breast, ovarian and abdominal cancers. The protein can occur in great numbers on the surface of tumour cells because, as a result of gene amplification, HER2 expression can be greatly increased in tumours.

A new CAR T cell engineering technique developed by the researchers is based on a two-step identification process of HER2 positive cells. Thanks to the engineering, the researchers were able to produce a response where CAR T cells kill only the cancer cells in the cancer tissue.

"Our solution requires the preliminary identification of the surface structures associated with the cancer. When the preliminary recognition ability that induces the CAR construct is adjusted to require a binding affinity that is different from the affinity used by CAR to direct the killing of these cells, an extremely accurate ability to differentiate between cells based on the amount of target protein on their surface can be programmed in this two-step 'circuit' which controls the function of killer T cells," says Professor of Virology Kalle Saksela from the University of Helsinki.

Further studies for the application of the technique are already ongoing. Postdoctoral Researcher Anna Mäkelä, who works at Professor Saksela's laboratory, is coordinating a project funded by the Academy of Finland investigating the use of CAR T cell therapy on various cancer types and their surface structures.

"We are very excited about these results, and we are currently developing the technique so that it could be used to treat ovarian cancer. As the work progresses, the aim is to apply the technique itself and the targeting molecules of CAR constructs even more broadly to malignant solid tumours. Our goal is to develop 'multi-warhead missiles', against which cancer cells will find it difficult to develop resistance," Mäkelä says.

Credit: 
University of Helsinki

Study: Black bears are eating pumas' lunch

image: Pumas in Mendocino National Forest killed adult deer more often in seasons when black bears were most active, researchers found. The team also observed black bears eating the remains of adult deer killed by pumas.

Image: 
Photo by Max Allen

CHAMPAIGN, Ill. -- A camera-trap study in the Mendocino National Forest in Northern California reveals that black bears are adept at finding and stealing the remains of adult deer killed by pumas. This "kleptoparasitism" by bears, as scientists call it, reduces the calories pumas consume in seasons when the bears are most active. Perhaps in response to this shortage, the pumas hunt more often and eat more small game when the bears are not in hibernation.

The findings are published in the journal Basic and Applied Ecology.

Pumas, also known as mountain lions or cougars, are apex predators, but this doesn't mean they can't be threatened by other carnivores, said study lead author Max Allen, a research scientist at the Illinois Natural History Survey who studies big cats and other carnivorous mammals.

"Bears are dominant scavengers, and their large body size means that they can take carcasses from apex predators," Allen said. He and his colleagues became interested in this phenomenon when they saw signs of bear scat and bear claw marks near puma kills.

The researchers used GPS collars to track seven pumas across a 386-square-mile territory over a period of two years. Whenever a puma made a kill, it would repeatedly visit or spend a lot of time at that location. The researchers visited those sites to document the type of animals the puma had killed and to look for signs of bears. They also set up camera traps at many of the kill sites to determine which animals were eating the remains.

The team documented 352 puma kills, of which 64 were animals other than deer. The smaller prey animals included dozens of squirrels, birds and rabbits, but also a coyote, two gray foxes, a fisher and two black bears. The pumas also went after fawns, which they can eat quickly, likely before a bear discovers them, Allen said.

The bears discovered kills of adult deer within about two days, cutting the pumas' feeding time at a deer carcass from 5-7 days in winter to about two days when the bears showed up.

The study found the highest frequency of puma kills ever reported. And the kill rate increased when bears were most active.

"There were only about 0.68 mountain lions per 100 square kilometers in our study area," Allen said. "An average number would be two to three. The average home range for a female in Santa Cruz is between 30 and 35 square kilometers. But in Mendocino it was over 200."

Despite the large territory available to them, pumas were not getting the full benefit of their kills. The amount they consumed varied month to month, from more than 190 pounds of meat in January, when bears were least active, to less than 110 pounds in April. Bear scavenging of adult deer carcasses was highest in the warmer months, but the researchers found evidence of bears at puma kills every month of the year.

"We found evidence to suggest that the bears are having an impact on pumas and how often they're killing deer," Allen said. "When a bear pushes a puma off of a carcass, the puma runs away, and the bear eats the deer. The puma then has to make another kill in order to get the energy it needs."

In the absence of black bears, pumas made a kill about once a week, Allen said.

"But in the presence of bears, they're killing every five to six days," he said. "They have to work harder, and they're getting less nutrition overall."

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

SFU lab one step closer to understanding how life started on Earth

video: At the dawn of life, polymerases made from RNA likely replicated RNA genomes and maintained metabolic RNA enzymes essential for life.

The clamping RNA polymerase ribozyme uses a specificity primer to recognize an RNA promoter.

Once localized, the polymerase rearranges into a processive complex, capable of copying extended regions of template.

This promoter recognition and processivity is similar to many aspects of modern promoter dependent transcription and demonstrates how early in evolution RNA genes might have been replicated and expressed.

Image: 
Simon Fraser University

How did life begin on Earth and could it exist elsewhere? Researchers at Simon Fraser University have isolated a genetic clue--an enzyme known as an RNA polymerase--that provides new insights about the origins of life. The research is published today in the journal Science.

Researchers in SFU molecular biology and biochemistry professor Peter Unrau's laboratory are working to advance the RNA World Hypothesis in answer to fundamental questions on life's beginnings.

The hypothesis suggests that life on our planet began with self-replicating ribonucleic acid (RNA) molecules, capable of not only carrying genetic information but also driving chemical reactions essential for life, prior to the evolution of deoxyribonucleic acid (DNA) and proteins, which now perform both functions within our cells.

Through a process of in vitro evolution in the lab, the team has isolated a promoter-based RNA polymerase ribozyme--an enzyme capable of synthesizing RNA using RNA as a template--that has processive clamping abilities that are equivalent to modern-day protein polymerases.

"This RNA polymerase has many of the features of modern protein polymerases; it was evolved to recognize an RNA promoter, and subsequently, to copy RNA processively," says Unrau. "What our finding implies is that similar RNA enzymes early in the evolution of life could also have manifested such sophisticated biological features."

There is evidence that suggests RNA came before DNA and proteins. For example, the ribosome, the 'machine' that makes proteins in our cells, is built from RNA. Yet proteins are better at catalyzing reactions.

This has led experts to theorize that this machine was an invention of the late RNA world that was never discarded by evolution.

DNA is also made from RNA. Since RNA is a jack-of-all-trades and can perform the functions of both protein and DNA, this suggests that DNA and proteins evolved later as an 'upgrade' to enhance cellular functions originally supported by RNA.

The clamping polymerase ribozyme discovered by Unrau's laboratory, located within SFU's Burnaby campus, indicates that RNA replication by RNA catalysts indeed might have been possible in such primitive life.

Unrau and his team's long-term goal is to build a self-evolving system in the lab. This would involve creating an RNA polymerase ribozyme that can also replicate and sustain itself, to gain a deeper understanding of how early RNA-based organisms came into being.

"If we are able to create a living and evolving RNA-based system in the laboratory we'd have made something quite remarkable, something that has probably has never existed since the dawn of life on this planet," says Unrau, who wrote the Science article with SFU PhD student Razvan Cojocaru.

"By understanding the fundamental complexity of life, in the laboratory, we can start to estimate the chances of life on other planets and determine the likelihood that planets such as Mars either had or still have the potential to harbor life."

Credit: 
Simon Fraser University

Demonstrating the world's fastest spintronics p-bit

image: A top-view scanning electron microscopy image of a magnetic tunnel junction device.

Image: 
K. Hayakawa et al.

Tohoku University researchers have, for the first time, developed the technology for the nanosecond operation of the spintronics-based probabilistic bit (p-bit) - dubbed the poor man's quantum bit (q-bit).

The late physicist R.P. Feynman envisioned a probabilistic computer: a computer that is capable of dealing with probabilities at scale to enable efficient computing.

"Using spintronics, our latest technology made the first step in realizing Feynman's vision," said Shun Kanai, professor at the Research Institute of Electrical Communication at Tohoku University and lead author of the study.

Magnetic tunnel junctions (MTJs) are the key component of non-volatile memory, or MRAM, a mass-produced memory technology that uses magnetization to store information. There, thermal fluctuation typically poses a threat to the stable storage of information.

P-bits, on the other hand, function with these thermal fluctuations in thermally unstable (stochastic) MTJs. Prior collaborative research between Tohoku University and Purdue University demonstrated a spintronics-based probabilistic computer at room temperature consisting of stochastic MTJs with millisecond-long relaxation times.

To make probabilistic computers a viable technology, it is necessary to develop stochastic MTJs with much shorter relaxation times, which reduce the fluctuation timescale of the p-bit and thereby increase computation speed and accuracy.
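For readers unfamiliar with the concept, the following minimal Python sketch illustrates the p-bit abstraction in software; it is not the Tohoku hardware or its model, and the couplings and biases are invented for illustration. Each p-bit fluctuates between -1 and +1 with a probability set by the other p-bits, and a shorter physical relaxation time simply means more such updates per second.

    # Minimal software sketch of a small p-bit network (hypothetical couplings J, biases h).
    # Each update flips one p-bit to +/-1 with a tanh-shaped probability of its input;
    # run long enough, the network samples a Boltzmann distribution over states.
    import numpy as np

    rng = np.random.default_rng(1)

    J = np.array([[0.0, 1.0, -0.5],
                  [1.0, 0.0, 0.8],
                  [-0.5, 0.8, 0.0]])
    h = np.array([0.2, -0.1, 0.0])
    m = rng.choice([-1, 1], size=3)       # current p-bit states

    samples = []
    for step in range(5000):
        i = rng.integers(3)               # update one p-bit at a time
        I_i = J[i] @ m + h[i]             # input from the rest of the network
        # Canonical p-bit update: P(m_i = +1) = (1 + tanh(I_i)) / 2
        m[i] = 1 if rng.uniform(-1, 1) < np.tanh(I_i) else -1
        samples.append(m.copy())

    print("average state of each p-bit:", np.mean(samples, axis=0).round(2))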

The Tohoku University research group, comprising Kanai, professor Hideo Ohno (the current Tohoku University president) and professor Shunsuke Fukami, produced a nanoscale MTJ device with an in-plane magnetic easy axis (Fig. 1). The magnetization direction updates every 8 nanoseconds on average - 100 times faster than the previous world record (Fig. 2).

The group explained the mechanism behind this extremely short relaxation time by invoking entropy - a physical quantity representing the stochasticity of a system that had previously not been considered in magnetization dynamics. Deriving a universal equation governing the entropy in magnetization dynamics, they discovered that the entropy increases rapidly in MTJs with an in-plane easy axis and larger magnitudes of perpendicular magnetic anisotropy. For this reason, the group intentionally employed an in-plane magnetic easy axis to achieve shorter relaxation times.

"The developed MTJ is compatible with current semiconductor back-end-of-line processes and shows substantial promise for the future realization of high-performance probabilistic computers," added Kanai. "Our theoretical framework of magnetization dynamics including entropy also has broad scientific implication, ultimately showing the potential of spintronics to contribute to debatable issues in statistical physics."

Credit: 
Tohoku University

High-efficiency pulse compression established on solitons in nonlinear Kerr resonators

image: An illustrative scenario for the high-efficiency pulse compressor established on solitons in a nonlinear Kerr resonator consisting of periodic layered Kerr media.

Image: 
by Sheng Zhang, Zongyuan Fu, Bingbing Zhu, Guangyu Fan, Yudong Chen, Shunjia Wang, Yaxin Liu, Andrius Baltuska, Cheng Jin, Chuanshan Tian & Zhensheng Tao

Generating intense ultrashort pulses with high spatial quality has opened up possibilities for ultrafast and strong-field science. The field is so important that the 2018 Nobel Prize in Physics was awarded to Dr. Strickland and Dr. Mourou for inventing a technique called chirped pulse amplification, which drives numerous ultrafast lasers worldwide. With the great advances of the last decade, Yb-based ultrafast lasers have become highly popular because they exhibit exceptional thermal efficiency, are low in cost and are highly flexible in adjusting pulse energies and repetition rates. However, the pulse durations from these lasers are usually no shorter than 100 fs or even 1 ps, so external pulse compression is required for many applications. The existing supercontinuum generation (SCG) and pulse compression techniques are typically low in efficiency. Many of them require vacuum systems and vacuum-gas interfaces and are therefore expensive and complex to maintain. As a result, these techniques are still limited to a few specialized laboratories and cannot be widely used in physics, femtochemistry and femtobiology labs, which represent the major applications of ultrafast lasers.

In a new paper published in Light: Science & Applications, a team of Chinese and Austrian scientists, led by Professor Zhensheng Tao from the State Key Laboratory of Surface Physics and Department of Physics, Fudan University, Shanghai, China, proposed and demonstrated that the formation of optical solitons during the propagation of strong ultrafast laser pulses in periodic layered Kerr media (PLKM) can serve as a simple, reliable and cost-effective solution for SCG and pulse compression. They found that the solitons form through a balance between nonlinear Kerr self-focusing and the linear diffraction of the laser beam, which supports sustained, long-distance nonlinear light-matter interaction and hence enhances the SCG efficiency. More interestingly, by confining the beam propagation to these solitary modes, high spatial quality and spatio-spectral homogeneity can be achieved, reaching >85% compression efficiency. To demonstrate the method, the scientists used the compressed pulses to drive a highly nonlinear optical process, high harmonic generation, producing bright and coherent extreme-ultraviolet and soft X-ray light from a gas target. The high harmonic process is extremely sensitive to the spatio-temporal quality of the compressed pulses, and the result clearly demonstrated the great potential of this method. It is also worth mentioning that the total cost of constructing the PLKM SCG device is only ~$200. The reported method and technique will pave the way for high-efficiency, reliable and cost-effective SCG and pulse compression of ultrafast lasers, which can be widely used in labs of ultrafast physics, chemistry and biology.

The high-efficiency, low-cost SCG and pulse compression method is centred on studies of the formation and stability of the solitary states in a PLKM nonlinear resonator. With the solitary modes, the propagation of an intense laser beam can be manipulated to generate the desired broad spectrum and high spatial quality. The method can be applied to ultrafast lasers with various pulse energies and repetition rates. The scientists summarize the advantages of their method:
"Compared to existing supercontinuum generation and pulse compression methods, the method we proposed and demonstrated has four advantages: (1) it is very simple and cost-effective to construct and maintain, because it does not require vacuum systems or beam-pointing stabilization setups; (2) it is very flexible and can be applied to ultrafast lasers with various energies and powers; (3) it has very high efficiency, as high as 85%; and (4) it is very stable. We believe this method can be widely introduced to many physics, chemistry and biology labs, for scientists who use ultrafast lasers but do not specialize in constructing broadband laser systems."

Credit: 
Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS

Disability highest for schizophrenia and personality disorders

image: Disability highest for schizophrenia and personality disorders

Image: 
Niels Bohr Professorship Psychiatric Epidemiology

Schizophrenia and personality disorders are the most disabling mental health conditions to live with, according to scientists from The University of Queensland.

A Danish-Australian research team studied a cohort of 6.9 million Danish residents in the Danish Psychiatric Central Research Register to understand the burden of disability associated with 18 mental and substance use disorders.

Professor John McGrath from UQ's Queensland Brain Institute and the Queensland Centre for Mental Health Research said the data was used to develop a new method for measuring disability that takes comorbidities into account.

"Traditionally the impact of mental disorders has been presented for an entire nation, but in this study, we focussed on people with different types of mental and substance use disorders at an individual level," Professor McGrath said.

"We found that schizophrenia and personality disorders were the most disabling mental conditions and showed how disorders like autism, anxiety disorders and schizophrenia contribute to disability at different ages.

"Our new measure known as the Health Loss Proportion (HeLP) allows us to measure the average disability for different disorders at the individual level, which means that individuals who experience more inherent disability, and more comorbid conditions, will have a higher HeLP weighting, and therefore a higher measure of disability."

Professor McGrath said the new method complemented methods being used by the Global Burden of Disease Study to help policymakers and clinicians plan health system responses.

"The Global Burden of Disease Study uses top-down summary statistics to estimate the impact of mental disorders on societies, while we have used a 'bottom-up' method based on Danish registers to estimate how mental disorders impact individuals across their life span," Professor McGrath said.

The team hopes that future register-based studies will create new knowledge about how comorbidity contributes to global disease burdens and apply this new method to disorders of interest.

"People with mental disorders lead valued and productive lives, despite a lack of social and economic support for their unmet needs," Professor McGrath said.

"We hope our findings ensure more disabling disorders are given adequate attention, support, and funding."

Credit: 
University of Queensland

The trouble of being tall

image: Giraffes are in general very alert and exploit their height advantage to scan the horizon using their excellent eyesight

Image: 
Mogens Trolle

The giraffe is a truly puzzling animal. With its exceptional anatomy and suite of evolutionary adaptations, the giraffe is an outstanding case of animal evolution and physiology. Now, an international team of researchers from the University of Copenhagen and Northwestern Polytechnical University in China have produced a high-quality genome from the giraffe and investigated which genes are likely to be responsible for its unique biological features.

The extraordinary stature of the giraffe has led to a long list of physiological co-adaptations. The blood pressure of the giraffe, for instance, is twice as high as in humans and most other mammals to allow a steady blood supply to the lofty head. How does the giraffe avoid the usual side effects of high blood pressure, such as severe damage to the cardiovascular system or strokes?

The team discovered a particular gene – known as FGFRL1 – that has undergone many changes in the giraffe compared to all other animals. Using sophisticated gene editing techniques they introduced giraffe-specific FGFRL1 mutations into lab mice. Interestingly, the giraffe-type mice differed from normal mice in two important aspects: they suffered less cardiovascular and organ damage when treated with a blood pressure increasing drug, and they grew more compact and denser bones.

- “Both of these changes are directly related to the unique physiological features of the giraffe – coping with high blood pressure and maintaining compact and strong bones, despite growing them faster than any other mammal, to form the elongated neck and legs.”, says Rasmus Heller from the Department of Biology, University of Copenhagen, one of the lead authors on the study.

Giraffes can’t get no sleep

While jumping out of bed might be an effortless and elegant affair for (some) humans, this is definitely not the case for the giraffe. Merely standing up is a lengthy and awkward procedure, let alone getting up and running away from a ferocious predator. Giraffes have therefore evolved to spend much less time sleeping than most other mammals.

- Rasmus Heller elaborates: “We found that key genes regulating the circadian rhythm and sleep were under strong selection in giraffes, possibly allowing the giraffe a more interrupted sleep-wake cycle than other mammals”.

In line with research in other animals, an evolutionary trade-off also seems to shape their sensory perception, Rasmus continues:

- “Giraffes are in general very alert and exploit their height advantage to scan the horizon using their excellent eyesight. Conversely, they have lost many genes related to olfaction, which is probably related to a radically diluted presence of scents at 5m compared to ground level”.

A model of evolutionary mechanisms—and perhaps even human medicine?

These findings provide insights into basic modes of evolution. The dual effects of the strongly selected FGFRL1 gene are compatible with the phenomenon that one gene can affect several different aspects of the phenotype, so-called evolutionary pleiotropy. Pleiotropy is particularly relevant for explaining unusually large phenotypic changes, because such changes often require that a suite of traits change within a short evolutionary time. Pleiotropy could therefore provide one solution to the riddle of how evolution achieved the many co-dependent changes needed to form an animal as extreme as the giraffe. Furthermore, the findings even identify FGFRL1 as a possible target of research into human cardiovascular disease.

- “These results showcase that animals are interesting models, not only to understand the basic principles of evolution, but also to help us understand which genes influence some of the phenotypes we are really interested in – such as those related to disease. However, it’s worth pointing out that genetic variants do not necessarily have the same phenotypic effect in different species, and that phenotypes are affected by many other things than variation in coding regions.”, says Qiang Qiu from Northwestern Polytechnical University, another lead author on the study.

The results have just been published in the prestigious scientific journal, Science Advances.

Credit: 
University of Copenhagen - Faculty of Science

Missing baryons found in far-out reaches of galactic halos

image: A new study has found that a share of particles that has been challenging to locate is most likely sprinkled across the distant bounds of galaxy halos. The study found some of these particles of baryonic matter are located up to 6 million light-years from their galactic centers. This color-rendered image shows the halo of the Andromeda galaxy, which is the Milky Way's largest galactic neighbor.

Image: 
NASA

Researchers have channeled the universe's earliest light - a relic of the universe's formation known as the cosmic microwave background (CMB) - to solve a missing-matter mystery and learn new things about galaxy formation. Their work could also help us to better understand dark energy and test Einstein's theory of general relativity by providing new details about the rate at which galaxies are moving toward us or away from us.

Invisible dark matter and dark energy account for about 95% of the universe's total mass and energy, and the majority of the 5% that is considered ordinary matter is also largely unseen, such as the gases at the outskirts of galaxies that comprise their so-called halos.

Most of this ordinary matter is made up of neutrons and protons - particles called baryons that exist in the nuclei of atoms like hydrogen and helium. Only about 10% of baryonic matter is in the form of stars, and most of the rest inhabits the space between galaxies in strands of hot, spread-out matter known as the warm-hot intergalactic medium, or WHIM.

Because baryons are so spread out in space, it has been difficult for scientists to get a clear picture of their location and density around galaxies. Because of this incomplete picture of where ordinary matter resides, most of the universe's baryons can be considered as "missing."

Now, an international team of researchers, with key contributions from physicists at the U.S. Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and Cornell University, has mapped the location of these missing baryons by providing the best measurements, to date, of their location and density around groups of galaxies.

It turns out the baryons are in galaxy halos after all, and that these halos extend much farther than popular models had predicted. While most of an individual galaxy's stars are typically contained within a region that is about 100,000 light-years from the galaxy's center, these measurements show that for a given group of galaxies, the most distant baryons can extend about 6 million light-years from their center.

Paradoxically, this missing matter is even more challenging to map out than dark matter, which we can observe indirectly through its gravitational effects on normal matter. Dark matter is the unknown stuff that makes up about 27% of the universe; and dark energy, which is driving matter in the universe apart at an accelerating rate, makes up about 68% of the universe.

"Only a few percent of ordinary matter is in the form of stars. Most of it is in the form of gas that is generally too faint, too diffuse to be able to detect," said Emmanuel Schaan, Chamberlain Postdoctoral Fellow in Berkeley Lab's Physics Division and lead author for one of two papers (https://journals.aps.org/prd/abstract/10.1103/PhysRevD.103.063513) about the missing baryons, published March 15 in the journal Physical Review D (view the other paper at this link: https://journals.aps.org/prd/abstract/10.1103/PhysRevD.103.063514).

The researchers made use of a process known as the Sunyaev-Zel'dovich effect, in which CMB photons get a boost in energy as they scatter off the hot gas surrounding galaxy clusters.

"This is a great opportunity to look beyond galaxy positions and at galaxy velocities," said Simone Ferraro, a Divisional Fellow in Berkeley Lab's Physics Division who participated in both studies. "Our measurements contain a lot of cosmological information about how fast these galaxies move. It will complement measurements that other observatories make, and make them even more powerful," he said.

A team of researchers at Cornell University - research associate Stefania Amodeo, assistant professor Nicholas Battaglia, and graduate student Emily Moser - led the modeling and interpretation of the measurements, and explored their consequences for weak gravitational lensing and galaxy formation.

The computer algorithms that the researchers developed should prove useful in analyzing "weak lensing" data from future high-precision experiments. Lensing phenomena occur when massive objects such as galaxies and galaxy clusters lie roughly along a particular line of sight, so that their gravity bends and distorts the light from the more distant object.

Weak lensing is one of the main techniques that scientists use to understand the origin and evolution of the universe, including the study of dark matter and dark energy. Learning the location and distribution of baryonic matter brings this data within reach.

"These measurements have profound implications for weak lensing, and we expect this technique to be very effective at calibrating future weak-lensing surveys," Ferraro said.

Schaan noted, "We also get information that's relevant for galaxy formation."

In the latest studies, the researchers relied on a galaxy dataset from the ground-based Baryon Oscillation Spectroscopic Survey (BOSS) in New Mexico, and CMB data from the Atacama Cosmology Telescope (ACT) in Chile and the European Space Agency's space-based Planck telescope. Berkeley Lab played a leading role in the BOSS mapping effort, and developed the computational architectures necessary for Planck data processing at NERSC.

The algorithms they created benefited from analysis on the Cori supercomputer at Berkeley Lab's DOE-funded National Energy Research Scientific Computing Center (NERSC). The algorithms counted electrons, allowing them to ignore the chemical composition of the gases.

"It's like a watermark on a bank note," Schaan explained. "If you put it in front of a backlight then the watermark appears as a shadow. For us the backlight is the cosmic microwave background. It serves to illuminate the gas from behind, so we can see the shadow as the CMB light travels through that gas."

Ferraro said, "It's the first really high-significance measurement that really pins down where the gas was."

The "ThumbStack" software the researchers created provides a new picture of galaxy halos: massive, fuzzy spherical regions extending far beyond the starlit areas. The software is effective at mapping these halos even for groups of galaxies that have low-mass halos and for those that are moving away from us very quickly (known as "high-redshift" galaxies).
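The stacking idea behind this kind of measurement can be sketched in a few lines of Python. The following is only a schematic illustration of aperture-photometry stacking on a toy map, not the actual ThumbStack pipeline; the map, catalogue and aperture sizes are invented placeholders.

    # Schematic stacking sketch: cut out a small CMB patch around every galaxy position,
    # apply a compensated aperture filter (mean inside a disc minus mean in the surrounding
    # ring), and average the results so the faint halo signal adds up while unrelated
    # CMB fluctuations average away. All inputs here are toy placeholders.
    import numpy as np

    def aperture_photometry(cutout, r_disc, r_ring):
        """Mean inside a disc of radius r_disc minus mean in the ring out to r_ring (pixels)."""
        n = cutout.shape[0]
        y, x = np.indices((n, n))
        r = np.hypot(x - n // 2, y - n // 2)
        disc = cutout[r <= r_disc].mean()
        ring = cutout[(r > r_disc) & (r <= r_ring)].mean()
        return disc - ring

    def stack_at_positions(cmb_map, pixel_positions, half_size=20, r_disc=8, r_ring=14):
        """Average the aperture-filtered signal over all galaxy positions."""
        signals = []
        for (i, j) in pixel_positions:
            cutout = cmb_map[i - half_size:i + half_size, j - half_size:j + half_size]
            if cutout.shape == (2 * half_size, 2 * half_size):
                signals.append(aperture_photometry(cutout, r_disc, r_ring))
        return np.mean(signals)

    # Toy usage: a pure-noise map stacked at random positions should average to roughly zero.
    rng = np.random.default_rng(2)
    fake_map = rng.normal(0, 1, (512, 512))
    positions = rng.integers(30, 482, size=(200, 2))
    print(stack_at_positions(fake_map, positions))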

New experiments that should benefit from the halo-mapping tool include the Dark Energy Spectroscopic Instrument, the Vera Rubin Observatory, the Nancy Grace Roman Space Telescope, and the Euclid space telescope.

NERSC is a DOE Office of Science user facility.

Credit: 
DOE/Lawrence Berkeley National Laboratory

Modelling speed-ups in nutrient-seeking bacteria

Many bacteria swim towards nutrients by rotating the helix-shaped flagella attached to their bodies. As they move, the cells can either 'run' in a straight line, or 'tumble' by varying the rotational directions of their flagella, causing their paths to randomly change course. Through a process named 'chemotaxis,' bacteria can decrease their rate of tumbling at higher concentrations of nutrients, while maintaining their swimming speeds. In more hospitable environments like the gut, this helps them to seek out nutrients more easily. However, in more nutrient-sparse environments, some species of bacteria will also perform 'chemokinesis': increasing their swim speeds as nutrient concentrations increase, without changing their tumbling rates. Through new research published in EPJ E, Theresa Jakuszeit and a team at the University of Cambridge led by Ottavio Croze produced a model which accurately accounts for the combined influences of these two motions.

The team's findings deliver new insights into how self-swimming microbes survive, particularly in harsher environments like soils and oceans. Previously, studies have shown how chemokinesis allows bacteria to band around nutrient sources, respond quickly to short bursts of nutrients, and even form mutually beneficial relationships with algae. So far, however, none of them have directly measured how bacterial swim speeds can vary with nutrient concentration.

Starting from mathematical equations describing run-and-tumble dynamics, Croze's team extended a widely used model for chemotaxis to incorporate chemokinesis. They then applied the new model to predict the dynamics of bacterial populations within the chemical gradients generated by nutrient distributions used in previous experiments. Through their approach, the researchers showed numerically how a combination of both motions can enhance the responses of populations compared with chemotaxis alone. They also presented more accurate predictions of how bacteria respond to nutrient distributions - including sources which emit nutrients sporadically. This allowed them to better assess the biological benefits of motility.
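A minimal run-and-tumble sketch in Python illustrates how the two responses can be combined; it is not the authors' model, and the rates, speeds and the linear nutrient field are invented for illustration.

    # Run-and-tumble sketch combining chemotaxis and chemokinesis: the tumble probability
    # drops when the cell has been moving up the nutrient gradient, and the swim speed
    # grows with the local nutrient concentration. All parameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(3)

    def concentration(pos):
        return max(pos[0], 0.0)                     # nutrient increases linearly along x

    dt, base_speed, base_tumble = 0.05, 20.0, 1.0   # s, um/s, 1/s (assumed values)
    pos = np.zeros(2)
    angle = rng.uniform(0, 2 * np.pi)
    prev_c = concentration(pos)

    for step in range(2000):                        # 100 s of simulated swimming
        c = concentration(pos)
        # Chemokinesis: speed increases with concentration (saturating form assumed)
        speed = base_speed * (1.0 + c / (c + 100.0))
        # Chemotaxis: tumble less often while the concentration has been increasing
        tumble_rate = base_tumble * (0.3 if c > prev_c else 1.0)
        if rng.random() < tumble_rate * dt:
            angle = rng.uniform(0, 2 * np.pi)       # tumble: pick a new random direction
        pos += speed * dt * np.array([np.cos(angle), np.sin(angle)])
        prev_c = c

    # On average the cell drifts toward larger x, i.e. up the nutrient gradient.
    print("final position (um):", np.round(pos, 1))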

Credit: 
Springer

Losing rivers

Water is an ephemeral thing. It can emerge from an isolated spring, as if by magic, to birth a babbling brook. It can also course through a mighty river, seeping into the soil until all that remains downstream is a shady arroyo, the nearby trees offering the only hint of where the water has gone.

The interplay between surface water and groundwater is often overlooked by those who use this vital resource due to the difficulty of studying it. Assistant professors Scott Jasechko and Debra Perrone, of UC Santa Barbara, and their colleagues leveraged their enormous database of groundwater measurements to investigate the interaction between these related resources. Their results, published in Nature, indicate that many more rivers across the United States may be leaking water into the ground than previously realized.

In many places surface waters and groundwaters connect, while in others they're separated by impermeable rock layers. It depends on the underlying geology. But where they do intermingle, water can transition between flowing above and below ground.

"Gaining rivers" receive water from the surrounding groundwater, while "losing rivers" seep into the underlying aquifer. Scientists didn't have a good understanding of the prevalence of each of these conditions on a continental scale. Simply put, no one had previously stitched together so many measurements of groundwater, explained Jasechko, the study's co-lead author.

Gaining and losing rivers

Waterways can gain water from the surrounding aquifer or leak water into the ground depending on the conditions.

Typical groundwater studies include water level measurements from a few hundred to 1,000 wells. This study encompasses 4.2 million.

Perrone and Jasechko devoted years to compiling data from 64 agencies across the U.S. and analyzing the results. "Compiling these data was a massive undertaking. We collected millions of datapoints and reviewed hundreds of papers over the course of six years," Perrone said.

The resulting database has precipitated a number of the team's subsequent studies. "We can use this extensive dataset in innovative ways to answer questions that we have not been able to address previously," she added.

For this paper, Jasechko, Perrone and their coauthors compared water levels in wells to the surface of the nearest stream. "We apply a simple method to a large dataset," Jasechko said. "We identify wells with water levels that lie below the nearest stream, implying that these nearby streams could leak into the subsurface if it is sufficiently permeable."

The team found that nearly two-thirds of the wells had water levels below the nearest stream. This creates a gradient that can drive water from the river channel into the aquifer beneath.
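The core of that comparison can be sketched in a few lines of Python. This is a simplified illustration rather than the published analysis code; the column names, elevations and the two-metre margin (mentioned later in the article) are used only as examples.

    # Simplified sketch of the well-versus-stream comparison: a well is flagged as lying
    # near a potentially "losing" reach when its measured water level sits below the
    # elevation of the nearest stream surface. All values and column names are examples.
    import pandas as pd

    wells = pd.DataFrame({
        "well_id": [1, 2, 3],
        "water_level_elev_m": [120.4, 98.7, 87.2],      # groundwater elevation in the well
        "nearest_stream_elev_m": [119.0, 101.5, 90.0],  # water surface of the nearest stream
    })

    # Negative head difference: the water table is below the stream, so the stream can
    # leak downward if its bed is sufficiently permeable.
    wells["head_diff_m"] = wells["water_level_elev_m"] - wells["nearest_stream_elev_m"]
    wells["potentially_losing"] = wells["head_diff_m"] < 0
    wells["losing_by_over_2m"] = wells["head_diff_m"] < -2.0

    print(wells)
    print("share of wells below the nearest stream:", wells["potentially_losing"].mean())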

"Our analysis shows that two out of three rivers in the U.S. are already losing water. It's very likely that this effect will worsen in the coming decades and some rivers may even disappear" said co-lead author Hansjörg Seybold at ETH Zurich.

"The phenomenon, set in motion decades ago, is now widespread across the U.S. There are far more streams draining into underlying aquifers than we had first assumed," Seybold continued. "Since rivers and streams are a vital water supply for agriculture and cities, the gravity of the situation came as a surprise."

A map of well water levels with respect to the surface of the nearest river.
Photo Credit: JASECHKO ET AL.

Rivers were particularly prone to losing water in arid regions, along flat topography and in areas with extensive groundwater pumping, they observed. A prime example of this would be flat agricultural land in semi-arid regions like California's Central Valley. "We are literally sucking the rivers dry," Seybold said.

Losing rivers can impact other water users, downstream communities and ecosystems that rely on surface flows. "Historically, we've often treated these two resources as separate resources," Perrone said. "Our work highlights the importance of considering groundwater and surface water as a single resource where they are connected."

The researchers also found that losing rivers have been widespread in the U.S. for quite some time, present in many places at least as far back as the 1940s and '50s. And while many waterways naturally lose water, the issue can be exacerbated by human activity.

Humans have extracted water from the ground for thousands of years; in America they've been doing so for hundreds of years. The practice accelerated after World War II and has been rampant since the 1970s, accompanied by the undesirable and unintended consequences it entails.

"This isn't a new phenomenon," Jasechko said. "It's been with us for decades."

Water levels do fluctuate over years and decades, and unfortunately the researchers have only one data point for many of the wells in their sample. Other work by the team suggests that groundwater typically fluctuates by no more than a few meters over the course of a year. However, for many of the wells near losing rivers, the water level was more than two meters below the surface of the nearest stream, increasing the researchers' confidence that leaky rivers are likely widespread.


This section of the Santa Ynez river leaks water into the surrounding aquifer.

Photo Credit: DEBRA PERRONE

"We can only observe well water levels where wells exist," Jasechko acknowledged. "It's an obvious but important point. Our analysis is inherently biased to places where wells have been drilled, and therefore also to places where groundwater is pumped."

While the researchers don't see any straightforward way around this in the short-term, they hope their results can inform resource management and monitoring, perhaps informing policies that fund more monitoring wells in under-surveyed areas.

"Big studies like this get people thinking about broader water policy," Perrone said. "And for me, that is why continental scale analyses are important."

"My hope is that this study gets more people thinking about the interconnection of groundwater and surface water where these two resources are connected, and it also gets groundwater policy on the map," she continued. For so long this resource has been literally and metaphorically out of sight.

Perrone and Jasechko plan to expand this type of large-scale analysis to other parts of the globe and see how pumping and losing rivers impact groundwater-dependent ecosystems. Perrone also intends to connect their results back to her groundwater dashboard.

"Losing rivers aren't some hypothetical scenario," Jasechko stated. "They're here and now." They are in part the result of the past century of water use and misuse.

"If we have a better understanding of how widespread this phenomenon is, then we can influence future policy in positive ways," added Perrone. Because society is past the point where it can talk about prevention; we're now talking about response.

Credit: 
University of California - Santa Barbara