Tech

Initial Upper Paleolithic technology reached North China by ~41,000 years ago

image: SDG2 T3 stratigraphy.
On the left, the schematic column summarizes the general sediment characteristics and the distribution of archaeological remains across the site's stratigraphic sequence. On the right, photographs and schematic drawings show close-up views of different parts of the sequence. The distribution of the three main types of archaeological remains recovered (lithics, faunal remains and ostrich eggshell fragments) in the photographed sections is overlain on the schematic drawings. The partitioning of the seven cultural layers (CL1a-7) is denoted beside the schematic drawings.

Image: 
Peng et al, 2020 (PLOS ONE) CC BY

A wave of new technology in the Late Paleolithic had reached North China by around 41,000 years ago, according to a study published May 27, 2020 in the open-access journal PLOS ONE by Fei Peng of the Minzu University of China, Beijing and colleagues.

Around 40,000 years ago, the Asian continent saw the spread of new forms of technology associated with what's known as the Initial Upper Paleolithic. This change brought new blade technology along with symbolic materials such as beads and pendants, and it is thought to mark the spread of humans, possibly our own species Homo sapiens, across the continent. But the exact timing and route of this dispersal have been difficult to ascertain in past studies.

Shuidonggou is an archaeological site in North China that provides the southernmost examples of Initial Upper Paleolithic technology in North Asia. In this study, Peng and colleagues provide radiocarbon dates on 18 samples of charcoal and ostrich eggshell beads from multiple stratigraphic layers of Shuidonggou Locality 2. Their results indicate that this new wave of technology had reached this region by between 43,000 and 39,000 years ago, slightly later than dates recovered from more northern sites.

These results support previous hypotheses that the spread of this Initial Upper Paleolithic technology originated in the Altai region of Russia around 47,000 years ago before spreading eastward and southward across Asia. While more dating will be needed to further constrain the timing of this event, this study importantly shows that, even in a region with unfavorable conditions for preserving datable materials, careful selection and treatment of samples can yield reliable results from multiple corroborating sources of data.

The authors add: "We carried out a systematic radiocarbon analysis of charcoal and ostrich eggshell samples obtained from the 2014-2016 excavations throughout the whole sequence of Shuidonggou Locality 2. Based on Bayesian age modeling, the sequence splits into two phases: an early phase at 43-35 cal kBP and a later phase at 35-28 cal kBP. The result supports the interpretation that blade technology appeared in this region by at least ~41 ka."
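The team's Bayesian age model is not reproduced here, but the basic idea of partitioning a dated sequence into an early and a late phase can be illustrated with a minimal sketch. The calibrated ages below are hypothetical placeholders, and the variance-minimizing split is a deliberately simplified stand-in for the Bayesian modeling actually used in the study.

```python
# Minimal sketch: split a sequence of calibrated radiocarbon ages (cal kBP)
# into two phases by choosing the boundary that minimizes within-phase variance.
# The ages below are hypothetical; the real study used Bayesian age modeling.
import numpy as np

ages_cal_kbp = np.sort(np.array([42.1, 41.3, 40.2, 38.7, 36.4, 34.1, 32.8, 30.5, 29.2]))[::-1]
# oldest first, to mirror stratigraphic order (deepest layers first in the list)

def best_two_phase_split(ages):
    """Return the split index giving the lowest summed within-phase variance."""
    best_idx, best_cost = None, np.inf
    for i in range(1, len(ages)):
        cost = np.var(ages[:i]) + np.var(ages[i:])
        if cost < best_cost:
            best_idx, best_cost = i, cost
    return best_idx

idx = best_two_phase_split(ages_cal_kbp)
early, late = ages_cal_kbp[:idx], ages_cal_kbp[idx:]
print(f"Early phase: {early.max():.1f}-{early.min():.1f} cal kBP")
print(f"Late phase:  {late.max():.1f}-{late.min():.1f} cal kBP")
```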

Credit: 
PLOS

A few months of vaping puts healthy people on the brink of oral disease

COLUMBUS, Ohio - Scientists who've taken the first look at bacteria in young and healthy vapers' mouths say that the potential for future disease lies just below the surface.

The collection of oral bacteria in daily e-cigarette users' mouths is teeming with potent infection-causing organisms that put vapers at substantial risk for ailments ranging from gum disease to cancer, researchers found.

Though they didn't have active disease, participants' oral bacteria composition resembled that of people with periodontitis, a gum infection that can lead to tooth loss and, left untreated, is a risk factor for heart and lung disease.

The damaging effects were seen with or without nicotine, leading the scientists to believe that the heated and pressurized liquids in e-cigarette cartridges are likely the key culprit in transforming vapers' mouths into a welcoming home for a dangerous combination of microbes.

"Vaping is such a big assault on the oral environment, and the change happens dramatically and over a short period of time," said Purnima Kumar, professor of periodontology at The Ohio State University and senior author of the study.

Even longtime current and former cigarette smokers in the study, whose tobacco habit would have given disease-causing microbes easier access to the mouth, had the more damaging oral profiles linked to vaping after only three to 12 months of e-cigarette use. Kumar said this finding calls into question claims that vaping reduces the harm caused by smoking.

"If you stop smoking and start vaping instead, you don't move back toward a healthy bacterial profile but shift up to the vaping profile," she said. "Knowing the vaping profile is pathogen-rich, you're not doing yourself any favors by using vaping to quit smoking."

The research is published today (May 27) in the journal Science Advances.

The researchers collected plaque samples from under the gums of 123 people who showed no current signs of oral disease: 25 smokers, 25 nonsmokers, 20 e-cigarette users, 25 former smokers using e-cigarettes and 28 people maintaining both cigarette smoking and vaping habits at the same time.

The bacteria below the gums are the last line of defense against disease because they are the least likely to be disrupted by environmental changes in the mouth, such as food, toothpaste and tobacco.

Kumar and colleagues conducted DNA deep sequencing of the bacteria genomes to identify not just the types of microbes living in those mouths, but also what their functions were.

The profile of the oral microbiome in the vapers who had never smoked, who were young (age 21-35) and healthy and had used e-cigarettes for four to 12 months, was startling to the researchers.

The most concerning characteristics were the levels of stress in the microbial community, which were detected by the activation of genes that contribute to the creation of a mucus-like slime layer surrounding bacterial communities. The immune system is used to seeing bacteria assembled into clearly defined communities, but Kumar said that in e-cigarette users, these communities cloaked in slime look like foreign invaders and trigger a destructive inflammatory response.

She said this change in the microbial landscape - accompanied by higher levels of proteins in vapers' mouths that signaled the immune system was on standby to activate and produce inflammation - exponentially increases the likelihood for disease.

"The reason we're all healthy is because our immune system has recognized these bacteria and their functions since birth and has established a sense of harmony," Kumar said. "The problem is when you throw a curve ball with an environmental shift like this, your immune system doesn't recognize the bacteria as friends anymore. You have to call the police on them, and that causes a huge inflammatory response."

After that finding, Kumar and colleagues hoped to find that people who had replaced or supplemented cigarette smoking with vaping might be better off using e-cigarettes. Instead, they found that people who had traded cigarettes for a vape pen had a more vape-driven microbial profile.

"And if you smoked and vaped at the same time, which of these two effects overwhelms your system? It was vape," Kumar said.

A longer duration of the vapers' habit, with or without the use of nicotine or flavoring agents, made the oral conditions more severe.

Knowing the bacteria samples represented a snapshot of a person's oral environment, the scientists used a "fake mouth" to validate what they had found in the human participants. They created conditions simulating normal oral bacteria in artificial saliva and introduced a vape cloud or clean air to the fake mouth.

The presence of e-cigarette aerosol set in motion development of the more harmful bacterial profile seen in human vapers' mouths. And the nicotine-free aerosol consisting of glycerol and glycol, the viscous fluids that generate the cloud when vapers exhale, functioned as a nutrition source to fuel the altered oral environment.

"To mimic a smoking effect, the glycerol-glycol combination holds the nicotine to your throat to give you that sensation of a nicotine hit, and it produces a giant vapor cloud. It's a very essential component of vaping," Kumar said. "I'm not saying nicotine is good for you. But even without the nicotine, vaping has a pretty large impact on the bacterial communities our bodies have regarded as friends."

Credit: 
Ohio State University

Cyclones can damage even distant reefs

image: The same area of Scott Reef photographed in 2010, and again in 2012 after Cyclone Lua.

Image: 
James Gilmour/AIMS

Big and strong cyclones can harm coral reefs as far as 1000 kilometres away from their paths, new research shows.

A study led by Dr Marji Puotinen from the Australian Institute of Marine Science (AIMS) sounds a warning about the way strong cyclone winds build extreme seas that affect coral reefs in Australia and around the world.

Conventional modelling used to predict how a cyclone, hurricane or typhoon might impact corals assumes that wave damage occurs primarily within 100 kilometres of its track.

To test this, Dr Puotinen and colleagues looked at Scott Reef, a well-studied atoll reef structure off the northwest of Western Australia, and how it fared as a result of Cyclone Lua - a slow-moving weather event that developed off the coast in 2012.

Although the area of the cyclone producing the most intense winds came no closer than 500 kilometres to the reef, the high seas it whipped up battered it with waves four to 20 metres high for three and a half days.

The researchers found that at its most exposed sections, Scott Reef lost 50 per cent of its massive and robust Porites corals and virtually all its more fragile branching Acropora coral species. Similar damage was found on another reef, a further 300 kilometres distant, and models predicted damaging waves could be felt up to 1000 kilometres away.

"This example demonstrates that if we assume damage from all cyclones occurs within a 100 kilometre radius of a cyclone's track, we will underestimate the spatial extent for big, strong cyclones by up to 10 times," Dr Puotinen said.

"This could lead to making unfortunate choices when trying to prioritise conservation targets."

She added that estimates of wave damage from cyclones involve highly complex calculations because cyclones change constantly, varying in strength, size and speed over time. The largest waves are generated by storms that move slowly and have their strongest winds spread over the largest area.

To test the consequences of using the standard distance-based model, she and colleagues - from the AIMS node in Perth, the University of Western Australia and the Indian Ocean Marine Research Centre - collected existing information on cyclone size and frequency, crunching data gathered between 1985 and 2015 for 150 coral reef ecoregions around the world.

Position, strength and size were recorded for each cyclone every six hours, allowing variations to be plotted in detail.

They found that more than 70 per cent of the ecoregions had experienced at least one impact by a cyclone at peak strength and size during the 30-year period. Some, however, experienced them roughly every five years, and others roughly every 10.
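As a rough illustration of the bookkeeping involved, the sketch below flags, from hypothetical six-hourly track records, which reef ecoregions experienced at least one sufficiently strong cyclone within a chosen distance. The record format, wind threshold, impact radius and coordinates are assumptions for illustration only; the study's actual wave modelling is far more detailed.

```python
# Illustrative sketch: from 6-hourly cyclone track records, flag reef ecoregions
# that experienced at least one sufficiently strong cyclone within a set distance.
# Record format, thresholds and coordinates are hypothetical.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# (lat, lon, max_wind_kmh) recorded every six hours -- hypothetical values
track = [(-12.0, 118.0, 165), (-12.5, 117.0, 185), (-13.0, 116.0, 205), (-13.5, 115.0, 195)]

# approximate reef locations, for illustration only
ecoregions = {"Scott Reef": (-14.05, 121.77), "Rowley Shoals": (-17.3, 119.3)}

WIND_THRESHOLD_KMH = 178   # assumed threshold for a "big, strong" cyclone
IMPACT_RADIUS_KM = 1000    # damaging waves can reach this far for big, slow storms

impacted = {
    name
    for name, (rlat, rlon) in ecoregions.items()
    for (clat, clon, wind) in track
    if wind >= WIND_THRESHOLD_KMH and haversine_km(rlat, rlon, clat, clon) <= IMPACT_RADIUS_KM
}
print("Ecoregions with at least one qualifying impact:", impacted)
```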

"Coral reefs have been living with cyclones for millions of years," said Dr Puotinen. "But recovery after a big battering is a slow process, which can take a decade or more. This means that many coral reefs around the world will not have time to fully regrow before the next cyclone hits."

Climate change models present a complex picture for cyclones. The total number occurring in any given period may well not increase - but that's not necessarily good news for vulnerable reefs.

"Changes in the atmosphere mean it will be harder for cyclones to form in the first place, but warmer ocean water, which fuels their intensity, means it will be easier for them to strengthen once they do," Dr Puotinen explained.

She added that her team's findings carry lessons for reef management and conservation strategies.

"When deciding where on the Great Barrier Reef, for instance, to invest millions of dollars to repair or enhance reefs, you don't want to select a location likely to be regularly battered by cyclone waves," she said.

"Our research should make it easier for reef managers to choose between candidate reefs."

Dr James Gilmour, also from AIMS, a co-author on the paper, said the findings illustrated the complexity and severity of the threats facing reefs around the world.

"Coral reef communities around the world are under increasing threat from a range of stressors, and we must understand which parts of the reef should be the focus of conservation efforts," he said.

"In particular, it is the combination of cyclones with exposure to rising water temperatures that is the most significant emerging threat to reefs globally."

Unravelling the specific effects of cyclones, the researchers conclude, will provide vital clues for the management of at-risk areas.

Credit: 
Australian Institute of Marine Science

Treatment shows promise against deadly brain cancer

Hamilton, ON (May 27, 2020) - Researchers at McMaster University and the University of Toronto have developed a promising immunotherapy treatment for a deadly form of adult brain cancer called glioblastoma.

The treatment is a type in which a patient's T cells, which are a kind of immune cell in the blood, are changed in the laboratory so that they will bind to cancer cells and kill them. In this case, the treatment called chimeric antigen receptor T cell (CAR-T) therapy involves genetically engineering a patient's T cells to give the cells the ability to target and bind to a specific protein called CD133 in glioblastoma cells directly and eliminate them.

When used in mice with human glioblastoma, CD133-targeting CAR-T therapy was considered a success due to reduced tumour burden and improved survival.

The data from this study have led to the formation of a new Hamilton-based brain cancer immunotherapy start-up, Empirica Therapeutics. The company aims to run clinical trials of its lead program, CD133-specific CAR-T therapy, and other therapies in recurrent glioblastoma patients by 2022.

The study details are published in Cell Stem Cell.

Glioblastoma cancers have a dire prognosis, said the study's first author Parvez Vora, a former member of the laboratory team of professor Sheila Singh at McMaster and director of preclinical development at Empirica Therapeutics.

"Upon initial diagnosis, glioblastoma patients undergo aggressive treatment, including surgery to remove the tumour, radiation therapy and chemotherapy. However, cancer relapses in less than seven months, resulting in less than 15 months overall median survival," he said.

"Almost all the glioblastoma tumours come back as a more aggressive recurrent tumour, which has no standard-of-care treatment."

The research was led by the Singh lab at McMaster in collaboration with the Jason Moffat lab at the University of Toronto's Donnelly Centre for Cellular and Biomolecular Research.

The Singh lab has been studying the role of CD133 protein in brain tumours for more than a decade. The lab identified that the protein is a marker of cancer stem cells that have the properties necessary to grow glioblastoma tumours that are difficult to treat.

In this study, researchers investigated whether specific targeting of CD133+ glioblastoma with cutting-edge immunotherapy drugs could eradicate the most aggressive subpopulation of cells in the tumour. They also looked at the safety of CD133-targeting therapies on normal, non-cancerous human stem cells, including hematopoietic stem cells, which create blood cells, and progenitor cells, which can form one or more kinds of cells.

Researchers subsequently designed three types of treatments and tested them both in the lab and in mice. The first treatment is the novel human synthetic IgG antibody, which can simply bind to CD133 protein on glioblastoma cells and halt the growth of the tumour. The second is a dual antigen T cell engager antibody, which uses the patient's own immune T cells to eliminate the CD133+ glioblastoma. The third is the CAR-T therapy.

"We found that the CAR-T therapy had enhanced activity compared to the other two therapeutics in preclinical models of human glioblastoma," said Vora.

"The accompanying safety studies in humanized mouse models address the potential impact on hematopoiesis, a vital process in the human body that leads to the formation of different blood cells. CD133-specific CAR-T therapy did not induce any acute systemic toxicity in humanized mouse models that harbored the human hematopoietic system."

Rashida Williams, a graduate student in Moffat's lab at the Donnelly Centre, generated the CD133 antibody, parts of which were used to construct different immuno-modalities including the CAR-T cell.

"Recent advances in immunotherapy have offered hope to patients with previously untreatable cancers," said Jason Moffat, professor of molecular genetics at U of T and the Canada Research Chair in Functional Genomics of Cancer. He is the chief scientific officer at Empirica Therapeutics.

"We hope that our approach of specifically targeting glioblastoma cells with CAR-T therapy will give the patients a better quality of life and increase their chances of survival."

Kristin Hope, associate professor of biochemistry and biomedical sciences at McMaster, is credited for her work generating the humanized models for toxicity testing.

Researchers next plan to explore combinatorial strategies alongside CD133-specific CAR-Ts to completely block glioblastoma tumour recurrence. They suggest this type of therapy may prove effective in patients with other treatment-resistant cancers that harbour CD133 tumour-initiating cell populations.

"Our study has provided many novel conceptual insights into the value of targeting an aggressive CD133+ cancer stem cell population in glioblastoma," said corresponding author Sheila Singh, professor in the Department of Surgery at McMaster and the Canada Research Chair in Human Cancer Stem Cell Biology. She is chief executive officer of Empirica Therapeutics.

"We hope that our work will now advance the development of really new and promising treatment options for these patients."

Credit: 
McMaster University

Tuning the surface gives variations to metal foils

image: Formation mechanism of large area single crystalline Cu foils with different surface structures via the annealing of pre-oxidized Cu foils.

Image: 
IBS

Just as cloning in biology allows for the creation of one or more replicas of the exact same genes, seeded growth in chemistry can produce a very large metal foil with the exact same surface texture as that of a seeded one. Seeded growth is very popular in synthesizing three-dimensional (3D) single crystals: 3D crystals are always grown into the same shapes, just as salts are invariably cubic single crystals.

Meanwhile, very thin foils/films can grow into different types depending on surface structures. As such, applications can vary. Great efforts have been dedicated to the synthesis of single crystalline metal foils as they have many important applications, such as (i) a substrate to support the synthesis of various two-dimensional (2D) materials, (ii) engineering the properties of the material deposited on it, (iii) allowing for selective catalysis, and (iv) fabricating metal wires with optimized electrical and thermal conductivities. Despite such possibilities, seeded growth has rarely been applied to grow thin films due to a lack of knowledge on how to control the growth process.

Prof. Feng Ding's group from the Center for Multidimensional Carbon Materials, within the Institute for Basic Science (IBS, South Korea), in collaboration with Prof. Kaihui Liu's group and Prof. Enge Wang's group from Peking University, as well as Prof. Dapeng Yu's group from Southern University of Science and Technology, reported how to give variations to single crystalline metal foils. Via the oxidation-led annealing plus seeded growth strategy, the research team obtained more than 30 types of copper foils the size of A4 paper (~30×21 cm2), roughly the same size as US letter paper.

The research team has been exploring copper foils, one of the most popular substrates to support the growth of graphene and other 2D materials. Though they obtained single crystal copper (Cu) foils in their previous study (Science Bulletin, 2017, 62, 1074-1080), they were mostly Cu (111), whose surface is ultra-flat and thus less active than those with step edges and kinks. Through theoretical calculations, the research team concluded that Cu (111) tends to be formed more easily than other types, as the Cu (111) surface has the lowest surface energy and thus is the most favorable structure in nature. This reasoning led them to tune the surface energy of Cu foils in order to obtain single-crystal metal foils with desired surface types.

The research team cut out the "gene" of a small single crystalline foil and "pasted" the seed (gene) to create very large Cu foils with the exact same surface texture as that of the inherited one. To obtain single crystalline metal seeds with various surface structures, polycrystalline Cu foils were first oxidized and then annealed at a high temperature (1020 °C), which is close to the melting point of Cu, for several hours. When the Cu was oxidized, both its upper and lower surfaces were covered by a layer of copper oxide (CuxO). As the pure Cu surface disappears due to the oxidation, the two surfaces of a Cu foil were transformed into two Cu-CuxO interfaces after pre-oxidation. This alteration switched the driving force of annealing from surface energy to interface energy. "We have proved that, unlike the surface energies, the differences between the interface energies of different Cu foils are negligible, so the polycrystalline Cu foils can be annealed into many different types of single crystals randomly," explains Professor Feng Ding, the corresponding author of the study.

A small piece of foil was then cut from a large single-crystal foil with a desired surface structure as a seed for mass production. The research team found that the annealing of a large polycrystalline Cu foil with such a seed will lead to a large single crystal Cu foil with the exact same surface structure (Figure 2, stage 2).

Great theoretical and experimental efforts were devoted to understanding how these single crystalline Cu foils were formed during the annealing. The process can be understood in two stages. First, the surface structure of the seed was copied to the lower part of the large polycrystalline Cu foil, forming an abnormal grain (a grain which is much larger than the others and is therefore favored to keep growing) with a specific surface structure. Second, the growth of the abnormal grain finally results in a very large single-crystal Cu foil with the designated surface structure.

From hundreds of annealing experiments, the research team obtained a library of single crystalline Cu foils with more than 30 types of different surface structures, as shown in Figure 3. The dimensions of the obtained single crystalline Cu foils reached 39 × 21 cm2, which was limited by the size of the annealing furnace.

Besides the Cu foils, the researchers proved that this seeded growth strategy can be applied to fabricate large-area single-crystal foils of other metals, suggesting that various types of single-crystal foils of most metals could be available in the near future. "This achievement demonstrates a practical method for scalable synthesis of extremely large transition metal single crystal foils with different surface types, which was long desired for both fundamental science and engineering applications. Our achievement opens many possibilities, such as to use single crystal metals as conducting channels in micro-devices; use these single crystal metal foils as templates for controllable synthesis of various two-dimensional materials; grow large area molecular patterns with selected metal foils; and selectively catalyze chemical reactions on a foil surface with a specific structure," notes Professor Kaihui Liu.

The research team will next aim to understand the mechanism of this oxidation-led seeding and seeded growth at the atomic level. Experimental efforts to synthesize various types of single crystal metal foils of different metals or metal alloys will continue, as well as exploring broad applications of these foils.

Credit: 
Institute for Basic Science

Extraction of skin interstitial fluid using microneedle patches

image: Schematic representation of the extraction process using the microneedle patch. A) The microneedle patch is applied to the skin. B) Cross section of the outer layer of skin with the interstitial fluid and metabolites (circles) below it. The patch is pressed against the skin, the needles penetrate into the interstitial layer and the fluid is absorbed into the needles. C) The patch is removed and placed into a tube with deionized water. The metabolites diffuse into the water and the tube is centrifuged to separate the needles.

Image: 
Khademhosseini lab

(LOS ANGELES) - The interstitial fluid is a major component of the liquid environment in the body and fills the spaces between the body's cells. In contrast, blood circulates only within the circulatory vessels of the body and is composed of blood cells and the liquid part of the blood, plasma. Both fluids contain special components called biomarkers, which are valuable indicators of bodily health. These biomarkers include various types of molecules such as proteins, hormones or DNA, and can also include drugs and metabolites.

When monitoring patient health, the standard source for the measurement of biomarkers is blood. Samples are drawn by venous puncture, most often from the forearm or from the veins in the hand. Occasionally there are problems in drawing blood when the veins are prone to collapse, or when they are very small or difficult to locate. Still other problems may occur when the veins "roll" or move from side to side. And as in any procedure that involves a wound to the skin, there is always a risk of introducing infection. The problems are compounded when patients are required to submit multiple samples over time.

In order to circumvent these problems, and to make improvements in patient health monitoring, scientists have turned to alternative sources for obtaining samples for biomarker testing. The interstitial fluid is an ideal choice for this purpose. It offers an advantage over blood in being a reservoir for certain site-specific drugs and drugs in a more active state. And it is a rich source of biomarkers, metabolites and therapeutic drugs, found in abundance just below the outermost layer of the skin. For these reasons, researchers have devised ways to access this source.

One method that has recently been focused on is the use of microneedle patches. Such patches are fabricated from liquid-absorbing materials that are molded into a patch, with an array of tiny microneedles that are approximately 600 micrometers in length, about the length of a grain of salt. The patch is then applied directly on the skin for a specified period of time, interstitial fluid is drawn into the patch, and the patch is then removed and processed to collect the fluid.

A research team led by Ali Khademhosseini, Ph.D., the Director and CEO of the Terasaki Institute, who was previously Director of the University of California, Los Angeles (UCLA) Center for Minimally Invasive Therapeutics, has devised such a patch and optimized conditions for its performance. This microneedle patch utilizes a gel made of a substance called gelatin methacryloyl (GelMA), a hydrogel with highly absorptive capabilities and demonstrable strength. This substance was chosen for these qualities over other materials in previous use, as well as for its biocompatibility and the ability to adapt its composition to optimize performance.

The gel was molded into a patch with an array of solid-gel microneedles on one face. The Terasaki team has performed extensive testing to determine the optimum gel concentration, degree of gel cross-linking and cross-linking time needed to produce a patch that provides the best absorptive properties, needle strength and skin penetration. The effectiveness of the patch gel's fluidic capabilities also eliminates the need for fabrication of hollow needles, which simplifies production.

The team did comparison studies of drug and glucose levels measured from samples extracted with the GelMA patch versus blood collected by conventional means, and the results were highly comparable. There was also an improvement in the volume of fluid collected with the GelMA patch over other microneedle patches.

"Collecting samples from patients in a non-invasive manner is important, particularly in the COVID era," says Dr. Khademhosseini. "We are excited about the microneedles developed here, as they open up rapid ways to collect patient samples in a simple and painless manner."

The GelMA patch developed by the Terasaki Institute delivers an improvement in design, cost effectiveness, ease of production, and convenience; its unique qualities were featured as the cover story of a recent issue of Small.

Credit: 
Terasaki Institute for Biomedical Innovation

Scientists devise a way to determine the viability of predicted 2D materials

An international team of researchers from Russia, Sweden and South Korea has proposed a new way to test the structural stability of predicted 2D materials. The testing revealed a number of materials erroneously proposed earlier. The scholars believe that the use of the new method will further help to avoid mistakes in the development of two-dimensional nanomaterials that are in high demand in the modern world. The results were published in the international journal Physical Chemistry Chemical Physics.

The existence of two-dimensional structures -- the thinnest possible films, consisting of a single layer of a crystal lattice of atoms -- has been widely discussed since the mid 20th century. Scientists debated the possibility heatedly for several decades until it was supported by theoretical arguments and later confirmed experimentally by the synthesis of graphene -- crystalline carbon with a thickness of one atom. Since then, attention to two-dimensional materials with unexpected properties -- high strength (hundreds of times stronger than metal), lightness, thermal conductivity -- has grown significantly, and today the number of experimentally obtained 2D materials runs into the dozens.

It is worth noting that most of the early materials were discovered mainly by trial and error. With the advent of sufficient computing power and theoretical methods of prediction, however, scholars now are discovering materials even before their synthesis. Modern high-performance algorithms and methods can be used for mass screening of new 2D materials among already known compounds. Moreover, with their help, previously unknown materials with designed properties can be created. Nevertheless, the stability of such predicted materials must be calculated to determine whether they are realistic candidates for synthesis and practical use.

'We discovered that the existing and widely used methods for checking the stability of theoretically predicted 2D materials have a serious drawback: they can bypass the generally accepted criteria and, in some cases, lead to a false prediction that a 2D material is stable. To put it simply, such materials should not exist, there is practically no chance of obtaining them experimentally, and the discovery of such materials is just an error of the method used,' said Artyom Kuklin, a research engineer at the Laboratory for Fundamental Scientific Research, Department of Science and Innovation, SibFU.

The scholar explained that the main disadvantage of the commonly applied method is the way the material is modelled.

'When modelling, researchers use a notional material of some kind, which is an infinitely repeating pattern consisting of so-called unit cells -- minimal fragments of the structure. It looks like the cells recurring on a squared notebook sheet: information about one cell gives information about the entire sheet. The model assumes that all these cells are rigidly interconnected and cannot be bent along their connections. In other words, we knowingly get a perfectly flat infinite sheet, which, of course, matches reality poorly,' explained Artyom Kuklin.

As an additional criterion for the stability of two-dimensional nanomaterials, the authors propose to set aside the infinite model of a material and instead consider a finite-sized portion of it, since such a fragment imposes no strict restrictions on the connections between separate parts of the structure. If, under these conditions, the material remains the same as in the periodic model, then there are no internal stresses in it. If the material distorts significantly (for example, folds up), the internal stress in the structure becomes a marker of instability, and the material is unlikely to be realizable in practice.

'Using the proposed method, our team demonstrated the structural stability of the recently synthesized 2D material palladium diselenide (PdSe2) and the instability of several previously proposed two-dimensional materials with a similar structure. We consider this approach effective enough for the theoretical study of materials that are promising for future technology. As another criterion for assessing the absence of internal stresses in a 2D material, we have also proposed to compare its stability with that of nanotubes made of the same material. In this case, a two-dimensional material should be more stable than the nanotubes.

We hope that the scientific community will turn their attention to the mentioned problem and improve the existing algorithms to avoid similar errors in the future,' summed up the researcher.

Credit: 
Siberian Federal University

Van der Waals junction spin valves without spacer layer

image: Two-state and three-state spin valves. Schematic illustration of (a) two-Fe3GeTe2-nanoflakes and (c) three-Fe3GeTe2-nanoflakes vdW homo-junction, with top h-BN passivation. (b), (d) The resistance of the junction (RJunction) as a function of the perpendicular magnetic field (B) at 10 K.

Image: 
©Science China Press

The fundamental principle of a spin valve is that its resistance depends on the parallel or antiparallel configuration of the two ferromagnetic electrodes, giving rise to the magnetoresistance (MR) effect; the basic structure consists of two ferromagnetic metals decoupled by the insertion of a non-magnetic spacer. The MR effect in such a sandwiched structure is the cornerstone of magnetic-sensing, data-storage, and processing technologies, best represented by the development of the giant magnetoresistance (GMR) and tunneling magnetoresistance (TMR) information industry over the past two decades. The GMR and TMR effects arise from electron transport dominated either by spin-dependent scattering or by spin-dependent tunneling probability, respectively. To produce an appreciable MR effect, the spin moment of the electrons must be maintained across the spacer layer and the interfaces, which is the key issue for spintronics. Thus, tremendous efforts have been devoted to optimizing the spacer layer and to pursuing high-quality electronic interfaces between the ferromagnetic layers and the spacer layer.
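For readers unfamiliar with the figure of merit, the magnetoresistance ratio conventionally quoted for spin valves compares the junction resistance in the antiparallel and parallel magnetization states. This is the standard textbook definition, not a formula taken from the paper itself:

```latex
% Standard spin-valve magnetoresistance ratio; R_AP and R_P are the junction
% resistances for antiparallel and parallel alignment of the two ferromagnetic electrodes.
\mathrm{MR} = \frac{R_{\mathrm{AP}} - R_{\mathrm{P}}}{R_{\mathrm{P}}} \times 100\%
```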

In this context, two-dimensional (2D) van der Waals (vdW) layered materials--especially emerging 2D magnetic materials--have provided researchers with another versatile way to tackle such obstacles in traditional magnetic multilayer systems. In particular, homo- or hetero-junctions incorporating these vdW materials without direct chemical bonding, avoiding the associated intermixing effect and defect-induced gap states, may show performance exceeding that of covalently bonded magnetic multilayers.

A research group led by Prof. Kaiyou Wang from State Key Laboratory for Superlattices and Microstructures, Institute of Semiconductors, Chinese Academy of Sciences, collaborating with Prof. Kai Chang and Prof. Zhongming Wei, has recently reported the fabrication of spin valves without spacer layers using vdW homo-junctions in which exfoliated Fe3GeTe2 nanoflakes act as ferromagnetic electrodes and/or interlayers. They demonstrated the textbook behavior of two-state and three-state MR for devices with two and three Fe3GeTe2 nanoflakes having different coercive fields, respectively. Interestingly, the all-metallic spin valves exhibit small resistance-area products (~10^-4 Ω·cm2) and low operating current densities (down to 5 nA), and they possess vertical two-terminal setups, all of which are properties of major interest for future spintronics applications. This work demonstrates that two ferromagnetic layers without a spacer layer are sufficient to obtain the classical spin-valve effect, and it demonstrates the superiority of vdW interfaces.

Credit: 
Science China Press

Bullying is common factor in LGBTQ youth suicides, YSPH study finds

Researchers at the Yale School of Public Health have found that death records of LGBTQ youth who died by suicide were substantially more likely to mention bullying as a factor than their non-LGBTQ peers. The researchers reviewed nearly 10,000 death records of youth ages 10 to 19 who died by suicide in the United States from 2003 to 2017.

The findings are published in the current issue of JAMA Pediatrics.

While LGBTQ youth are more likely to be bullied and to report suicidal thoughts and behaviors than non-LGBTQ youth, this is believed to be the first study showing that bullying is a more common precursor to suicide among LGBTQ youth than among their peers.

"We expected that bullying might be a more common factor, but we were surprised by the size of the disparity," said lead author Kirsty Clark, a postdoctoral fellow at Yale School of Public Health. "These findings strongly suggest that additional steps need to be taken to protect LGBTQ youth -- and others -- against the insidious threat of bullying."

Death records from LGBTQ youths were about five times more likely to mention bullying than non-LGBTQ youths' death records, the study found. Among 10- to 13-year-olds, over two-thirds of LGBTQ youths' death records mentioned that they had been bullied.

Bullying is a major public health problem among youth, and it is especially pronounced among LGBTQ youth, said the researchers. Clark and her co-authors used data from the National Violent Death Reporting System, a Centers for Disease Control and Prevention (CDC)-led database that collects information on violent deaths, including suicides, from death certificates, law enforcement reports, and medical examiner and coroner records.

Death records in the database include narrative summaries from law enforcement reports and medical examiner and coroner records regarding the details of the youth's suicide as reported by family or friends, the youth's diary, social media posts, and text or email messages, as well as any suicide note. Clark and her team searched these narratives for words and phrases that suggested whether the individual was LGBTQ. They followed a similar process to identify death records mentioning bullying.
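The paper's exact coding scheme is not reproduced here, but the general approach of flagging narratives by keyword can be sketched as below. The keyword lists and the sample narrative are hypothetical placeholders, not the search terms used in the study.

```python
# Illustrative sketch: flag death-record narratives that mention bullying and/or
# contain language suggesting the decedent was LGBTQ. The keyword lists below are
# hypothetical placeholders, not the actual search terms used in the study.
import re

BULLYING_TERMS = [r"\bbullied\b", r"\bbullying\b", r"\bteased\b", r"\bharassed\b"]
LGBTQ_TERMS = [r"\bgay\b", r"\blesbian\b", r"\bbisexual\b", r"\btransgender\b", r"\bsame[- ]sex\b"]

def mentions_any(text, patterns):
    """True if any pattern appears in the narrative (case-insensitive)."""
    return any(re.search(p, text, flags=re.IGNORECASE) for p in patterns)

narrative = ("Friends reported the decedent, who had recently come out as gay, "
             "had been bullied at school for months.")

flags = {
    "mentions_bullying": mentions_any(narrative, BULLYING_TERMS),
    "suggests_lgbtq": mentions_any(narrative, LGBTQ_TERMS),
}
print(flags)  # {'mentions_bullying': True, 'suggests_lgbtq': True}
```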

"Bullies attack the core foundation of adolescent well-being," said John Pachankis, the Susan Dwight Bliss Associate Professor of Public Health at the Yale School of Public Health and study co-author. "By showing that bullying is also associated with life itself for LGBTQ youth, this study urgently calls for interventions that foster safety, belonging and esteem for all young people."

Credit: 
Yale School of Public Health

NASA-NOAA satellite sees Tropical Storm Bertha organizing

image: NASA-NOAA's Suomi NPP satellite passed over the western North Atlantic Ocean as Tropical Storm Bertha was organizing off the coast of Georgia and South Carolina on May 26, 2020. Bertha became a tropical storm early on May 27 off the coast of South Carolina.

Image: 
NASA Worldview, Earth Observing System Data and Information System (EOSDIS)

The second tropical storm of the North Atlantic Ocean hurricane season has formed off the coast of South Carolina. NASA-NOAA's Suomi NPP satellite provided forecasters with a visible image of Tropical Storm Bertha as it was organizing.

On May 27, NOAA's National Hurricane Center (NHC) issued a Tropical Storm Warning in effect from Edisto Beach, SC to South Santee River, SC.

The Visible Infrared Imaging Radiometer Suite (VIIRS) instrument aboard Suomi NPP provided a visible image of developing Tropical Storm Bertha late on May 26. The imagery showed strong thunderstorms were circling the center of circulation.

Satellite imagery on May 27 at 8:30 a.m. EDT showed the area of disturbed weather that NHC has been tracking over the past day or so quickly became better organized. The circulation had become better defined and the center had reformed beneath the area of deep convection. Those strongest storms were located just off the South Carolina coast.

At 8:30 a.m. EDT (1230 UTC), the center of Tropical Storm Bertha was located near latitude 32.7 degrees north and longitude 79.4 degrees west. Bertha's center of circulation was just 30 miles (50 km) east-southeast of Charleston, South Carolina.

Bertha is moving toward the northwest near 9 mph (15 kph) and this motion is expected to continue through tonight. Maximum sustained winds are near 45 mph (75 kph) with higher gusts. Bertha is expected to weaken to a tropical depression after moving inland and become a remnant low tonight. The estimated minimum central pressure is 1009 millibars.

Bertha is expected to produce total rain accumulation of 2 to 4 inches with isolated totals of 8 inches across eastern and central South Carolina into west central to far southeastern North Carolina and southwest Virginia.  This rainfall may produce life-threatening flash flooding.

NHC said, "The system will be moving inland very shortly and little, if any, additional strengthening is expected.  Once inland, the small tropical cyclone should weaken rapidly and dissipate over central North Carolina on Thursday [May 28]."

Tropical cyclones/hurricanes are the most powerful weather events on Earth. NASA's expertise in space and scientific exploration contributes to essential services provided to the American people by other federal agencies, such as hurricane weather forecasting.

Credit: 
NASA/Goddard Space Flight Center

No laughing matter

A new study involving a scientific analysis of the prevalence of "LOL" in students' text messages demonstrates important potential applications for classroom learning. The study, "Linguistics in General Education: Expanding Linguistics Course Offerings through Core Competency Alignment," will be published in the June 2020 issue of the scholarly journal Language. An advance version of the article may be found at https://www.linguisticsociety.org/sites/default/files/LSA962101_0.pdf.

The study's authors, Katie Welch and Marco Shappeck, highlight how their work with students conducting linguistics research at the University of North Texas at Dallas aligns with an emerging trend in higher education--one that is altering the ways in which universities determine what courses are part of the traditional canon sometimes referred to as the "core curriculum." Welch and Shappeck detail how their course, The Language of Now, was adopted by the Texas Higher Education Coordinating Board as a qualifying course for the state's core curriculum. The study provides a practical framework for how linguistic research can be effectively incorporated into courses that satisfy basic educational requirements for students to graduate.

After surveying the list of approved core courses at the 170 institutions in Texas during Fall 2018, Welch and Shappeck report that linguistics comprised less than 0.1% of the offerings, even in categories where linguistics clearly fit the criteria. While these numbers are quite low, they indicate that opportunities do exist for linguistics concepts and courses to become more mainstream in higher education.

The authors' work builds on the current trend for reform of the core curriculum: while students still select their courses from broad discipline-based categories, the key difference is that now every course within a category must have the same learning objectives. Regardless of which course a student may select to fulfill the requirements of a given area, they will develop the same skillset. The common learning objectives are selected with career readiness in mind, as they are aligned with skills that employers list as desirable in job candidates--competencies like critical thinking, teamwork, data analysis and interpersonal communication.

The authors demonstrate how a key assignment from their course, The Language of Now, met the common learning objectives for the "Language, Philosophy, and Culture" component area of the Texas core curriculum. This research assignment asked students to evaluate their own usage of the popular texting phrase LOL, using it as a data set that they could then analyze. After going through a series of research activities designed to help them deepen their understanding of the history and current meaning of LOL, students then made predictions about the future of this word. While the assignment content is decidedly linguistic in nature, Welch and Shappeck argue that by working through the assigned project, students also gain the highly transferrable competencies of critical thinking, inquiry, analysis, and information literacy.

Credit: 
Linguistic Society of America

ADHD: genomic analysis in samples of Neanderthals and modern humans

image: According to the study, some features like hyperactivity or impulsiveness could have been favourably selected for survival in ancestral environments dominated by a nomadic lifestyle.

Image: 
Paula Esteller (CNAG-CRG / IBE, CSIC-UPF)

The frequency of genetic variants associated with attention-deficit/hyperactivity disorder (ADHD) has decreased progressively in the evolutionary human lineage from the Palaeolithic to nowadays, according to a study published in the journal Scientific Reports.

The new genomic analysis compares several ADHD-associated genetic variants described in current European populations to assess their evolution in modern and ancient samples of our species (Homo sapiens) and in samples of Neanderthals (Homo neanderthalensis). According to the conclusions, the declining trend observed in European populations cannot be explained by genetic mixing with African populations or by the introgression of Neanderthal genomic segments into our genome.

The new genomic study is led by Professor Bru Cormand, from the Faculty of Biology and the Institute of Biomedicine of the University of Barcelona (IBUB), the Research Institute Sant Joan de Déu (IRSJD) and the Rare Diseases Networking Biomedical Research Centre (CIBERER), and the researcher Oscar Lao, from the Centro Nacional de Análisis Genómico (CNAG), part of the Centre for Genomic Regulation (CRG). The study, whose first author is the CNAG-CRG researcher Paula Esteller -- currently a doctoral student at the Institute of Evolutionary Biology (IBE, CSIC-UPF) -- includes the participation of research groups from Aarhus University (Denmark) and Upstate Medical University in New York (United States).

ADHD: an adaptive value in the evolutionary lineage of humans?

Attention-deficit/hyperactivity disorder (ADHD) is a neurodevelopmental disorder that can have a large impact on the lives of affected people. Characterized by hyperactivity, impulsiveness and attention deficit, it is very common in modern populations -- with a prevalence of 5% in children and adolescents -- and can persist into adulthood.

From an evolutionary perspective, one would expect anything detrimental to tend to disappear from the population. To explain why ADHD remains so common, several hypotheses have been put forward -- especially focused on the transition from the Palaeolithic to the Neolithic -- such as the well-known mismatch theory.

"According to this theory, cultural and technological changes that occurred over the last thousands of years would have allowed us to modify our environment in order to adopt it to our physiological needs in the short term. However, in the long term, these changes would have promoted an imbalance regarding the environment in which our hunter-gatherer ancestors evolved", note the authors.

Therefore, traits like hyperactivity and impulsiveness -- typical in people with ADHD -- could have been selectively favoured in ancestral environments dominated by a nomadic lifestyle. However, the same features would have become maladaptive in the more sedentary environments of recent times.

Why is it one of the most common disorders in children and adolescents?

The new study, based on data from 20,000 people affected by ADHD and 35,000 controls, reveals that the genetic variants and alleles associated with ADHD tend to be found in genes that are intolerant to loss-of-function mutations, which indicates the existence of selective pressure on this phenotype.

According to the authors, the high prevalence of ADHD today could be the result of a favourable selection that took place in the past. Although the phenotype is unfavourable in the new environmental context, its prevalence would still be high because not enough time has passed for it to disappear. However, due to the absence of suitable genomic data for ADHD, none of these hypotheses had been tested empirically until now.

"Therefore, the analysis we conducted guarantee the presence of selective pressures that would have been acting for many years against the ADHD-associated variants. These results are compatible with the mismatch theory but they suggest negative selective pressures to have started before the transition between the Palaeolithic and the Neolithic, about 10,000 years ago", say the authors.

Credit: 
University of Barcelona

Yale finds a (much) earlier birth date for tectonic plates

New Haven, Conn. -- Yale geophysicists reported that Earth's ever-shifting, underground network of tectonic plates was firmly in place more than 4 billion years ago -- at least a billion years earlier than scientists generally thought.

Tectonic plates are large slabs of rock embedded in the Earth's crust and upper mantle, the next layer down. The interactions of these plates shape all modern land masses and influence the major features of planetary geology -- from earthquakes and volcanoes to the emergence of continents.

"Understanding when plate tectonics started on Earth has long been a fundamentally difficult problem," said Jun Korenaga, a professor of earth and planetary sciences in Yale's Faculty of Arts and Sciences and senior author of the new study, published in Science Advances. "As we go back deeper in time, we have fewer geological records."

One promising proxy for determining if tectonic plates were operational is the growth of continents, Korenaga said. This is because the only way to build up a continent-sized chunk of land is for surrounding surface rock to keep sinking deeply over a long period -- a process called subduction that is possible only through plate tectonics.

In the new study, Korenaga and Yale graduate student Meng Guo found evidence of continental growth starting as early as 4.4 billion years ago. They devised a geochemical simulation of the early Earth based on the element argon -- an inert gas that land masses emit into the atmosphere. Argon is too heavy to escape Earth's gravity, so it remains in the atmosphere like a geochemical ledger.

"Because of the peculiar characteristics of argon, we can deduce what has happened to the solid Earth by studying this atmospheric argon," Korenaga said. "This makes it an excellent bookkeeper of ancient events."

Most of the argon in Earth's atmosphere is 40Ar -- a product of the radioactive decay of 40K (potassium), which is found in the crust and mantle of continents. The researchers said their model looked at the atmospheric argon that has gradually accumulated over the history of the planet to determine the age of continental growth.
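The full simulation accounts for crustal growth, recycling and degassing, but the radiogenic bookkeeping at its core is the standard decay relation for 40K. The sketch below uses commonly cited decay constants and a hypothetical initial potassium inventory purely for illustration; it is not the authors' model.

```python
# Sketch of radiogenic 40Ar production from 40K decay.
# Constants are commonly cited values; the initial 40K inventory is hypothetical.
import math

HALF_LIFE_40K_YR = 1.248e9      # half-life of 40K in years
LAMBDA = math.log(2) / HALF_LIFE_40K_YR
AR_BRANCH = 0.1072              # fraction of 40K decays that yield 40Ar (rest give 40Ca)

def ar40_produced(k40_initial_mol, t_years):
    """Moles of 40Ar produced after t_years from an initial 40K inventory."""
    return AR_BRANCH * k40_initial_mol * (1.0 - math.exp(-LAMBDA * t_years))

k40_initial = 1.0e18  # hypothetical initial 40K inventory (mol)
for t in (1.0e9, 4.5e9):
    print(f"t = {t:.1e} yr: 40Ar produced ~ {ar40_produced(k40_initial, t):.2e} mol")
```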

Part of the challenge in creating their simulation, the researchers said, was incorporating the effects of a geological process called "crustal recycling." This refers to the cycle by which continental crust builds up, then is eroded into sediments, and eventually carried back underground by tectonic plate movements -- until the cycle renews itself.

The simulation thus had to account for argon gas emissions that were not part of continental growth.

"The making of continental crust is not a one-way process," Korenaga said.

Credit: 
Yale University

Study uncovers gender roles in physics lab courses

ITHACA, N.Y. - A robust body of research examines and addresses gender discrepancies in many fields, but physics is not one of them, Cornell researchers have found.

Men are overrepresented not only in number but in high-ranking positions within the physics community, according to a new study published May 26 in the journal Physics Education Research. A research team led by Katherine Quinn, Ph.D. '19, and Natasha Holmes, the Ann S. Bowers Assistant Professor of Physics in the College of Arts and Sciences, examined gender roles in undergraduate physics lab classes as a step toward removing systematic gender biases in the field.

Using techniques of theoretical physics, they analyzed student behavior in two types of labs: traditional, highly structured labs; and less-structured "inquiry-based" labs. They found that inquiry-based physics labs, designed to encourage student agency by removing rigid structures, actually contained gender imbalances in lab work when compared with traditional, highly structured labs.

"Students working in inquiry-based labs assumed different roles within their groups," the researchers wrote. "However, men and women systematically took on different roles and men behaved differently when in single- versus mixed-gender groups."

Because gendered division of roles can emerge without active intervention, the researchers concluded, these results highlight the importance of structuring equitable group dynamics in educational settings.

"Instructors ... must not only remove explicitly biased aspects of curricula, but also take active steps to ensure that potentially discriminatory aspects are not inadvertently reinforced," they said.

Quinn, a theoretical physicist and postdoctoral fellow at the Princeton Center for Physics and Biological Function, specializes in an area of math called information geometry. She creates complex nonlinear models that describe physical phenomena, using information geometry to extract features.

For this study, conducted while she was at Cornell, Quinn used information geometry to analyze the behavior of undergraduate students enrolled in an honors-level mechanics course of a calculus-based physics sequence, a course designed for physics majors.

In all, 143 students were considered in the study: 109 self-identifying as men, 32 self-identifying as women, and two not disclosing a gender.

During the course of several lab periods, researchers assigned codes to each student at five-minute intervals, determined by what the students were handling: lab desktop computer, personal laptop computer, paper or equipment. A broad "other" category was added to ensure that all time was coded for each student.

The researchers then examined the data for patterns in behavior, especially differences based on gender.
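A minimal sketch of that tallying step, assuming invented coded records and simple category shares rather than the information-geometry analysis the team actually performed, might look like this:

```python
# Sketch: tally the fraction of five-minute coded intervals spent on each activity,
# broken down by gender. Records below are invented; the study's analysis was far
# richer (information geometry over the full coded time series).
from collections import Counter, defaultdict

# Each record: (student_id, gender, activity code for one five-minute interval)
records = [
    ("s1", "man", "equipment"), ("s1", "man", "equipment"), ("s1", "man", "desktop"),
    ("s2", "woman", "laptop"), ("s2", "woman", "paper"), ("s2", "woman", "laptop"),
    ("s3", "man", "equipment"), ("s3", "man", "other"),
]

counts = defaultdict(Counter)
for _, gender, activity in records:
    counts[gender][activity] += 1

for gender, counter in counts.items():
    total = sum(counter.values())
    shares = {act: round(n / total, 2) for act, n in counter.items()}
    print(gender, shares)
```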

The patterns that emerged challenged the researchers' assumptions. As they expected, men handled equipment more, while women handled laptops more. What surprised the researchers was the fact that gender-based differences were happening in the inquiry-based labs.

"It occurred to us that this removing of structure was having an unintended effect," Quinn said.

The evidence indicates that inquiry-based labs promote student agency, the researchers said, but they suggest that gender-based social structures can be removed by redesigning lab processes.

Holmes, who specializes in physics education research, has received a National Science Foundation grant to continue this line of research, which aims to make the whole physics community more equitable for students, instructors and researchers.

"This isn't an abstract concept being applied to other people," Quinn said. "This is about us."

Credit: 
Cornell University

Researchers incorporate computer vision and uncertainty into AI for robotic prosthetics

image: Imaging devices and environmental context. (a) On-glasses camera configuration using a Tobii Pro Glasses 2 eye tracker. (b) Lower limb data acquisition device with a camera and an IMU chip. (c) and (d) Example frames from the cameras for the two data acquisition configurations. (e) and (f) Example images of the data collection environment and terrains considered in the experiments.

Image: 
Edgar Lobaton

Researchers have developed new software that can be integrated with existing hardware to enable people using robotic prosthetics or exoskeletons to walk in a safer, more natural manner on different types of terrain. The new framework incorporates computer vision into prosthetic leg control, and includes robust artificial intelligence (AI) algorithms that allow the software to better account for uncertainty.

"Lower-limb robotic prosthetics need to execute different behaviors based on the terrain users are walking on," says Edgar Lobaton, co-author of a paper on the work and an associate professor of electrical and computer engineering at North Carolina State University. "The framework we've created allows the AI in robotic prostheses to predict the type of terrain users will be stepping on, quantify the uncertainties associated with that prediction, and then incorporate that uncertainty into its decision-making."

The researchers focused on distinguishing between six different terrains that require adjustments in a robotic prosthetic's behavior: tile, brick, concrete, grass, "upstairs" and "downstairs."

"If the degree of uncertainty is too high, the AI isn't forced to make a questionable decision - it could instead notify the user that it doesn't have enough confidence in its prediction to act, or it could default to a 'safe' mode," says Boxuan Zhong, lead author of the paper and a recent Ph.D. graduate from NC State.

The new "environmental context" framework incorporates both hardware and software elements. The researchers designed the framework for use with any lower-limb robotic exoskeleton or robotic prosthetic device, but with one additional piece of hardware: a camera. In their study, the researchers used cameras worn on eyeglasses and cameras mounted on the lower-limb prosthesis itself. The researchers evaluated how the AI was able to make use of computer vision data from both types of camera, separately and when used together.

"Incorporating computer vision into control software for wearable robotics is an exciting new area of research," says Helen Huang, a co-author of the paper. "We found that using both cameras worked well, but required a great deal of computing power and may be cost prohibitive. However, we also found that using only the camera mounted on the lower limb worked pretty well - particularly for near-term predictions, such as what the terrain would be like for the next step or two." Huang is the Jackson Family Distinguished Professor of Biomedical Engineering in the Joint Department of Biomedical Engineering at NC State and the University of North Carolina at Chapel Hill.

The most significant advance, however, is to the AI itself.

"We came up with a better way to teach deep-learning systems how to evaluate and quantify uncertainty in a way that allows the system to incorporate uncertainty into its decision making," Lobaton says. "This is certainly relevant for robotic prosthetics, but our work here could be applied to any type of deep-learning system."

To train the AI system, researchers connected the cameras to able-bodied individuals, who then walked through a variety of indoor and outdoor environments. The researchers then did a proof-of-concept evaluation by having a person with lower-limb amputation wear the cameras while traversing the same environments.

"We found that the model can be appropriately transferred so the system can operate with subjects from different populations," Lobaton says. "That means that the AI worked well even thought it was trained by one group of people and used by somebody different."

However, the new framework has not yet been tested in a robotic device.

"We are excited to incorporate the framework into the control system for working robotic prosthetics - that's the next step," Huang says.

"And we're also planning to work on ways to make the system more efficient, in terms of requiring less visual data input and less data processing," says Zhong.

Credit: 
North Carolina State University