
Researchers catalog dozens of mutations in crucial brain development gene

image: Genetic samples from developmentally disabled children have identified dozens of new mutations in the DDX3X gene that lead to smaller brains and intellectual disability because of the gene's essential role in neuron genesis and transport.

Image: 
Silver Lab, Duke University

DURHAM, N.C. -- An international team of researchers that pooled genetic samples from developmentally disabled patients from around the world has identified dozens of new mutations in a single gene that appears to be critical for brain development.

"This is important because there are a handful of genes that are recognized as 'hot spots' for mutations causing neurodevelopmental disorders," said lead author Debra Silver, an associate professor of molecular genetics and microbiology in the Duke School of Medicine. "This gene, DDX3X, is going to be added to that list now."

An analysis led by the Elliott Sherr lab at the University of California, San Francisco found that half of the DDX3X mutations in the 107 children studied caused a complete loss of function, making the gene stop working altogether, while the other half caused changes predicted to impair the gene's function without abolishing it.

The DDX3X gene is carried on the X chromosome, which occurs twice in females but only once in males. Only three of the children in the study were male, suggesting that an aberrant copy of the gene is most often lethal for males, who carry only a single X.

In humans, this syndrome often results in smaller brains and intellectual disability. Understanding how and why DDX3X mutations lead to developmental issues provides insight into how the gene functions normally.

With the finding that DDX3X was a common element in the developmental disabilities of these children, Silver's team "used a set of experimental tricks to see how it would lead to disease." In mice, her team manipulated levels of the gene to see how development of the cerebral cortex would be altered.

Changes in the gene led to fewer neurons being produced in a dosage-dependent manner, Silver said.

In the most severe cases, Sherr's team showed that functional changes in DDX3X resulted in a smaller or even completely missing corpus callosum, the broad communication structure connecting the two halves of the brain. In some cases, identical genetic spelling errors that occurred in several children also led to polymicrogyria, an abnormal folding pattern on the surface of the brain.

"Not every mutation acts the same," Silver said.

The collaborative team also tested how 'missense' mutations, in which the protein is made but somehow defective, would impair brain development. In the most severe missense mutations, the way protein was made was affected, leading to the formation of 'clumps' of RNA-protein aggregates in neural stem cells, similar to the protein clumps found in Alzheimer's disease, Silver said.

Together, these issues point to a role for DDX3X in the genesis of developing neurons as the brain grows. "The way neurons are made and organized is disrupted," Silver said. "We know that this gene is required for early brain development which can cause a whole host of developmental problems."

Almost all of the mutations seen in the study children were 'de novo,' meaning they happened during the child's early development, rather than being inherited from a parent.

Parents of the children with these mutations have established the DDX3X Foundation to pursue better understanding of what causes the disease, identify therapies, and provide a supportive community for families.

Credit: 
Duke University

Diabetes remission rates after 2 common weight-loss surgeries

What The Study Did: Researchers examined associations between two of the most common weight-loss surgeries and type 2 diabetes outcomes by comparing diabetes remission and relapse rates, glycemic control and weight loss over five years among 9,700 adults with type 2 diabetes who had Roux-en-Y gastric bypass or sleeve gastrectomy.

Authors: Kathleen McTigue, M.D., of the University of Pittsburgh, is the corresponding author.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/ 

(doi:10.1001/jamasurg.2020.0087)

Editor's Note: The article includes conflict of interest and funding/support disclosures. Please see the articles for additional information, including other authors, author contributions and affiliations, conflicts of interest and financial disclosures, and funding and support.

Credit: 
JAMA Network

Bilingualism acts as a cognitive reserve factor against dementia

The conclusions of a study carried out by first author Víctor Costumero, together with Marco Calabria and Albert Costa (who died in 2018), members of the Speech Production and Bilingualism (SPB) group at the Center for Brain and Cognition (CBC) of UPF's Department of Information and Communication Technologies (DTIC), along with researchers from the Universities Jaume I, of Valencia, of Barcelona and of Jaén, IDIBELL, Hospital La Fe (Valencia) and Grupo Médico ERESA (Valencia), show that bilingualism acts as a cognitive reserve factor against dementia. Lidón Marín, one of the authors of the article, states that "although bilingual patients show greater brain atrophy, the cognitive level among bilinguals and monolinguals is the same." The work has been published in the scientific journal Alzheimer's Research & Therapy under the title "A cross-sectional and longitudinal study on the protective effect of bilingualism against dementia using brain atrophy and cognitive measures", and was financed by the La Marató de TV3 Foundation.

The research analysed around a hundred bilingual and monolingual patients with mild cognitive impairment, with an average age of 73 years. People who use Catalan and Spanish alternately, regardless of register, were considered bilingual. People who know and understand Catalan and can use it occasionally, but do not use it regularly, were considered monolingual (or passive bilingual). César Ávila, director of the research group, explains that "the alternating use of these two languages (Catalan and Spanish) in any situation is cognitively demanding because there are many similarities between them".

At the beginning of the study, the two groups of patients showed the same level of cognitive impairment (in language, memory, etc.). However, the bilinguals showed greater brain atrophy than the monolinguals, implying that bilinguals need a greater load of brain injury to show the same symptoms. The researchers followed the patients' evolution for seven months and observed that the bilingual group lost less brain volume and better maintained its cognitive abilities. The researchers consider that "this points to a cognitive reserve effect of bilingualism". These results are especially relevant because "this would be the first longitudinal evidence of this possible protective effect of bilingualism against dementia", indicates Ávila.

The study was carried out with patients from the General University Hospital of València and the La Fe University and Polytechnic Hospital, with similar socio-demographic characteristics and educational levels. Previous data already indicated that bilingual people (in any pair of languages) develop dementia on average five years later than monolingual people. One of the contributions of this study, in addition to comparing two moments in time, has been to reveal that the underlying mechanism is the cognitive stimulation favoured by alternating between one language and the other. Although it is too early to apply these results to treatments for dementia, "we do know that there are cognitive stimulation therapies that include practical exercises in the use of different languages", explains researcher Víctor Costumero.

In addition to the research group from Castelló, the study also involved the Center for Brain and Cognition of Pompeu Fabra University in Barcelona; the ERI Lectura of the University of València; the ERESA Medical Group of València; the Department of Neurology of the General Hospital of València; the Neurology Unit of the University and Polytechnic Hospital La Fe; the Cognitive Processes Section of the Department of Cognition, Development and Educational Psychology of the University of Barcelona; the Cognition and Brain Plasticity Group of the Bellvitge Biomedical Research Institute (IDIBELL) in L'Hospitalet de Llobregat; and the Department of Computer Science of the University of Jaén.

Credit: 
Universitat Pompeu Fabra - Barcelona

All optical control of exciton flow in a colloidal quantum well complex

image: (a) Normalized contour map of emission spectra when the nanomaterial mixture is coated in a capillary tube. White dashed lines indicate the thresholds of red lasing (acceptor) and green lasing (donor). Top insets: photographs corresponding to spontaneous emission, acceptor lasing and dual lasing, respectively. (b) Integrated lasing intensity as a function of pump fluence for the donors (green dots/line) and the acceptors (red dots/line). The three emission regimes (spontaneous emission, acceptor lasing and dual lasing) are shaded in grey, light red and light green, respectively. (c) Normalized integrated intensity of the donors' spontaneous emission. In the acceptor lasing regime, excitons are transferred to the acceptors more efficiently, so the donors' spontaneous emission increases sub-linearly with excitation power; it then increases super-linearly on entering the dual lasing regime. (d) Calculated exciton outflow efficiency in the donor. Three distinct efficiencies (50%, 90% and 2%) are achieved and controlled by the excitation fluence, corresponding to the spontaneous emission, acceptor lasing and dual lasing regimes. (e) Illustration of controlling exciton flow by stimulated emission. The fundamental mechanism is to control the density of excited donors N1D and unexcited (ground-state) acceptors N0A by exploiting the very high exciton recombination rate of stimulated emission.

Image: 
Junhong Yu, Manoj Sharma, Ashma Sharma, Savas Delikanli, Hilmi Volkan Demir, Cuong Dang

Exciton-based solid-state devices have the potential to be essential building blocks of modern information technology and to slow the end of Moore's law. Exploiting excitonic devices requires the ability to control excitonic properties (e.g., exciton flow, exciton recombination rates or exciton energy) in an active medium. Until now, however, the demonstrated techniques for excitonic control have either been inherently complex or have sacrificed operation speed, which is self-defeating and impractical for actual implementation. Hence, a scheme emphasizing all-optical control, bottom-up fabrication and self-assembly is highly desirable for real-world applications.

In a new paper published in Light: Science & Applications, scientists from the School of Electrical & Electronic Engineering, Nanyang Technological University, Singapore, developed a convenient way to control exciton flow between different colloidal quantum wells (CQWs) at room temperature, all through optical signals. Through the combination of stimulated emission and Förster resonance energy transfer (FRET), the flow of excitons between donor cadmium selenide (CdSe) core-only CQWs and acceptor CdS/CdSe/CdS core-shell CQWs can be strongly manipulated. Using this method, continuous transition among three distinct exciton flow regimes with efficiencies of ~50%, ~90% and ~2% has been demonstrated. The reported method and technique, which demonstrate a lab prototype of an all-optically controllable exciton flow device with multiple modulation stages, may inspire the design of all-optical excitonic circuits operating at room temperature.

The core idea of the method is based on the competition between the stimulated emission rate, the spontaneous emission rate and the FRET rate, together with the threshold behavior of stimulated emission. The scientists summarize the exciton flow control process in their work:

"At low pump fluence when the emission of both donors and acceptors is spontaneous, nearly 50% of the exciton population in the donors outflows into the acceptors via FRET. By increasing the pumping level to achieve stimulated emission in the acceptors, we can greatly enhance the exciton flow efficiency up to 90% since quick depletion of excitons in the acceptors significantly promotes the FRET process. Upon further increasing the fluence to initiate stimulated emission in the donors, the exciton flow towards the acceptors almost switches off because the stimulated emission rate in donors is much faster than the FRET rate."

"To get deeper insight into this process, we have developed a FRET-coupled kinetic model to identify the competing processes responsible for the manipulation of exciton flow at different level of optical excitation. The simulation results can qualitatively reproduce the exciton flow trend from the donors to the acceptors demonstrated in our experiments." Junhong Yu, the first author of the research, added.

"This active excitonic control in an all-optical device (i.e., a whispering gallery mode laser configuration) not only offers a platform to gain deeper insight of the FRET physics but also is highly preferable for excitonic-based information processing with potentials of all-optical-control excitonic circuits." Dr. Cuong Dang, the senior author of the research said.

"The authors discuss a very timely scientific challenge, which is to move towards the excitonic devices. Controlling the exciton flow in the optically active media is the essential requirement for the development of solid-state device, and thus, has been the center of attention. The use of population overlap modulated by the lasing action in the donor-acceptor pairs will be an interesting addition to the extension excitonic studies on optically active materials. This study has merits and the advance is technological, offering an all-optical route to manipulate exciton flow in colloidal quantum well structures." Dr. Lei, one of the reviewer of LSA said.

Credit: 
Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS

Neanderthal migration

At least two different groups of Neanderthals lived in Southern Siberia and an international team of researchers including scientists from Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) have now proven that one of these groups migrated from Eastern Europe. The researchers have now published their findings in the journal Proceedings of the National Academy of Sciences of the United States of America (PNAS).

Neanderthals were widespread in Europe and also migrated to Southern Siberia, but the origins of these Siberian Neanderthals and when they migrated were not known.

An international team of researchers including archaeologist Thorsten Uthmeier, professor of Prehistory and Protohistory at FAU, has now examined tools found in the Chagyrskaya cave in the Altai mountains in Russia in order to find the answer.

Parallels to sites in Central and Eastern Europe

The site has been excavated since 2019 as part of a DFG research project in conjunction with the Siberian Branch of the Russian Academy of Sciences in Novosibirsk. In addition to stone tools and bones from hunting remains, two main find layers yielded numerous Neanderthal fossils. After discovering that the stone tools did not resemble any of the tools from groups living in the Altai during the same period, the team searched for comparable finds within a larger radius.

Geometric morphological analyses of 3D models of scanned tools showed that the stone tools found in the Chagyrskaya cave were very similar to artefacts from the Micoquien, the name given to the corresponding stone tool industry in Central and Eastern Europe. The comparative scans came, among others, from sites in Bavaria, including FAU's own Sesselfelsgrotte cave, where most of the artefacts used in the comparison were found.

The researchers were able to reconstruct the route of migration of the Siberian Neanderthals using DNA analyses of Neanderthal bones and sediments from the Chagyrskaya cave. The route led the groups during the course of several generations via Croatia and the North Caucasus to the Altai.

Several groups of Neanderthals migrated to Siberia

The DNA analyses also showed that the Neanderthals from the Chagyrskaya cave differ significantly in terms of their DNA from a second Altai group found in the Denisova cave. This discovery fits with the observation that the Denisova Neanderthals were apparently not familiar with tools from the Micoquien. The research team therefore presumes that several groups of Neanderthals migrated to Siberia.

The interdisciplinary examinations of the Neanderthals found in the Chagyrskaya cave, in which Bavarian find sites investigated by FAU play an important role, clearly show that the wave of migration of groups of this species of human 60,000 years ago originated in Central and Eastern Europe.

At the same time, the researchers from Novosibirsk led by Professor Ksenia Kolobova and from FAU found rare evidence that artefacts are culturally informative indicators of population movements.

Credit: 
Friedrich-Alexander-Universität Erlangen-Nürnberg

Honeybee dance dialects

image: Dwarf honeybee, giant honeybee and eastern honeybee (from left): researchers have studied the dance dialects of these three bee species.

Image: 
(Photos: Patrick Kohl / Fabienne Maihoff)

After more than 70 years, a great mystery of zoology has been solved: honeybees really do use different dance dialects in their waggle dance. Which dialect a species has evolved is related to the radius around the hive within which it collects food.

This is reported by research teams from the Biocenter of Julius-Maximilians-Universität Würzburg (JMU) in Bavaria, Germany, and the National Centre for Biological Sciences (NCBS) in Bangalore, India, in the journal Proceedings of the Royal Society B.

That honeybees might have dance dialects was first proposed in the 1940s by Nobel laureate Karl von Frisch and his student Martin Lindauer. Later experiments, however, raised doubts about the existence of the dialects. The new results now show that von Frisch and Lindauer were right. The two pioneers of behavioural research were also right in their explanation of why the dance dialects exist at all.

This is what the bees' dances are about

The dance language of the honeybees is a unique form of symbolic communication in the animal kingdom. For example, when a bee has discovered a blossoming cherry tree, it returns to the hive. There it informs the other bees with a dance about the direction in which the food source is located and how far away it is.

Part of the dance is the so-called waggle run, in which the bees energetically shake their abdomen. The direction of the waggle run on the honeycomb communicates the direction of the destination relative to the position of the sun, while the duration of the wagging indicates the distance.

"As the distance of the food source from the nest increases, the duration of the wagging increases in a linear fashion," explains JMU PhD student Patrick Kohl, first author of the publication. However, this increase is different for different bee species. This was shown in experiments carried out by the research team in southern India.

Experiments with three honeybee species in South India

There, three bee species with different radii of action were studied. The eastern honeybees (Apis cerana) fly up to about one kilometre away from the nest. The dwarf honeybees (Apis florea) fly up to 2.5 kilometres, the giant honeybees (Apis dorsata) about three kilometres.

For the increase in waggle duration, the relationship is reversed: for a food source 800 metres away, an eastern honeybee performs a much longer waggle run than a dwarf honeybee, which in turn waggles longer than a giant honeybee. To communicate the same distance to a food source, each species thus uses its own dance dialect.
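The linear distance-to-duration mapping can be sketched as a toy model (the slope values below are hypothetical, chosen only to reflect the reported ordering: species with shorter foraging ranges use steeper dialects; they are not the measured values from the study):

```python
# Toy model of species-specific dance dialects. Waggle duration grows
# linearly with distance: duration = slope * distance. Slopes are
# hypothetical illustrations, not measured values.
DIALECT_SLOPE_S_PER_KM = {
    "Apis cerana": 3.0,   # forages up to ~1 km   -> steepest dialect
    "Apis florea": 1.5,   # forages up to ~2.5 km
    "Apis dorsata": 1.0,  # forages up to ~3 km   -> shallowest dialect
}

def waggle_duration_s(species: str, distance_km: float) -> float:
    """Waggle-run duration (seconds) a dancer of `species` uses for a
    food source `distance_km` away, under this toy linear model."""
    return DIALECT_SLOPE_S_PER_KM[species] * distance_km

# The same 800 m food source is announced with three different durations:
for sp in DIALECT_SLOPE_S_PER_KM:
    print(sp, round(waggle_duration_s(sp, 0.8), 2), "s")
```

The model also makes clear why a dialect mismatch matters: a follower bee decoding another species' dance with its own slope would misjudge the distance by the ratio of the two slopes.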

"We also saw this when we compared our results with published data from other research groups," says Patrick Kohl. The correlation between foraging range and dance dialect was corroborated when looking at honeybee species native to England, Botswana, and Japan.

Why did JMU researchers go to South India in the first place? "India has the advantage that three honeybee species live in the same area, so that their dance dialects can be easily compared," said Kohl. "We also have very good contacts with researchers at NCBS, a top research address in South Asia."

Dialects as evolutionary adaptations

The results also confirm what von Frisch and Lindauer had suspected about the meaning of the dance dialects. These are evolutionary adaptations to the honeybee species' typical foraging distances. Honeybees, for example, which regularly fly long distances, cannot afford to communicate these distances in the hive with very long waggle runs: On the crowded dance floor in the hive, other bees would have difficulties following such "marathon waggings".

The scientists' conclusion: The dance dialects of the bees are an excellent example of how complex behaviours can be tuned as an evolutionary adaptation to the environment.

Credit: 
University of Würzburg

First bufferless 1.5 μm III-V lasers grown directly on silicon wafers in Si-photonics

image: Schematic of III-V laser array directly grown on Si-photonics 220 nm SOI platform

Image: 
HKUST

Researchers from the Hong Kong University of Science and Technology (HKUST) have reported the world's first 1.5 μm III-V lasers directly grown on the industry-standard 220 nm SOI (silicon-on-insulator) wafers without buffer, potentially opening a path to the "holy grail" of present silicon (Si) photonics research.

Seamlessly bridging the active III-V light sources with the passive Si-based photonic devices, the demonstration could be deployed as light sources in integrated circuits to greatly improve circuit speed, power efficiency and cost-effectiveness.

In conventional approaches to integrating III-V lasers on Si reported in the literature, thick III-V buffers of up to a few micrometers are used to reduce defect densities, which poses huge challenges for efficient light interfacing between the epitaxial III-V lasers and the Si-based waveguides.

For the first time, the research team, led by Prof. LAU Kei-May of HKUST's Department of Electronic and Computer Engineering and Post-doctoral Fellow Dr. HAN Yu, devised a novel growth scheme that eliminates the requirement for thick III-V buffers and thus promotes efficient light coupling into the Si waveguides. The bufferless feature points toward fully integrated Si-based photonic integrated circuits.

That has enabled the first demonstration of 1.5 μm III-V lasers directly grown on the industry-standard 220 nm SOI wafers using metal organic chemical vapor deposition (MOCVD). Previous demonstrations required non-industry-standard bulk Si or thick SOI wafers.

The research findings were recently published online in Optica in February 2020.

The world's growing appetite for Internet services and the digitization of our lives lead to a vast amount of digital data being generated, processed, stored, and transmitted.

Silicon is the most widely used material in the manufacturing of semiconductors, which are embedded into nearly every piece of communications technology that we rely on every day, from computers and smartphones to datacenters and satellite communications.

But improvements in efficiency of conventional electronic data systems cannot catch up with the soaring data traffic, which calls for the integration of photonic functionalities onto the conventional Si-based electronic platform. The integration could produce optoelectronic integrated circuits with unparalleled speed and functionalities, and enable new applications.

Yet fundamental differences between Si and III-V materials mean it is extremely challenging to directly grow III-V functionalities on the Si platform.

Prof. Lau's group at HKUST's Photonics Technology Center has endeavored to integrate III-V materials and functionalities on mainstream silicon wafers for over a decade, innovating and optimizing various approaches to improve the performance of III-V lasers grown on Si, with the goal of progressively approaching the requirements of the industry. This work is part of their project on monolithic integration of III-V lasers on silicon.

In their method, the team first devised a unique growth scheme to directly grow high-quality III-V materials on the industry-standard 220 nm SOI platform. They then evidenced the excellent crystalline quality of these epitaxial III-V materials through extensive transmission electron microscopy and photoluminescence measurements. The team designed and fabricated air-clad laser cavities based on numerical simulations, and testing of the devices showed that the lasers could sustain room-temperature, low-threshold lasing in the technologically important 1.5 μm band under optical excitation.

The demonstration points to the possibility of monolithically integrating III-V lasers on the industry-standard 220 nm SOI wafers in an economical, compact, and scalable way.

Prof. Lau said: "If practically applied, our technology could enable a significant improvement of the speed, power consumption, cost-effectiveness, and functionality of current Si-based integrated circuits. Our daily electronic devices, such as smartphones, laptops and TVs - basically everything connected to the internet - will be much faster, cheaper, using much less power and multi-functional."

Dr. Han added: "The next step of our research will be to design and demonstrate the first electrically-driven 1.5 μm III-V lasers directly grown on the 220 nm SOI platforms, and devise a scheme to efficiently couple light from the III-V lasers into Si-waveguides and thereby conceptually demonstrate fully-integrated Si-photonics circuits."

Credit: 
Hong Kong University of Science and Technology

A small step for atoms, a giant leap for microelectronics

image: Researchers in Taiwan, China and at Rice University made wafer-sized, two-dimensional sheets of hexagonal boron nitride, as reported in Nature. The material may be removed from its copper substrate and used as a dielectric for two-dimensional electronics.

Image: 
TSMC/Rice University

HOUSTON - (March 4, 2020) - Step by step, scientists are figuring out new ways to extend Moore's Law. The latest reveals a path toward integrated circuits with two-dimensional transistors.

A Rice University scientist and his collaborators in Taiwan and China reported in Nature today that they have successfully grown atom-thick sheets of hexagonal boron nitride (hBN) as two-inch diameter crystals across a wafer.

Surprisingly, they achieved the long-sought goal of making perfectly ordered crystals of hBN, a wide band gap semiconductor, by taking advantage of disorder among the meandering steps on a copper substrate. The random steps keep the hBN in line.

Set into chips as a dielectric between layers of nanoscale transistors, wafer-scale hBN would excel in damping the electron scattering and trapping that limit the efficiency of an integrated circuit. But until now, no one had been able to make perfectly ordered hBN crystals large enough -- in this case, covering a wafer -- to be useful.

Brown School of Engineering materials theorist Boris Yakobson is co-lead scientist on the study with Lain-Jong (Lance) Li of the Taiwan Semiconductor Manufacturing Co. (TSMC) and his team. Yakobson and Chih-Piao Chuu of TSMC performed theoretical analysis and first principles calculations to unravel the mechanisms of what their co-authors saw in experiments.

As a proof of concept for manufacturing, experimentalists at TSMC and Taiwan's National Chiao Tung University grew a two-inch, 2D hBN film, transferred it to silicon and then placed a layer of field-effect transistors patterned onto 2D molybdenum disulfide atop the hBN.

"The main discovery in this work is that a monocrystal across a wafer can be achieved, and then they can move it," Yakobson said. "Then they can make devices."

"There is no existing method that can produce hBN monolayer dielectrics with extremely high reproducibility on a wafer, which is necessary for the electronics industry," Li added. "This paper reveals the scientific reasons why we can achieve this."

Yakobson hopes the technique may also apply broadly to other 2D materials, with some trial and error. "I think the underlying physics is pretty general," he said. "Boron nitride is a big-deal material for dielectrics, but many desirable 2D materials, like the 50 or so transition metal dichalcogenides, have the same issues with growth and transfer, and may benefit from what we discovered."

In 1975, Intel's Gordon Moore predicted that the number of transistors in an integrated circuit would double every two years. But as integrated circuit architectures get smaller, with circuit lines down to a few nanometers, the pace of progress has been hard to maintain.

The ability to stack 2D layers, each with millions of transistors, may overcome such limitations if the layers can be isolated from one another. Insulating hBN is a prime candidate for that purpose because of its wide band gap.

Despite having "hexagonal" in its name, monolayers of hBN as seen from above appear as a superposition of two distinct triangular lattices of boron and nitrogen atoms. For the material to perform up to spec, hBN crystals must be perfect; that is, the triangles have to be connected and all point in the same direction. Non-perfect crystals have grain boundaries that degrade the material's electronic properties.

For hBN to become perfect, its atoms have to precisely align with those on the substrate below. The researchers found that copper in a (111) arrangement -- the number refers to how the crystal surface is oriented -- does the job, but only after the copper is annealed at high temperature on a sapphire substrate and in the presence of hydrogen.

Annealing eliminates grain boundaries in the copper, leaving a single crystal. Such a perfect surface would, however, be "way too smooth" to enforce the hBN orientation, Yakobson said.

Last year, Yakobson reported research on growing pristine borophene on silver (111), as well as a theoretical prediction that copper can align hBN by virtue of the complementary steps on its surface. The copper surface was vicinal -- that is, slightly miscut to expose atomic steps between the expansive terraces. That paper caught the attention of industrial researchers in Taiwan, who approached the professor after a talk there last year.

"They said, 'We read your paper,'" Yakobson recalled. "'We see something strange in our experiments. Can we talk?' That's how it started."

Informed by his earlier experience, Yakobson suggested that thermal fluctuations allow copper (111) to retain step-like terraces across its surface, even when its own grain boundaries are eliminated. The atoms in these meandering "steps" present just the right interfacial energies to bind and constrain hBN, which then grows in one direction while it attaches to the copper plane via the very weak van der Waals force.

"Every surface has steps, but in the prior work, the steps were on a hard-engineered vicinal surface, which means they all go down, or all up," he said. "But on copper (111), the steps are up and down, by just an atom or two randomly, offered by the fundamental thermodynamics."

Because of the copper's orientation, the horizontal atomic planes are offset by a fraction of the lattice spacing relative to the plane underneath. "The surface step-edges look the same, but they're not exact mirror-twins," Yakobson explained. "There's a larger overlap with the layer below on one side than on the opposite."

That makes the binding energies on each side of the copper plateau different by a minute 0.23 electron volts (per every quarter-nanometer of contact), which is enough to force docking hBN nuclei to grow in the same direction, he said.
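As a rough back-of-the-envelope sketch (not from the paper itself), the 0.23 eV-per-quarter-nanometer edge-energy difference can be translated into a Boltzmann preference for one orientation over the other; the nucleus edge length and growth temperature below are assumed values for illustration only.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def orientation_ratio(edge_nm: float, temp_k: float) -> float:
    """Boltzmann population ratio of wrongly- vs. correctly-oriented
    hBN nuclei, given the 0.23 eV binding-energy difference per
    0.25 nm of step-edge contact quoted in the article."""
    delta_e = 0.23 * (edge_nm / 0.25)  # total energy difference in eV
    return math.exp(-delta_e / (K_B * temp_k))

# Assumed: a ~1 nm nucleus edge at a ~1300 K CVD growth temperature
ratio = orientation_ratio(1.0, 1300.0)
print(f"wrong/right orientation ratio: {ratio:.1e}")
```

Even at high growth temperatures the misoriented population is vanishingly small once a nucleus spans a few step-edge atoms, consistent with the single-orientation growth the article describes.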

The experimental team found the optimal copper thickness was 500 nanometers, enough to prevent its evaporation during hBN growth via chemical vapor deposition of ammonia borane on a copper (111)/sapphire substrate.

Credit: 
Rice University

Scientists discover new repair mechanism for alcohol-induced DNA damage

image: Artist impression of an alcohol-induced interstrand crosslink (ICL). The ICL is the yellow connection between both DNA strands, making them stick together.

Image: 
MRC Laboratory of Molecular Biology (MRC LMB)

Researchers of the Hubrecht Institute (KNAW) in Utrecht, The Netherlands, and the MRC Laboratory of Molecular Biology in Cambridge, United Kingdom, have discovered a new way in which the human body repairs DNA damage caused by a degradation product of alcohol. That knowledge underlines the link between alcohol consumption and cancer. The research groups of Puck Knipscheer and Ketan J. Patel worked together on this study and published the results in the scientific journal Nature on the 4th of March.

Our DNA is a daily target for a barrage of damage caused by radiation or toxic substances such as alcohol. When alcohol is metabolized, acetaldehyde is formed. Acetaldehyde causes a dangerous kind of DNA damage - the interstrand crosslink (ICL) - that sticks together the two strands of the DNA. As a result, it obstructs cell division and protein production. Ultimately, an accumulation of ICL damage may lead to cell death and cancer.

Defense against DNA damage

Thankfully, every cell in our body possesses a toolkit with which it can repair this type of damage to the DNA. The first line of defense against ICLs caused by acetaldehyde is the ALDH2 enzyme, which largely breaks down acetaldehyde before it causes any harm. However, not everyone benefits from this enzyme: about half of the Asian population, more than 2 billion people worldwide, carries a mutation in the gene coding for it. Because they are unable to break down acetaldehyde, these people are more prone to developing alcohol-related cancer.

New line of defense

Scientists from the groups of Puck Knipscheer (Hubrecht Institute) and Ketan J. Patel (MRC Laboratory of Molecular Biology) studied the second line of defense against alcohol-induced ICLs: mechanisms that remove the damage from the DNA. The investigators studied these mechanisms using protein extracts made from the eggs of the clawed frog (Xenopus laevis), an animal model commonly used in biology research. By using these extracts to repair an ICL formed by acetaldehyde, they discovered the existence of two mechanisms that repair ICL damage: the previously known Fanconi anemia (FA) pathway and a novel, faster route. These two mechanisms differ from each other: in the FA pathway the DNA is cut to remove the ICL, whereas the enzymes in the newly discovered route cut the crosslink itself.

Specific damage

With this research, the scientists provide a mechanistic sneak peek into the process of DNA damage repair. 'We now know that there are multiple ways in which the body can repair ICLs in the DNA', says co-lead author Puck Knipscheer. She thinks that this type of research may lead to a better understanding of treatment for alcohol-related types of cancer. 'But before we can do that, we first have to know exactly how this novel mechanism for ICL repair works.'

Credit: 
Hubrecht Institute

Destruction of an Atlantic rain forest fragment raises the local temperature

image: If 25% of a one-hectare forest remnant is cut down, the impact on the local climate will be a temperature increase of 1 °C

Image: 
Carlos Joly

A study conducted in Brazil by researchers at the University of São Paulo (USP) and the University of Campinas (UNICAMP) shows that deforesting 25% of an Atlantic rainforest fragment of approximately 1 hectare increases the local temperature by 1 °C. Clear-cutting the entire fragment would increase the local temperature by as much as 4 °C. The findings are published in the journal PLOS ONE.

"We were able to detect the warming effects on the climate due to the deforestation of Atlantic rainforest fragments, of which there are many in Southeast Brazil," said Humberto Ribeiro da Rocha, principal investigator of the study. Rocha is a professor at the University of São Paulo Institute of Astronomy, Geophysics and Atmospheric Sciences (IAG-USP).

The investigation was conducted under the aegis of two projects supported by the São Paulo Research Foundation - FAPESP, one associated with its Research Program on Global Climate Change (RPGCC) and the other with its Research Program on Biodiversity Characterization, Conservation, Restoration and Sustainable Use (BIOTA-FAPESP).

According to Rocha, scientific evidence is already available that shows that the destruction of tropical forests leads to warmer air at a local scale, but this evidence is based on measurements taken in large areas, mainly by research conducted in the Amazon.

"No one had ever produced detailed information on the deforestation of small fragments or studies that take into account different levels of anthropization [transformation of the environment by human activity]," said Rocha, who is a member of the RPGCC's steering committee.

To fill this research gap, researchers analyzed the relationship between the degree of deforestation and local temperature increases in Atlantic rainforest remnants located in Serra do Mar, a mountain range that stretches along the northern coast of São Paulo state.

Land surface temperature (LST) was estimated using heat flux data continuously recorded around the globe by infrared optical sensors such as those on board NASA's Landsat Earth observation satellites.

Based on these data, the researchers calculated an annual average LST for tens of thousands of Atlantic rainforest samples, each with an area of approximately 1 hectare. Forest cover in these samples ranged from fully forested to completely deforested, in gradations of 1%, reflecting different degrees of anthropization.

The calculations were performed during the PhD research of Raianny Leite do Nascimento Wanderley, under Rocha's supervision. They showed higher temperatures in less forested areas. Each 25% increase in the destruction of native vegetation resulted in an LST increase of 1 °C; thus, total deforestation was correlated with a warming of 4 °C.
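The reported relationship is effectively linear, and can be sketched minimally as follows (the function name and the linear form are an illustration of the reported numbers, not the authors' statistical model):

```python
def lst_increase(deforested_fraction: float) -> float:
    """Approximate local land-surface-temperature increase (deg C) for a
    ~1-hectare Atlantic rainforest fragment, using the linear relation
    reported in the study: +1 deg C per 25% of forest cover removed."""
    if not 0.0 <= deforested_fraction <= 1.0:
        raise ValueError("deforested_fraction must be between 0 and 1")
    return 4.0 * deforested_fraction

print(lst_increase(0.25))  # quarter of the fragment cleared: +1.0 deg C
print(lst_increase(1.0))   # total clearing: +4.0 deg C
```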

"This detected pattern is interpreted as characterizing the impact of forest cover loss on the microclimate," Rocha said.

Impact on the forest

According to the researchers, the Atlantic rainforest fragments they studied were located at relatively higher altitudes and had proportionately more carbon stored in the ground than those in the Amazon rainforest areas. Deforestation of Atlantic rainforest areas can therefore jeopardize the biome's carbon balance.

"The Atlantic rainforest is currently in equilibrium and may even be marginally absorbing carbon from the atmosphere but could become a source of carbon emissions," said Carlos Joly, a professor at UNICAMP and one of the authors of the study. Joly is a member of BIOTA-FAPESP's steering committee.

Rising temperatures in these forest fragments affect plant respiration more than photosynthesis. This also contributes to the release of larger amounts of carbon from the forest into the atmosphere, Joly explained.

"The two processes combined create a hazardous synergy that leads to a rise in carbon emissions from the forest to the atmosphere," he said.

The effects of deforestation-driven warming in Atlantic rainforest fragments may vary from one tree species to another, he added. Pioneer species, which survive under adverse conditions owing to their high reproductive capacity, usually display greater resilience to temperature changes.

"We don't have enough data yet to predict how long it will take, but in the long run, rising temperatures in Atlantic rainforest fragments due to deforestation could certainly influence the survival of tree species in the forest, albeit some species more than others," he said.

"The proportion of typical mature forest species may diminish, while that of pioneer or initial secondary species, which are more plastic, could increase."

Functions impaired

Considered one of the world's richest and most endangered forests, the Atlantic rainforest occupies 15% of Brazil's land mass in an area that is home to 72% of the population. The biome decreased by 113 square kilometers between 2017 and 2018, according to recent data from the Atlas da Mata Atlântica based on continuous monitoring by NGO Fundação SOS Mata Atlântica in partnership with the National Space Research Institute (INPE).

In addition to the impact on biodiversity, the researchers stressed that even small-scale deforestation impairs important ecosystem services provided by the Atlantic rainforest, such as heat regulation.

"The forest is extremely important to maintaining milder temperatures on the local and regional scale. Changes in its functioning could disrupt this type of ecosystem service," Joly said.

Water supply may also be affected. The Atlantic rainforest is home to seven of Brazil's nine largest drainage basins, where rivers originate that flow into reservoirs that are responsible for almost 60% of the nation's hydroelectric power and supply water to 130 million people.

"The Atlantic rainforest doesn't produce water but protects the springs and permits the storage of water in reservoirs for consumption, power generation, agricultural irrigation and fishing, among other activities," Joly said.

Located in extremely rugged terrain, the Atlantic rainforest helps prevent landslides at times of heavy rain. "Destruction of these forest fragments or changes in their functioning could greatly diminish this protection," Joly said.

Deforestation of the biome, now reduced to 12.4% of its original size, is more severe in São Paulo state than in other areas owing to the construction of roads, gas pipelines and other kinds of infrastructure, he added. This area has also suffered from urban expansion, including the construction of both shantytowns and high-income gated communities.

As one of the most endangered biomes in South America, the Atlantic rainforest has been a focus for numerous studies regarding restoration in recent years. Most of the studies have been conducted by researchers affiliated with BIOTA-FAPESP, according to Joly.

The largest initiative to restore the biome is governed by the Atlantic Rainforest Restoration Pact, launched in 2009 as a multi-stakeholder movement to restore 15 million hectares by 2050.

"A great deal of knowledge has been acquired regarding restoration of the Atlantic rainforest. Evidently, we won't be able to replace everything that has been lost, but at least some of the biome's functions can be restored," Joly said.

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo

Teaming basic scientists with clinicians may improve medical education retention

(Boston)--There is a trend in modern medical school curriculum design to integrate the basic sciences and clinical sciences. Integrating basic science education with its clinical application from the initial stages of learning is thought to improve retention of information and facilitate the transfer of knowledge to the clinical setting.

Basic science educators are not clinicians, yet to accommodate integration they must adjust their content to mesh appropriately with its clinical application. While achievable, this is a challenge that requires intentional effort on the part of the basic science educators.

Researchers from Boston University School of Medicine (BUSM) believe a practical way to facilitate curricular integration is to create opportunities for basic science educators to learn about the clinical application of their area of expertise through shadowing and collaborations with clinician educators and to pair these initiatives with training in effective medical education practices.

"By shadowing clinician educators during patient care or clinical teaching, basic scientists can observe how clinicians apply basic science concepts. Such opportunities help basic science educators better understand how to prioritize and communicate information that has long-term relevance for their learners," explains corresponding author M. Isabel Dominguez, PhD, assistant professor of medicine at BUSM.

Most medical schools are wrestling with the challenge of integration in medical education. Dominguez along with co-author Ann Zumwalt, PhD, BUSM associate professor of anatomy & neurobiology, discuss practical strategies to develop these opportunities and how they benefit educators.

They believe there are numerous ways that both individuals and institutions can create and facilitate such faculty development opportunities, both for basic science faculty who are full-time educators and those who engage in medical education part time. "Ultimately, these interventions and initiatives will benefit both the institution's curriculum and the student learners impacted by the curriculum," adds Zumwalt.

Credit: 
Boston University School of Medicine

Using ultrasound localization microscopy to detect oxygen levels in tissues

image: Pengfei Song, an assistant professor of electrical and computer engineering at the Beckman Institute at the University of Illinois, used ultrasound localization microscopy to demonstrate that oxygen levels are lower in tumors compared to healthy tissue.

Image: 
Doris Dahl, Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign

Researchers at the University of Illinois at Urbana-Champaign are using a new application of an existing imaging technique that may help detect tumors in humans.

The technique, super-resolution ultrasound localization microscopy, was used to visualize the distribution of blood vessels and measure oxygen levels in tumors. The study was carried out in chicken embryos, but the researchers hope to extend the studies to humans.

The paper "Ultrasound localization microscopy of renal tumor xenografts in chicken embryo is correlated to hypoxia" was published in Scientific Reports.

ULM uses microbubbles, which are the size of red blood cells, to image tissues.

"We track these bubbles as they flow through blood vessels to obtain a higher resolution image than traditional ultrasounds," said Matthew Lowerison, a postdoctoral research associate in the Song Research Group at the Beckman Institute for Advanced Science and Technology.
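Conceptually, ULM builds its super-resolved image by accumulating sub-pixel bubble localizations over many frames. A toy sketch of that accumulation step is below; the data, field of view, and grid sizes are invented for illustration, and real ULM pipelines also detect and track bubbles across frames before binning them.

```python
import numpy as np

def accumulate_localizations(positions, fov=(1.0, 1.0), grid=(200, 200)):
    """Bin sub-pixel microbubble localizations (x, y in mm) onto a fine
    grid; the resulting bubble-density map approximates the vessel
    structure at a resolution finer than the native ultrasound image."""
    density = np.zeros(grid)
    for x, y in positions:
        ix = min(int(x / fov[0] * grid[0]), grid[0] - 1)
        iy = min(int(y / fov[1] * grid[1]), grid[1] - 1)
        density[iy, ix] += 1
    return density

# Synthetic localizations scattered along a "vessel" at y = 0.5 mm
rng = np.random.default_rng(0)
xs = rng.uniform(0, 1, 500)
ys = 0.5 + rng.normal(0, 0.002, 500)  # a few microns of spread
density = accumulate_localizations(zip(xs, ys))
print(density.sum())  # all 500 localizations binned
```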

Researchers have long known that tumors can be resistant to therapy because of their lower oxygen levels. "Red blood cells can flow through straight blood vessels quickly and efficiently. As a result, the delivery of oxygen and nutrients is efficient," Lowerison said. "In contrast, the blood vessels in tumors are twisted around each other. It is chaotic and disorganized, and the delivery of oxygen is inefficient."

The members of the Song group have used ULM to demonstrate that oxygen levels are lower in tumors compared to healthy tissue.

"This study is unique because we can image tissue that is deeper inside humans without losing image resolution," said Pengfei Song, an assistant professor of electrical and computer engineering and a full-time faculty member at the Beckman Institute. "Although this technique requires us to inject these microbubbles, they do not have the toxicity problems of some other imaging agents. Additionally, the microbubbles are approved by the Food and Drug Administration and are widely used in clinics around the world."

Currently the main challenge posed by this technique is the acquisition time. "We need to have a large data set to process the images," Lowerison said. "Although as engineers we are focused on getting the best possible images we can, this technique might work for doctors who want a better vascular image than the conventional imaging methods."

"We are starting to see good results when we use artificial intelligence and machine learning with these technologies, which can help to make this process faster," Song said. "Ultimately, we want to be able to use this technique in a clinical setting for cancer detection, diagnosis, and therapy evaluation."

Credit: 
Beckman Institute for Advanced Science and Technology

Biomaterial discovery enables 3D printing of tissue-like vascular structures

image: Close-up of a tubular structure made by simultaneous printing and self-assembling between graphene oxide and a protein.

Image: 
Professor Alvaro Mata

An international team of scientists have discovered a new material that can be 3D printed to create tissue-like vascular structures.

In a new study published today in Nature Communications, led by Professor Alvaro Mata at the University of Nottingham and Queen Mary University of London, researchers have developed a way to 3D print graphene oxide with a protein which can organise into tubular structures that replicate some properties of vascular tissue.

Professor Mata said: "This work offers opportunities in biofabrication by enabling simultaneous top-down 3D bioprinting and bottom-up self-assembly of synthetic and biological components in an orderly manner from the nanoscale. Here, we are biofabricating micro-scale capillary-like fluidic structures that are compatible with cells, exhibit physiologically relevant properties, and have the capacity to withstand flow. This could enable the recreation of vasculature in the lab and have implications in the development of safer and more efficient drugs, meaning treatments could potentially reach patients much more quickly."

Material with remarkable properties

Self-assembly is the process by which multiple components can organise into larger well-defined structures. Biological systems rely on this process to controllably assemble molecular building-blocks into complex and functional materials exhibiting remarkable properties such as the capacity to grow, replicate, and perform robust functions.

The new biomaterial is made by the self-assembly of a protein with graphene oxide. The mechanism of assembly enables the flexible (disordered) regions of the protein to order and conform to the graphene oxide, generating a strong interaction between them. By controlling the way in which the two components are mixed, it is possible to guide their assembly at multiple size scales in the presence of cells and into complex robust structures.

The material can then be used as a 3D printing bioink to print structures with intricate geometries and resolutions down to 10 µm. The research team have demonstrated the capacity to build vascular-like structures in the presence of cells and exhibiting biologically relevant chemical and mechanical properties.

Dr. Yuanhao Wu, the lead researcher on the project, said: "There is great interest in developing materials and fabrication processes that emulate those of nature. However, the ability to build robust functional materials and devices through the self-assembly of molecular components has until now been limited. This research introduces a new method to integrate proteins with graphene oxide by self-assembly in a way that can be easily combined with additive manufacturing to fabricate biofluidic devices that allow us to replicate key parts of human tissues and organs in the lab."

Credit: 
University of Nottingham

Bereaved individuals may face higher risk of dying from melanoma

Individuals who experience the loss of a partner are less likely to be diagnosed with melanoma but face an increased risk of dying from the disease, according to research published in the British Journal of Dermatology.

The researchers, led by the London School of Hygiene & Tropical Medicine and Aarhus University Hospital, investigated whether bereaved individuals had a higher risk of being diagnosed with, or dying from, melanoma than the non-bereaved. They used data from two large population-based studies between 1997 and 2017 in the UK and Denmark.

They found that melanoma patients who experienced bereavement had a 17% higher risk of dying from their melanoma compared with those who were not bereaved, with similar results seen in both the UK and Denmark.

This study also showed that those who had lost a partner were 12% less likely to be diagnosed with melanoma compared with non-bereaved persons, with 620 and 1,667 bereaved individuals diagnosed in the UK and Denmark respectively over the 20-year period, compared with 6,430 and 16,166 non-bereaved.

While previous studies have suggested a link between various types of stress and progression of melanoma, which may have played a role in the finding, the researchers suggest that an alternative explanation could be that bereaved people no longer have a close person to help notice skin changes.

This delays detection of a possible melanoma, and therefore diagnosis, until the cancer has progressed to later stages, when it is generally more aggressive and harder to treat.

Each year, 197,000 people are diagnosed with melanoma globally. Melanoma makes up around 5% of all cancer cases in the UK and Denmark. The survival rate of melanoma patients is relatively high, depending on what stage the cancer is at detection. Early detection and treatment are crucial for improving survival.

Angel Wong, lead author and Research Fellow at the London School of Hygiene & Tropical Medicine, said:

"Many factors can influence melanoma survival. Our work suggests that melanoma may take longer to detect in bereaved people, potentially because partners play an important role in spotting early signs of skin cancer.

"Support for recently bereaved people, including showing how to properly check their skin, could be vital for early detection of skin cancer, and thus improved survival."

The researchers also encourage family members or caregivers to perform skin examinations for the remaining partner, and call for clinicians to lower their threshold for undertaking skin examinations in bereaved people.

They acknowledge the study's limitations, including the lack of information on some risk factors of melanoma, such as sun exposure or family history, but consider that this had limited impact on the conclusions drawn from this study.

Dr Walayat Hussain of the British Association of Dermatologists said:

"Detecting melanoma early can greatly improve survival and partners are key to this. Those without a partner should be vigilant in checking their skin, particularly in hard to reach locations such as the back, scalp, and ears.

"Skin cancer is a disease which is most common in older people, who are also most likely to be bereaved, so targeting skin checking advice at this group should be a priority."

Credit: 
London School of Hygiene & Tropical Medicine

Electrical stimulation helps treat constipation in clinical trial

Electrical stimulation benefited women with constipation in a recent clinical trial published in Alimentary Pharmacology & Therapeutics.

In the trial, 33 women with constipation that had not improved with standard treatment received either real or sham electrical stimulation on the stomach and back for 1 hour each day for 6 weeks. The women did not know whether they were receiving the real or the sham treatment. Treatment was successful in 53% of the women receiving real stimulation but only 12% of those receiving the sham. Furthermore, the improvement in symptoms lasted for at least 3 months after the treatment ended, and there were no reported side effects.

"This treatment is very promising and offers patients a well-tolerated alternative to laxative medications," said lead author Judith S. Moore, PhD, RN, of Monash University in Australia.

Credit: 
Wiley