Researchers gather interventions addressing 'word gap' into special edition of journal

LAWRENCE -- Some children in the U.S. grow up under severe disadvantage in terms of the amount and quality of language they are exposed to in their earliest years. Researchers have documented that some children are exposed to roughly 30 million fewer words than other children during years that are critical for learning language. Researchers call this the "word gap" and say it portends lifelong consequences.

"The word gap represents inequities in early experience with language that can place children at a disadvantage in terms of not only their early language and vocabulary development, but also literacy and school readiness," said Dale Walker, associate research professor and scientist at Juniper Gardens Children's Project and the Life Span Institute at the University of Kansas.

"When young children do not have sufficient opportunities to hear and practice language, they are less likely to develop vocabulary by age 3," Walker said. "That can lead not only to decreased language and social outcomes, but children are less likely to be ready to succeed in school. They begin school already at a disadvantage in terms of the vocabulary that they've been exposed to and can continue to fall behind in reading and in school achievement. And as noted by economists, that inequity can follow a child into adulthood in terms of educational opportunities and their earning potential."

Walker and Judith Carta, professor of special education at KU, recently guest-edited a special issue of the peer-reviewed journal Early Childhood Research Quarterly that brings together 18 empirical studies of language interventions addressing the word gap.

"Understanding research evidence is necessary to inform education and intervention efforts to improve the language-learning experiences for young children and prevent the word gap," Walker said.

The research in the special issue covers interventions conducted with parents, educators and health care providers. It also addresses the cultural and linguistic diversity of study participants, training and implementation practices, and the methodological factors that inform intervention research.

Walker, who recently earned KU's Steven F. Warren Research Achievement Award, said she hoped research gathered in the special edition could be used by academics, early childhood educators, health care professionals and policymakers.

"We are encouraged about the potential of efforts to address the word gap and hope that our special issue on this topic can be used as a resource that can help inform future research, practice and prevention to close the word gap," she said.

Credit: 
University of Kansas

Reducing problem behaviors for children with autism

image: Kyle Hamilton is a behavior analyst

Image: 
MU Thompson Center

COLUMBIA, Mo. - Self-inflicted injury, aggression toward others and yelling are common problem behaviors associated with young children diagnosed with autism spectrum disorder. These actions can result from the child being denied attention or access to items they enjoy, as well as from internal discomfort or environmental stressors such as noise or large crowds.

Now, a researcher at the University of Missouri has adjusted an existing treatment procedure aimed at reducing problem behaviors for children with autism spectrum disorder. Instead of traditional techniques, which require constant monitoring of the child, the new approach, which emphasizes momentary check-ins, provides more flexibility for parents and caregivers.

Kyle Hamilton, a behavior analyst at the Thompson Center for Autism and Neurodevelopmental Disorders, said that while existing intervention methods can be effective in controlled environments, they can be harder for busy parents, teachers and caregivers to implement in everyday situations.

Currently, experts advise parents to watch their children for long periods of time (up to several minutes) and give a reward only if the child's behavior is appropriate the entire time. However, a parent who is cooking dinner in the kitchen may not be able to simultaneously supervise children playing in a nearby room for long stretches. With Hamilton's new approach, parents check on their children only periodically, for a few seconds at a time. If the child is behaving appropriately at the moment of the check, a small reward can be given.
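To make the contrast concrete, here is a minimal simulation sketch of the two reward rules. All parameters (a five-minute window, one brief check per minute, the rate of problem behavior) are illustrative assumptions, not values from Hamilton's study.

```python
import random

def whole_interval_reward(behavior, interval_len):
    """Traditional rule: one reward, given only if no problem behavior
    occurred during the ENTIRE observation interval."""
    return not any(behavior[:interval_len])

def momentary_rewards(behavior, check_times):
    """Momentary rule: at each brief check, reward if no problem
    behavior is happening at that instant."""
    return [not behavior[t] for t in check_times]

# Simulate 5 minutes; behavior[t] is True when problem behavior occurs at second t.
random.seed(1)
behavior = [random.random() < 0.05 for _ in range(300)]

print(whole_interval_reward(behavior, 300))            # usually False: all-or-nothing
print(momentary_rewards(behavior, range(0, 300, 60)))  # several chances to reward
```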

"Rather than constantly monitoring the child, this new technique allows for periodic check-ins to see if the child is engaging in problem behaviors and reward them if we are seeing improvements," Hamilton said. "Through positive reinforcement, we can help reduce problem behaviors for kids with autism, which will allow them to be around their typically developing peers more often in society."

Given the broadness of the autism spectrum, these findings can lead to additional studies into which treatment options are most effective for reducing various problem behaviors. In addition to minimizing self-inflicted harm that can damage children's long-term health, reducing problem behaviors can help remove the social stigma that many kids with autism face.

"By reducing problem behaviors, we can help these kids spend more time in natural environments, whether that is at the grocery store, pool, restaurants or school," Hamilton said. "We want them to have every opportunity to live the most normal life possible and provide them more exposure to the natural world."

Credit: 
University of Missouri-Columbia

Newfound cell defense system features toxin-isolating 'sponges'

image: Electron microscopy images show 'exosomes' (in yellow) soaking up toxins (in purple) released by bacteria (in blue), which are trying to kill a human lung cell (in green).

Image: 
Courtesy of Nature

A "decoy" mechanism has been found in human and animal cells to protect them from potentially dangerous toxins released by foreign invaders, such as bacteria.

Scientists at NYU Grossman School of Medicine have found that cells exposed to bacteria release tiny, protein-coated packages called exosomes, which act like decoys to bind to bacterial toxins, including those produced by MRSA (methicillin-resistant Staphylococcus aureus), a bacterium that has become resistant to many antibiotics, or Corynebacterium diphtheriae (the bacterium responsible for highly contagious diphtheria).

Researchers say this "soaking up" of toxins neutralizes their action and helps keep cells safe. If left to move around freely, the toxins would normally bind to the cells' outer membranes, creating holes in those membranes and killing the cells.

Publishing in the journal Nature online March 4, the new study showed that bacteria-exposed cells died when left on their own but survived when toxin-absorbing exosomes were present.

Researchers say their latest findings show that this cell defense system is common among mammals, including humans, and may help explain why as many as one-fifth of Americans carry community-associated MRSA bacteria on their bodies yet very few, no more than 1 in 10,000, die from infection by it.

"Exosomes act much like a sponge, preventing the toxins for a time from attacking the cell, while toxins that are not corralled are left to burrow through cell membranes," says study co-senior investigator Ken Cadwell, PhD. "This defense mechanism also buys some time for other widely recognized immune defenses, such as bacteria-attacking T cells, or antibodies, to kick in and fight the infection directly," adds Cadwell, an associate professor at NYU Langone Health and its Skirball Institute for Biomolecular Medicine.

Cadwell says many disease-carrying pathogens, such as bacteria and viruses, initially target cells' outer membranes, so the NYU team plans to investigate whether similar, generic "sponge-like" exosomes exist and take defensive action in other infections.

According to study co-senior investigator Victor J. Torres, PhD, the C.V. Starr Associate Professor of Microbiology at NYU Langone, the team's results not only add knowledge of mammalian defenses against infection, but also suggest new strategies for strengthening the immune system, either by injecting artificial, exosome-like vesicles into the body to soak up toxins or by boosting exosome production to ramp up the body's defenses.

The NYU researchers based their latest experiments on their previous work showing how bacterial toxins bind to cells during an infection. One earlier finding was that a specific protein called ATG16L1 was always present in cells that lived longer or survived infection, whereas cells that lacked ATG16L1 all died from infection. ATG16L1, they say, is a known autophagy protein, a key component of the machinery that envelops cellular waste so it can be broken down and disposed of. The researchers say the action of exosomes outside of cells "mirrors" this autophagy/ATG16L1 waste removal pathway observed inside cells.

For the new study, researchers injected MRSA into healthy mice. Fortifying the animals with injections of exosomes extracted from mice previously infected with the same bacterium doubled both how long and how many of the mice survived; normal mice injected with MRSA all died.

In other experiments in mice and human cells, when exosome production was chemically and/or genetically blocked, the cells all died, demonstrating to researchers the critical role played by these exosomes in cell survival.

Credit: 
NYU Langone Health / NYU Grossman School of Medicine

Researchers catalog dozens of mutations in crucial brain development gene

image: Genetic samples from developmentally disabled children revealed dozens of new mutations in the DDX3X gene that lead to smaller brains and intellectual disability because of the gene's essential role in neuron genesis and transport.

Image: 
Silver Lab, Duke University

DURHAM, N.C. -- An international team of researchers that pooled genetic samples from developmentally disabled patients from around the world has identified dozens of new mutations in a single gene that appears to be critical for brain development.

"This is important because there are a handful of genes that are recognized as 'hot spots' for mutations causing neurodevelopmental disorders," said lead author Debra Silver, an associate professor of molecular genetics and microbiology in the Duke School of Medicine. "This gene, DDX3X, is going to be added to that list now."

An analysis led by the Elliott Sherr lab at the University of California-San Francisco found that half of the DDX3X mutations in the 107 children studied caused a loss of function that made the gene stop working altogether, while the other half caused changes predicted to disrupt, but not abolish, the gene's function.

The DDX3X gene is carried on the X chromosome, which occurs twice in females and only once in males. Only three of the children in the study were male, indicating that an aberrant copy of the gene is probably most often lethal for males, who have only a single copy of the X.

In humans, this syndrome often results in smaller brains and intellectual disability. Understanding how and why DDX3X mutations lead to developmental issues provides insight into how the gene functions normally.

With the finding that DDX3X was a common element in the developmental disabilities of these children, Silver's team "used a set of experimental tricks to see how it would lead to disease." In mice, her team manipulated levels of the gene to see how development of the cerebral cortex would be altered.

Changes in the gene led to fewer neurons being produced in a dosage-dependent manner, Silver said.

In the most severe cases, Sherr's team showed that functional changes in DDX3X resulted in a smaller or even completely missing corpus callosum, the broad communications structure between the two halves of the brain. In some cases, identical genetic spelling errors that occurred in several children also led to polymicrogyria, an abnormal folding pattern on the surface of the brain.

"Not every mutation acts the same," Silver said.

The collaborative team also tested how 'missense' mutations, in which the protein is made but somehow defective, would impair brain development. The most severe missense mutations affected how protein was made, leading to the formation of 'clumps' of RNA-protein aggregates in neural stem cells, similar to the protein clumps found in Alzheimer's disease, Silver said.

Together, these issues point to a role for DDX3X in the genesis of developing neurons as the brain grows. "The way neurons are made and organized is disrupted," Silver said. "We know that this gene is required for early brain development, and its disruption can cause a whole host of developmental problems."

Almost all of the mutations seen in the study children were 'de novo,' meaning they happened during the child's early development, rather than being inherited from a parent.

Parents of the children with these mutations have established the DDX3X Foundation to pursue better understanding of what causes the disease, identify therapies, and provide a supportive community for families.

Credit: 
Duke University

Diabetes remission rates after 2 common weight-loss surgeries

What The Study Did: Researchers examined the associations of two of the most common weight-loss surgeries with type 2 diabetes outcomes by comparing diabetes remission and relapse rates, glycemic control and weight loss after five years among 9,700 adults with type 2 diabetes who had Roux-en-Y gastric bypass or sleeve gastrectomy.

Authors: Kathleen McTigue, M.D., of the University of Pittsburgh, is the corresponding author.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/ 

(doi:10.1001/jamasurg.2020.0087)

Editor's Note: The article includes conflict of interest and funding/support disclosures. Please see the articles for additional information, including other authors, author contributions and affiliations, conflicts of interest and financial disclosures, and funding and support.

Credit: 
JAMA Network

Bilingualism acts as a cognitive reserve factor against dementia

A study carried out by Víctor Costumero, as first author, Marco Calabria and Albert Costa (who died in 2018), members of the Speech Production and Bilingualism (SPB) group at the Center for Brain and Cognition (CBC) of the Department of Information and Communication Technologies (DTIC) at UPF, together with researchers from the Universities of Jaume I, Valencia, Barcelona and Jaén, IDIBELL, Hospital La Fe (Valencia) and Grupo Médico ERESA (Valencia), concludes that bilingualism acts as a cognitive reserve factor against dementia. Lidón Marín, one of the authors of the article, states that "although sick bilinguals show greater brain atrophy, the cognitive level among bilinguals and monolinguals is the same." The work, financed by the La Marató de TV3 Foundation, has been published in the scientific journal Alzheimer's Research & Therapy under the title "A cross-sectional and longitudinal study on the protective effect of bilingualism against dementia using brain atrophy and cognitive measures".

The research analysed about a hundred bilingual and monolingual patients with mild cognitive impairment, with an average age of 73 years. People who alternate between Catalan and Spanish, regardless of register, were considered bilingual. People who know, understand and can occasionally use Catalan but do not alternate between the two languages were considered monolingual (or passive bilingual). César Ávila, director of the research group, explains that "the alternating use of these two languages (Catalan and Spanish) in any situation is cognitively demanding because there are many similarities between them".

At the beginning of the study, the two groups of patients showed the same level of cognitive impairment (language, memory, etc.). However, the bilinguals showed greater brain atrophy than the monolinguals, implying that bilinguals need a greater load of brain injury to show the same symptoms. The researchers followed the patients' evolution for seven months and observed that the bilingual group lost less brain volume and better maintained its cognitive abilities. The researchers consider that this demonstrates the cognitive reserve conferred by bilingualism. These results are especially relevant because "this would be the first longitudinal evidence of this possible protective effect of bilingualism against dementia", indicates Ávila.

The study was carried out with patients from the General University Hospital of València and the La Fe University and Polytechnic Hospital, with similar socio-demographic characteristics and educational levels. Previous data already indicated that bilingual people (in any pair of languages) develop dementia about five years later than monolingual people. One of the contributions of this study, in addition to comparing two different moments in time, has been to reveal the mechanism behind this delay: the cognitive stimulation favoured by alternating between one language and the other. Although it is too early to apply the results in treatments for dementia, "we do know that there are cognitive stimulation therapies that include practical exercises in the use of different languages", explains the researcher Víctor Costumero.

In addition to the research group from Castelló, the study also involved the Center for Brain and Cognition of Pompeu Fabra University in Barcelona; the ERI de Lectura of the University of València; the ERESA Medical Group of València; the Department of Neurology of the General Hospital of València; the Neurology Unit of the La Fe University and Polytechnic Hospital; the Cognitive Processes Section of the Department of Cognition, Development and Educational Psychology of the University of Barcelona; the Cognition and Brain Plasticity Group of the Bellvitge Biomedical Research Institute (IDIBELL) in L'Hospitalet de Llobregat; and the Department of Computer Science of the University of Jaén.

Credit: 
Universitat Pompeu Fabra - Barcelona

All-optical control of exciton flow in a colloidal quantum well complex

image: (a) The normalized contour map of emission spectra when the nanomaterial mixture is coated in a capillary tube. White dashed lines indicate the thresholds of red lasing (acceptor) and green lasing (donor). Top insets: photographs corresponding to spontaneous emission, acceptor lasing and dual lasing, respectively. (b) Integrated lasing intensity as a function of the pump fluence for the donors (green dots/line) and the acceptors (red dots/line). The three emission regimes (i.e., spontaneous emission, acceptor lasing and dual lasing) are shaded in grey, light red and light green, respectively. (c) The normalized integrated intensity of the donors' spontaneous emission. In the acceptor lasing regime, excitons are transferred to the acceptors more efficiently, so the donors' spontaneous emission increases sub-linearly with excitation power; it then increases super-linearly on entering the dual lasing regime. (d) The calculated exciton outflow efficiency in the donor. Three distinct efficiencies (50%, 90% and 2%) are achieved and controlled by the excitation fluence, corresponding to the spontaneous emission, acceptor lasing and dual lasing regimes. (e) Illustration of controlling exciton flow by stimulated emission. The fundamental mechanism is to control the density of excited donors (N1D) and unexcited (ground-state) acceptors (N0A) by exploiting the very high exciton recombination rate of stimulated emission.

Image: 
Junhong Yu, Manoj Sharma, Ashma Sharma, Savas Delikanli, Hilmi Volkan Demir, Cuong Dang

Exciton-based solid-state devices have the potential to be essential building blocks for modern information technology to slow down the end of Moore's law. Exploiting excitonic devices requires the ability to control the excitonic properties (e.g., exciton flow, exciton recombination rates or exciton energy) in an active medium. However, until now, the demonstrated techniques for excitonic control have either been inherently complex or sacrificed the operation speed, which is self-defeating and impractical for actual implementation. Hence, a scheme with an emphasis on all-optical control, bottom-up fabrication and self-assembly is highly desired for real-world applications.

In a new paper published in Light: Science & Applications, scientists from the School of Electrical & Electronic Engineering, Nanyang Technological University, Singapore, developed a convenient way to control exciton flow between different colloidal quantum wells (CQWs) at room temperature, all through optical signals. Through the combination of stimulated emission and Förster resonance energy transfer (FRET), the flow of excitons between donor cadmium selenide (CdSe) core-only CQWs and acceptor CdS/CdSe/CdS core-shell CQWs can be strongly manipulated. Using this method, continuous transition among three distinct exciton flow regimes with efficiencies of ~50%, ~90% and ~2% has been demonstrated. The reported method and technique, which demonstrate a lab prototype of an all-optically controllable exciton flow device with multiple modulation stages, may inspire the design of all-optical excitonic circuits operating at room temperature.

The core idea of the method is based on the competition among the stimulated emission rate, the spontaneous emission rate and the FRET rate, together with the threshold behavior of stimulated emission. The scientists summarize the exciton flow control process in their work:

"At low pump fluence when the emission of both donors and acceptors is spontaneous, nearly 50% of the exciton population in the donors outflows into the acceptors via FRET. By increasing the pumping level to achieve stimulated emission in the acceptors, we can greatly enhance the exciton flow efficiency up to 90% since quick depletion of excitons in the acceptors significantly promotes the FRET process. Upon further increasing the fluence to initiate stimulated emission in the donors, the exciton flow towards the acceptors almost switches off because the stimulated emission rate in donors is much faster than the FRET rate."

"To get deeper insight into this process, we have developed a FRET-coupled kinetic model to identify the competing processes responsible for the manipulation of exciton flow at different level of optical excitation. The simulation results can qualitatively reproduce the exciton flow trend from the donors to the acceptors demonstrated in our experiments." Junhong Yu, the first author of the research, added.

"This active excitonic control in an all-optical device (i.e., a whispering gallery mode laser configuration) not only offers a platform to gain deeper insight of the FRET physics but also is highly preferable for excitonic-based information processing with potentials of all-optical-control excitonic circuits." Dr. Cuong Dang, the senior author of the research said.

"The authors discuss a very timely scientific challenge, which is to move towards the excitonic devices. Controlling the exciton flow in the optically active media is the essential requirement for the development of solid-state device, and thus, has been the center of attention. The use of population overlap modulated by the lasing action in the donor-acceptor pairs will be an interesting addition to the extension excitonic studies on optically active materials. This study has merits and the advance is technological, offering an all-optical route to manipulate exciton flow in colloidal quantum well structures." Dr. Lei, one of the reviewer of LSA said.

Credit: 
Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS

Neanderthal migration

At least two different groups of Neanderthals lived in Southern Siberia, and an international team of researchers including scientists from Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) has now proven that one of these groups migrated from Eastern Europe. The researchers have published their findings in the journal Proceedings of the National Academy of Sciences of the United States of America (PNAS).

Neanderthals were widespread in Europe and also migrated to Southern Siberia, but the origins of these Siberian Neanderthals and when they migrated were previously unknown.

An international team of researchers including archaeologist Thorsten Uthmeier, professor of Prehistory and Protohistory at FAU, has now examined tools found in the Chagyrskaya cave in the Altai mountains in Russia in order to find the answer.

Parallels to sites in Central and Eastern Europe

The site has been excavated since 2019 as part of a DFG research project in conjunction with the Siberian Branch of the Russian Academy of the Sciences in Novosibirsk. In addition to stone tools and bones from hunting remains, two main find layers yielded numerous Neanderthal fossils. After discovering that the stone tools did not resemble any of the tools from groups living in the Altai during the same period, the team searched for comparable finds in a larger radius.

Geometric morphological analyses of 3D models of scanned tools showed that the stone tools found in the Chagyrskaya cave were very similar to artefacts from the Micoquien, which is the name given to the corresponding stone tool industry in Central and Eastern Europe. The comparative scans originate among others from find sites in Bavaria including FAU's own Sesselfelsgrotte cave, in which most of the artefacts used in the comparison were found.

The researchers were able to reconstruct the route of migration of the Siberian Neanderthals using DNA analyses of Neanderthal bones and sediments from the Chagyrskaya cave. The route led the groups during the course of several generations via Croatia and the North Caucasus to the Altai.

Several groups of Neanderthals migrated to Siberia

The DNA analyses also showed that the Neanderthals from the Chagyrskaya cave differ significantly in terms of their DNA from a second Altai group found in the Denisova cave. This discovery fits with the observation that the Denisova Neanderthals were apparently not familiar with tools from the Micoquien. The research team therefore presumes that several groups of Neanderthals migrated to Siberia.

The interdisciplinary examinations of the Neanderthals found in the Chagyrskaya cave, in which Bavarian find sites investigated by FAU play an important role, clearly show that the wave of migration of groups of this species of human 60,000 years ago originated in Central and Eastern Europe.

At the same time, the researchers from Novosibirsk led by Professor Ksenia Kolobova and from FAU found rare evidence that artefacts can serve as culturally informative indicators of population movements.

Credit: 
Friedrich-Alexander-Universität Erlangen-Nürnberg

Honeybee dance dialects

image: Dwarf honeybee, giant honeybee and eastern honeybee (from left): researchers have studied the dance dialects of these three bee species.

Image: 
(Photos: Patrick Kohl / Fabienne Maihoff)

After more than 70 years, a great mystery of zoology has been solved: honeybees really do use different dance dialects in their waggle dance. Which dialect a species has developed over the course of evolution is related to the radius of action within which it collects food around the hive.

This is reported by research teams from the Biocenter of Julius-Maximilians-Universität Würzburg (JMU) in Bavaria, Germany, and the National Centre for Biological Sciences (NCBS) in Bangalore, India, in the journal Proceedings of the Royal Society B.

That honeybees might have dance dialects was first proposed in the 1940s by Nobel laureate Karl von Frisch and his student Martin Lindauer. Later experiments, however, raised doubts about the existence of the dialects. The new results now prove that von Frisch and Lindauer were right. The two pioneers of behavioural research were also right about why the dance dialects exist at all.

This is what the bees' dances are about

The dance language of the honeybees is a unique form of symbolic communication in the animal kingdom. For example, when a bee has discovered a blossoming cherry tree, it returns to the hive. There it informs the other bees with a dance about the direction in which the food source is located and how far away it is.

Part of the dance is the so-called waggle run, in which the bees energetically shake their abdomen. The direction of the waggle run on the honeycomb communicates the direction of the destination in relation to the position of the sun while the duration of the wagging indicates the distance.

"As the distance of the food source from the nest increases, the duration of the wagging increases in a linear fashion," explains JMU PhD student Patrick Kohl, first author of the publication. However, this increase is different for different bee species. This was shown in experiments carried out by the research team in southern India.

Experiments with three honeybee species in South India

There, three bee species with different radii of action were studied. The eastern honeybees (Apis cerana) fly up to about one kilometre away from the nest. The dwarf honeybees (Apis florea) fly up to 2.5 kilometres, the giant honeybees (Apis dorsata) about three kilometres.

For the increase in waggle duration, the relationship is reversed: the shorter a species' foraging range, the steeper the increase. For example, for a food source 800 meters away, an eastern honeybee performs a much longer waggle run than a dwarf honeybee, and the latter a longer one than a giant honeybee. To communicate an identical distance to food, each species thus uses its own dance dialect.
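As a toy illustration of what a "dialect" means here, each species can be modeled by its own slope in the linear duration-versus-distance law. The slopes below are invented for illustration; the paper reports the measured calibrations.

```python
# Waggle duration grows linearly with distance; the slope is the dialect.
# Slopes (seconds of waggling per km) are illustrative, not measured values.
DIALECT_SLOPE = {
    "Apis cerana (eastern, ~1 km range)": 3.0,   # short range -> steep slope
    "Apis florea (dwarf, ~2.5 km range)": 1.5,
    "Apis dorsata (giant, ~3 km range)": 1.0,    # long range -> shallow slope
}

def waggle_duration(species, distance_km):
    """Waggle-run duration a dancer uses to advertise a food source."""
    return DIALECT_SLOPE[species] * distance_km

for species in DIALECT_SLOPE:
    print(f"{species}: {waggle_duration(species, 0.8):.1f} s for food 800 m away")
```

With these toy slopes the same 800 m source is advertised with 2.4 s, 1.2 s and 0.8 s of waggling, reproducing the ordering described above.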

"We also saw this when we compared our results with published data from other research groups," says Patrick Kohl. The correlation between foraging range and dance dialect was corroborated when looking at honeybee species native to England, Botswana, and Japan.

Why did JMU researchers go to South India in the first place? "India has the advantage that three honeybee species live in the same area, so that their dance dialects can be easily compared," said Kohl. "We also have very good contacts with researchers at NCBS, a top research address in South Asia."

Dialects as evolutionary adaptations

The results also confirm what von Frisch and Lindauer had suspected about the meaning of the dance dialects. These are evolutionary adaptations to the honeybee species' typical foraging distances. Honeybees, for example, which regularly fly long distances, cannot afford to communicate these distances in the hive with very long waggle runs: On the crowded dance floor in the hive, other bees would have difficulties following such "marathon waggings".

The scientists' conclusion: The dance dialects of the bees are an excellent example of how complex behaviours can be tuned as an evolutionary adaptation to the environment.

Credit: 
University of Würzburg

First bufferless 1.5 μm III-V lasers grown directly on silicon wafers in Si-photonics

image: Schematic of III-V laser array directly grown on Si-photonics 220 nm SOI platform

Image: 
HKUST

Researchers from the Hong Kong University of Science and Technology (HKUST) have reported the world's first 1.5 μm III-V lasers directly grown on the industry-standard 220 nm SOI (silicon-on-insulator) wafers without buffer, potentially opening a path to the "holy grail" of present silicon (Si) photonics research.

Seamlessly bridging the active III-V light sources with the passive Si-based photonic devices, the demonstration could be deployed as light sources in integrated circuits to greatly improve circuit speed, power efficiency and cost-effectiveness.

In conventional approaches to integrating III-V lasers on Si reported in the literature, thick III-V buffers up to a few micrometers are used to reduce the defect densities, which poses huge challenges for efficient light interfacing between the epitaxial III-V lasers and the Si-based waveguides.

The research team led by Prof. LAU Kei-May of HKUST's Department of Electronic and Computer Engineering and Post-doctoral Fellow Dr. HAN Yu devised a novel growth scheme that, for the first time, eliminates the requirement for thick III-V buffers and thus promotes efficient light coupling into the Si-waveguides. The bufferless feature points toward fully integrated Si-based photonic integrated circuits.

That has enabled the first demonstration of 1.5 μm III-V lasers directly grown on the industry-standard 220 nm SOI wafers using metal organic chemical vapor deposition (MOCVD). Previous demonstrations required non-industry-standard bulk Si or thick SOI wafers.

The research findings were recently published online in Optica in February 2020.

The world's growing appetite for Internet services and the digitization of our lives leads to a vast amount of digital data being generated, processed, stored, and transmitted.

Silicon is the most widely used material in the manufacturing of semiconductors, which are embedded into nearly every piece of communications technology that we rely on every day, from computers and smartphones to datacenters and satellite communications.

But improvements in efficiency of conventional electronic data systems cannot catch up with the soaring data traffic, which calls for the integration of photonic functionalities onto the conventional Si-based electronic platform. The integration could produce optoelectronic integrated circuits with unparalleled speed and functionalities, and enable new applications.

Yet fundamental differences between Si and III-V materials mean it is extremely challenging to directly grow III-V functionalities on the Si-platform.

Prof. Lau's group at HKUST's Photonics Technology Center has endeavored to integrate III-V materials and functionalities on mainstream silicon wafers for over a decade, innovating and optimizing various approaches to improve the performance of III-V lasers grown on Si, with the goal of progressively approaching the requirements of the industry. This work is part of their project on monolithic integration of III-V lasers on silicon.

Their method saw them first devising a unique growth scheme to directly grow high-quality III-V materials on the industry-standard 220 nm SOI platform. They then characterized the epitaxial III-V materials through extensive transmission electron microscopy and photoluminescence measurements, confirming their excellent crystalline quality. Finally, the team designed and fabricated air-cladded laser cavities based on numerical simulations and tested the devices, which showed that the lasers could sustain room-temperature, low-threshold lasing in the technologically important 1.5 μm band under optical excitation.

The demonstration points to the possibility of monolithically integrating III-V lasers on the industry-standard 220 nm SOI wafers in an economical, compact and scalable way.

Prof. Lau said: "If practically applied, our technology could enable a significant improvement of the speed, power consumption, cost-effectiveness, and functionality of current Si-based integrated circuits. Our daily electronic devices, such as smartphones, laptops and TVs - basically everything connected to the internet - will be much faster, cheaper, using much less power and multi-functional."

Dr. Han added: "The next step of our research will be to design and demonstrate the first electrically-driven 1.5 μm III-V lasers directly grown on the 220 nm SOI platforms, and devise a scheme to efficiently couple light from the III-V lasers into Si-waveguides and thereby conceptually demonstrate fully-integrated Si-photonics circuits."

Credit: 
Hong Kong University of Science and Technology

A small step for atoms, a giant leap for microelectronics

image: Researchers in Taiwan, China and at Rice University made wafer-sized, two-dimensional sheets of hexagonal boron nitride, as reported in Nature. The material may be removed from its copper substrate and used as a dielectric for two-dimensional electronics.

Image: 
TSMC/Rice University

HOUSTON - (March 4, 2020) - Step by step, scientists are figuring out new ways to extend Moore's Law. The latest reveals a path toward integrated circuits with two-dimensional transistors.

A Rice University scientist and his collaborators in Taiwan and China reported in Nature today that they have successfully grown atom-thick sheets of hexagonal boron nitride (hBN) as two-inch diameter crystals across a wafer.

Surprisingly, they achieved the long-sought goal of making perfectly ordered crystals of hBN, a wide band gap semiconductor, by taking advantage of disorder among the meandering steps on a copper substrate. The random steps keep the hBN in line.

Set into chips as a dielectric between layers of nanoscale transistors, wafer-scale hBN would excel in damping electron scattering and trapping that limit the efficiency of an integrated circuit. But until now, nobody has been able to make perfectly ordered hBN crystals that are large enough -- in this case, on a wafer -- to be useful.

Brown School of Engineering materials theorist Boris Yakobson is co-lead scientist on the study with Lain-Jong (Lance) Li of the Taiwan Semiconductor Manufacturing Co. (TSMC) and his team. Yakobson and Chih-Piao Chuu of TSMC performed theoretical analysis and first principles calculations to unravel the mechanisms of what their co-authors saw in experiments.

As a proof of concept for manufacturing, experimentalists at TSMC and Taiwan's National Chiao Tung University grew a two-inch, 2D hBN film, transferred it to silicon and then placed a layer of field-effect transistors patterned onto 2D molybdenum disulfide atop the hBN.

"The main discovery in this work is that a monocrystal across a wafer can be achieved, and then they can move it," Yakobson said. "Then they can make devices."

"There is no existing method that can produce hBN monolayer dielectrics with extremely high reproducibility on a wafer, which is necessary for the electronics industry," Li added. "This paper reveals the scientific reasons why we can achieve this."

Yakobson hopes the technique may also apply broadly to other 2D materials, with some trial and error. "I think the underlying physics is pretty general," he said. "Boron nitride is a big-deal material for dielectrics, but many desirable 2D materials, like the 50 or so transition metal dichalcogenides, have the same issues with growth and transfer, and may benefit from what we discovered."

In 1975, Intel's Gordon Moore predicted that the number of transistors in an integrated circuit would double every two years. But as integrated circuit architectures get smaller, with circuit lines down to a few nanometers, the pace of progress has been hard to maintain.

The ability to stack 2D layers, each with millions of transistors, may overcome such limitations if the layers can be isolated from one another. Insulating hBN is a prime candidate for that purpose because of its wide band gap.

Despite having "hexagonal" in its name, monolayers of hBN as seen from above appear as a superposition of two distinct triangular lattices of boron and nitrogen atoms. For the material to perform up to spec, hBN crystals must be perfect; that is, the triangles have to be connected and all point in the same direction. Non-perfect crystals have grain boundaries that degrade the material's electronic properties.

For hBN to become perfect, its atoms have to precisely align with those on the substrate below. The researchers found that copper in a (111) arrangement -- the number refers to how the crystal surface is oriented -- does the job, but only after the copper is annealed at high temperature on a sapphire substrate and in the presence of hydrogen.

Annealing eliminates grain boundaries in the copper, leaving a single crystal. Such a perfect surface would, however, be "way too smooth" to enforce the hBN orientation, Yakobson said.

Last year, Yakobson reported research on growing pristine borophene on silver (111), along with a theoretical prediction that copper can align hBN by virtue of the complementary steps on its surface. The copper surface was vicinal -- that is, slightly miscut to expose atomic steps between the expansive terraces. That paper caught the attention of industrial researchers in Taiwan, who approached the professor after a talk there last year.

"They said, 'We read your paper,'" Yakobson recalled. "'We see something strange in our experiments. Can we talk?' That's how it started."

Informed by his earlier experience, Yakobson suggested that thermal fluctuations allow copper (111) to retain step-like terraces across its surface, even when its own grain boundaries are eliminated. The atoms in these meandering "steps" present just the right interfacial energies to bind and constrain hBN, which then grows in one direction while it attaches to the copper plane via the very weak van der Waals force.

"Every surface has steps, but in the prior work, the steps were on a hard-engineered vicinal surface, which means they all go down, or all up," he said. "But on copper (111), the steps are up and down, by just an atom or two randomly, offered by the fundamental thermodynamics."

Because of the copper's orientation, each horizontal atomic plane is offset by a fraction of a lattice spacing from the plane underneath. "The surface step-edges look the same, but they're not exact mirror-twins," Yakobson explained. "There's a larger overlap with the layer below on one side than on the opposite."

That makes the binding energies on each side of the copper plateau different by a minute 0.23 electron volts (per every quarter-nanometer of contact), which is enough to force docking hBN nuclei to grow in the same direction, he said.
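A back-of-envelope Boltzmann estimate shows why such a small energy difference suffices. The growth temperature below is an assumed, typical CVD value, not one quoted in the paper.

```python
import math

K_B = 8.617e-5   # Boltzmann constant, eV/K
T = 1300.0       # assumed CVD growth temperature in kelvin (illustrative)
DELTA_E = 0.23   # binding-energy difference per quarter-nanometer of contact, eV

# Relative probability of the favorable vs. unfavorable docking orientation
# for each 0.25 nm of step-edge contact:
ratio = math.exp(DELTA_E / (K_B * T))
print(f"~{ratio:.0f}x preference per 0.25 nm of contact")

# A nucleus touching, say, 2 nm of step edge compounds this factor 8 times,
# making wrongly oriented nuclei exponentially unlikely:
print(f"~{ratio ** 8:.0f}x preference for 2 nm of contact")
```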

The experimental team found the optimal copper thickness was 500 nanometers, enough to prevent its evaporation during hBN growth via chemical vapor deposition of ammonia borane on a copper (111)/sapphire substrate.

Credit: 
Rice University

Scientists discover new repair mechanism for alcohol-induced DNA damage

image: Artist impression of an alcohol-induced interstrand crosslink (ICL). The ICL is the yellow connection between both DNA strands, making them stick together.

Image: 
Image copyright: MRC Laboratory of Molecular Biology or MRC LMB

Researchers of the Hubrecht Institute (KNAW) in Utrecht, The Netherlands, and the MRC Laboratory of Molecular Biology in Cambridge, United Kingdom, have discovered a new way in which the human body repairs DNA damage caused by a degradation product of alcohol. That knowledge underlines the link between alcohol consumption and cancer. The research groups of Puck Knipscheer and Ketan J. Patel worked together on this study and published the results in the scientific journal Nature on the 4th of March.

Our DNA is a daily target for a barrage of damage caused by radiation or toxic substances such as alcohol. When alcohol is metabolized, acetaldehyde is formed. Acetaldehyde causes a dangerous kind of DNA damage - the interstrand crosslink (ICL) - that sticks together the two strands of the DNA. As a result, it obstructs cell division and protein production. Ultimately, an accumulation of ICL damage may lead to cell death and cancer.

Defense against DNA damage

Thankfully, every cell in our body possesses a toolkit with which it can repair this type of DNA damage. The first line of defense against ICLs caused by acetaldehyde is the enzyme ALDH2, which largely breaks down acetaldehyde before it causes any harm. However, not everyone benefits from this enzyme: about half of the Asian population, more than 2 billion people worldwide, carries a mutation in the gene coding for it. Because they are unable to break down acetaldehyde, they are more prone to developing alcohol-related cancer.

New line of defense

Scientists from the groups of Puck Knipscheer (Hubrecht Institute) and Ketan J. Patel (MRC Laboratory of Molecular Biology) studied the second line of defense against alcohol-induced ICLs: mechanisms that remove the damage from the DNA. The investigators studied these mechanisms using protein extracts made from the eggs of the clawed frog (Xenopus laevis), an animal model commonly used in biology research. By using these extracts to repair an ICL formed by acetaldehyde, they discovered the existence of two mechanisms that repair ICL damage: the previously known Fanconi anemia (FA) pathway and a novel, faster route. These two mechanisms differ from each other: in the FA pathway the DNA is cut to remove the ICL, whereas the enzymes in the newly discovered route cut the crosslink itself.

Specific damage

With this research, the scientists provide a mechanistic sneak peek into the process of DNA damage repair. 'We now know that there are multiple ways in which the body can repair ICLs in the DNA', says co-lead author Puck Knipscheer. She thinks that this type of research may lead to a better understanding of treatment for alcohol-related types of cancer. 'But before we can do that, we first have to know exactly how this novel mechanism for ICL repair works.'

Credit: 
Hubrecht Institute

Destruction of an Atlantic rain forest fragment raises the local temperature

image: If 25% of a one-hectare forest remnant is cut down, the impact on the local climate will be a temperature increase of 1 °C

Image: 
Carlos Joly

A study conducted in Brazil by researchers at the University of São Paulo (USP) and the University of Campinas (UNICAMP) shows that if 25% of an Atlantic rainforest fragment that is approximately 1 hectare is deforested, then the local temperature will increase by 1 °C. Clear-cutting the entire fragment would increase the local temperature by as much as 4 °C. The findings are published in the journal PLOS ONE.

"We were able to detect the warming effects of the climate due to the deforestation of Atlantic rainforest fragments, of which there are many in Southeast Brazil," Humberto Ribeiro da Rocha, principal investigator of the study, told. Rocha is a professor at the University of São Paulo Institute of Astronomy, Geophysics and Atmospheric Sciences (IAG-USP).

The investigation was conducted under the aegis of two projects supported by the São Paulo Research Foundation (FAPESP), one associated with its Research Program on Global Climate Change (RPGCC) and the other with its Research Program on Biodiversity Characterization, Conservation, Restoration and Sustainable Use (BIOTA-FAPESP).

According to Rocha, scientific evidence is already available that shows that the destruction of tropical forests leads to warmer air at a local scale, but this evidence is based on measurements taken in large areas, mainly by research conducted in the Amazon.

"No one had ever produced detailed information on the deforestation of small fragments or studies that take into account different levels of anthropization [transformation of the environment by human activity]," said Rocha, who is a member of the RPGCC's steering committee.

To fill this research gap, researchers analyzed the relationship between the degree of deforestation and local temperature increases in Atlantic rainforest remnants located in Serra do Mar, a mountain range that stretches along the northern coast of São Paulo state.

Land surface temperature (LST) was estimated using heat flux data continuously recorded around the globe by infrared optical sensors such as those on board NASA's Landsat Earth observation satellites.

Based on these data, the researchers calculated an annual average LST for tens of thousands of Atlantic rainforest samples, each with an area of approximately 1 hectare. Forest cover in these areas ranged from fully forested to fully deforested, in a gradient of 1% steps reflecting different degrees of anthropization.

The calculations were performed during the PhD research of Raianny Leite do Nascimento Wanderley, under Rocha's supervision. They showed higher temperatures in less forested areas: each 25% loss of native vegetation resulted in an LST increase of 1 °C, so total deforestation was correlated with a warming of 4 °C.
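Restated as a back-of-envelope linear fit (a simplification for illustration, not the paper's actual regression), the reported numbers imply

$$\Delta\mathrm{LST} \approx 4\,^{\circ}\mathrm{C} \times f, \qquad 0 \le f \le 1,$$

where $f$ is the deforested fraction of a 1-hectare sample: $f = 0.25$ gives a 1 °C increase and $f = 1$ gives 4 °C.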

"This detected pattern is interpreted as characterizing the impact of forest cover loss on the microclimate," Rocha said.

Impact on the forest

According to the researchers, the Atlantic rainforest fragments they studied were located at relatively higher altitudes and had proportionately more carbon stored in the ground than those in the Amazon rainforest areas. Deforestation of Atlantic rainforest areas can therefore jeopardize the biome's carbon balance.

"The Atlantic rainforest is currently in equilibrium and may even be marginally absorbing carbon from the atmosphere but could become a source of carbon emissions," said Carlos Joly (https://bv.fapesp.br/en/pesquisador/283/carlos-alfredo-joly/), a professor at UNICAMP and one of the authors of the study. Joly is a member of BIOTA-FAPESP's steering committee.

Rising temperatures in these forest fragments affect plant respiration more than photosynthesis. This also contributes to the release of larger amounts of carbon from the forest into the atmosphere, Joly explained.

"The two processes combined create a hazardous synergy that leads to a rise in carbon emissions from the forest to the atmosphere," he said.

The effects of deforestation-driven warming in Atlantic rainforest fragments may vary from one tree species to another, he added. Pioneer species, which survive under adverse conditions owing to their high reproductive capacity, usually display greater resilience to temperature changes.

"We don't have enough data yet to predict how long it will take, but in the long run, rising temperatures in Atlantic rainforest fragments due to deforestation could certainly influence the survival of tree species in the forest, albeit some species more than others," he said.

"The proportion of typical mature forest species may diminish, while that of pioneer or initial secondary species, which are more plastic, could increase."

Functions impaired

Considered one of the world's richest and most endangered forests, the Atlantic rainforest occupies 15% of Brazil's land mass in an area that is home to 72% of the population. The biome decreased by 113 square kilometers between 2017 and 2018, according to recent data from the Atlas da Mata Atlântica based on continuous monitoring by NGO Fundação SOS Mata Atlântica in partnership with the National Space Research Institute (INPE).

In addition to the impact on biodiversity, the researchers stressed that even small-scale deforestation impairs important ecosystem services provided by the Atlantic rainforest, such as heat regulation.

"The forest is extremely important to maintaining milder temperatures on the local and regional scale. Changes in its functioning could disrupt this type of ecosystem service," Joly said.

Water supply may also be affected. The Atlantic rainforest is home to seven of Brazil's nine largest drainage basins, where rivers originate that flow into reservoirs that are responsible for almost 60% of the nation's hydroelectric power and supply water to 130 million people.

"The Atlantic rainforest doesn't produce water but protects the springs and permits the storage of water in reservoirs for consumption, power generation, agricultural irrigation and fishing, among other activities," Joly said.

Located in extremely rugged terrain, the Atlantic rainforest helps prevent landslides at times of heavy rain. "Destruction of these forest fragments or changes in their functioning could greatly diminish this protection," Joly said.

Deforestation of the biome, now reduced to 12.4% of its original size, is more severe in São Paulo state than in other areas owing to the construction of roads, gas pipelines and other kinds of infrastructure, he added. This area has also suffered from urban expansion, including the construction of both shantytowns and high-income gated communities.

As one of the most endangered biomes in South America, the Atlantic rainforest has been a focus for numerous studies regarding restoration in recent years. Most of the studies have been conducted by researchers affiliated with BIOTA-FAPESP, according to Joly.

The largest initiative to restore the biome is governed by the Atlantic Rainforest Restoration Pact, launched in 2009 as a multi-stakeholder movement to restore 15 million hectares by 2050.

"A great deal of knowledge has been acquired regarding restoration of the Atlantic rainforest. Evidently, we won't be able to replace everything that has been lost, but at least some of the biome's functions can be restored," Joly said.

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo

Teaming basic scientists with clinicians may improve medical education retention

(Boston)--There is a trend in modern medical school curriculum design to integrate the basic sciences and clinical sciences. Integrating basic science education with its clinical application from the initial stages of learning is thought to improve retention of information and facilitate the transfer of knowledge to the clinical setting.

Basic science educators are not clinicians, yet to accommodate integration they must adjust their content to mesh appropriately with its clinical application. While achievable, this is a challenge that requires intentional effort on the part of the basic science educators.

Researchers from Boston University School of Medicine (BUSM) believe a practical way to facilitate curricular integration is to create opportunities for basic science educators to learn about the clinical application of their area of expertise through shadowing and collaborations with clinician educators and to pair these initiatives with training in effective medical education practices.

"By shadowing clinician educators during patient care or clinical teaching, basic scientists can observe how clinicians apply basic science concepts. Such opportunities help basic science educators better understand how to prioritize and communicate information that has long-term relevance for their learners," explains corresponding author M. Isabel Dominguez, PhD, assistant professor of medicine at BUSM.

Most medical schools are wrestling with the challenge of integration in medical education. Dominguez, along with co-author Ann Zumwalt, PhD, BUSM associate professor of anatomy & neurobiology, discusses practical strategies for developing these opportunities and how they benefit educators.

They believe there are numerous ways that both individuals and institutions can create and facilitate such faculty development opportunities, both for basic science faculty who are full-time educators and for those who engage in medical education part time. "Ultimately, these interventions and initiatives will benefit both the institution's curriculum and the student learners impacted by the curriculum," adds Zumwalt.

Credit: 
Boston University School of Medicine

Using ultrasound localization microscopy to detect oxygen levels in tissues

image: Pengfei Song, an assistant professor of electrical and computer engineering at the Beckman Institute at the University of Illinois, used ultrasound localization microscopy to demonstrate that oxygen levels are lower in tumors compared to healthy tissue.

Image: 
Doris Dahl, Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign

Researchers at the University of Illinois at Urbana-Champaign are using a new application of an existing imaging technique that may help detect tumors in humans.

The technique, super-resolution ultrasound localization microscopy (ULM), was used to visualize the distribution of blood vessels and measure oxygen levels in tumors. The study was carried out in chicken embryos, but the researchers hope to extend the work to humans.

The paper "Ultrasound localization microscopy of renal tumor xenografts in chicken embryo is correlated to hypoxia" was published in Scientific Reports.

ULM uses microbubbles, which are the size of red blood cells, to image tissues.

"We track these bubbles as they flow through blood vessels to obtain a higher resolution image than traditional ultrasounds," said Matthew Lowerison, a postdoctoral research associate in the Song Research Group at the Beckman Institute for Advanced Science and Technology.

Researchers have long known that tumors can be resistant to therapy because of their lower oxygen levels. "Red blood cells can flow through straight blood vessels quickly and efficiently. As a result, the delivery of oxygen and nutrients is efficient," Lowerison said. "In contrast, the blood vessels in tumors are twisted onto each other. It is chaotic and disorganized, and the delivery of oxygen is inefficient."

The members of the Song group have used ULM to demonstrate that oxygen levels are lower in tumors compared to healthy tissue.

"This study is unique because we can image tissue that is deeper inside humans without losing image resolution," said Pengfei Song, an assistant professor of electrical and computer engineering and a full-time faculty member at the Beckman Institute. "Although this technique requires us to inject these microbubbles, they do not have toxicity problems as other imaging agents. Additionally, the microbubbles are approved by the Food and Drug Administration and are widely used in clinic around the world."

Currently, the main challenge posed by the technique is the acquisition time. "We need a large data set to process the images," Lowerison said. "Although as engineers we are focused on getting the best possible images we can, this technique might work for doctors who want a better vascular image than conventional imaging methods provide."

"We are starting to see good results when we use artificial intelligence and machine learning with these technologies, which can help to make this process faster," Song said. "Ultimately, we want to be able to use this technique in a clinical setting for cancer detection, diagnosis, and therapy evaluation."

Credit: 
Beckman Institute for Advanced Science and Technology