
Birds take flight with help from Sonic hedgehog

image: Grafts of Sonic hedgehog-producing cells produce a duplication of the wing, including its feathers.

Sonic hedgehog is normally produced at the posterior margin of the embryonic chicken wing bud. Grafts of Sonic hedgehog-producing cells were made to the anterior side of a wing bud of another chicken embryo. This operation duplicates the tissues of the mature wing, including the black-pigmented feather buds shown in the image. The flight feather buds are the ones protruding from the left and right margins of the wing. Duplicated tissues are on the anterior side of the image (left), and normal tissues are on the posterior side (right).

Image: 
Matthew Towers, University of Sheffield

Flight feathers are amazing evolutionary innovations that allowed birds to conquer the sky. A study led by Matthew Towers (University of Sheffield, UK) and Marian Ros (University of Cantabria, Spain) and published in the journal Development now reveals that flight feather identity is established thanks to Sonic hedgehog - a signalling molecule well-known for giving the digits of the limb their different identities (so that your thumb is different from your pinky, for example). These findings suggest the pre-existing digit identity mechanism was co-opted during the evolution of flight feathers, allowing birds to take to the air.

Feathers and the flight they support have long fascinated humans. In the bird embryo, feathers begin as buds - thickenings of the epidermis - that then develop into follicles, from which the keratin-based feathers are produced. Not all feathers are equal, however - compare, for instance, the downy feathers on the breast of a robin with the flight feathers of its wing. Classical embryological experiments in the 1950s, which involved grafting one part of the embryo onto another, suggested that feather identity (e.g. whether to become a down feather or a flight feather) is established at the earliest stages of development, even before the feather buds form. But in the seventy-odd years since, little has been learned about which signals regulate feather identity.

The new study, carried out with Lara Busby as first author, reveals that flight feather identity is specified by Sonic hedgehog (Shh), a famous signalling molecule known to be involved in the development of limb digits, including human fingers. (And yes, Shh is named after the computer game character, but that's another story.) Using chicken embryos, the scientists found that Shh is required in the earliest stages of wing development for the mature birds to develop flight feathers. They also defined a set of genes that are likely to be involved in this process. Importantly, they discovered that Shh works in a defined temporal sequence to specify the different flight feather identities, mirroring how it specifies the different digit identities. This similarity suggests that the digit identity network was co-opted for flight feather development during evolution.

Dr. Towers said: "Flight feathers are one of the most important evolutionary adaptations that allowed birds to take to the air. Our unexpected findings, showing that the digits and flight feathers share remarkably similar developmental programmes, provide important insights into how the bird wing evolved to permit flight."

The researchers hope to extend this work by trying to understand how the early exposure of embryonic chick wing bud cells to Shh is 'memorised' to allow flight feather formation at a much later stage of development.

Credit: 
The Company of Biologists

Children born with a cleft lip unlikely to be genetically inclined to do poorly at school

New research has found that children born with a cleft lip, either with or without a cleft palate, are not likely to be genetically predisposed to do less well at school than their peers. The study by the Cleft Collective research team at the University of Bristol is published today [6 May] in the International Journal of Epidemiology.

Worldwide, around one in every 700 babies is born with a cleft lip, which is a gap in the upper lip. Some previous studies have shown that children born with a cleft lip or a cleft palate (a gap in the roof of the mouth) do less well in school, even if they don't have any other conditions or known genetic syndromes.

It has been suggested that these observed differences could be due to a genetic predisposition to lower intelligence caused by undiagnosed differences in brain structure or function. The new study by the Cleft Collective team indicates that this is unlikely to be the case for children born with a cleft lip.

The team compared information about the genetics of cleft lip to information about the genetics of educational attainment and intelligence using an approach pioneered at Bristol known as Mendelian randomization and another genetic approach known as 'linkage disequilibrium score regression'. They found very little evidence to suggest that the genetic influences on cleft lip are related to low educational attainment or intelligence.
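For readers unfamiliar with the first of these methods, the sketch below illustrates the core logic of a two-sample Mendelian randomization analysis: genetic variants that influence an exposure act as natural instruments, and the per-variant effect ratios are pooled into an inverse-variance-weighted (IVW) causal estimate. This is an illustrative toy example only; the effect sizes are invented and this is not the Cleft Collective team's code or data.

    # Toy two-sample Mendelian randomization (IVW method). All numbers
    # below are invented for illustration; they are NOT study data.
    import numpy as np

    # Hypothetical per-variant effects on the exposure (cleft lip
    # liability) and on the outcome (educational attainment), with
    # standard errors for the outcome effects.
    beta_exposure = np.array([0.12, 0.08, 0.15, 0.10])
    beta_outcome = np.array([0.002, -0.001, 0.003, 0.000])
    se_outcome = np.array([0.004, 0.003, 0.005, 0.004])

    # Wald ratio: the causal effect implied by each single variant.
    wald = beta_outcome / beta_exposure
    wald_se = se_outcome / np.abs(beta_exposure)

    # Inverse-variance-weighted average across variants.
    weights = 1.0 / wald_se**2
    ivw = np.sum(weights * wald) / np.sum(weights)
    ivw_se = np.sqrt(1.0 / np.sum(weights))

    print(f"IVW causal estimate: {ivw:.4f} +/- {ivw_se:.4f}")

An IVW estimate near zero, as in this toy case, corresponds to the paper's finding of very little evidence that genetic influences on cleft lip relate to educational attainment.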

The findings could have an important impact on family counselling and coping strategies, and on how the public perceives people born with a cleft lip.

Dr Gemma Sharp, Senior Lecturer in Molecular Epidemiology in the MRC Integrative Epidemiology Unit and senior author of the study, said: "Our study has highlighted the need for further research into possible explanations as to why these children tend to do less well at school. For example, the differences in educational attainment might be explained by factors related to having a cleft lip (with or without a cleft palate), such as social stigmatization and impaired speech and language development, or by confounding factors such as family socioeconomic position.

"A better understanding of these factors could help to develop ways to support schools and families to improve educational attainment in children born with a cleft lip."

Christina Dardani, PhD student at the Centre of Academic Mental Health and the first author of the study, added: "Our findings could have a positive impact on how people perceive children born with a cleft lip. When they are born, they are just as likely as anyone else to do well at school. The fact that some studies have found they do less well really highlights the need to provide the right environment and educational opportunities to help these children reach their full potential."

The researchers hope to conduct further research in this area using data from the Cleft Collective Cohort Studies as the children in that study reach school age.

Credit: 
University of Bristol

'Terrible twos' not inevitable: With engaged parenting, happy babies can become happy toddlers

Parents should not feel pressured to make their young children undertake structured learning or achieve specific tasks, particularly during lockdown. A new study of children under the age of two has found that parents who take a more flexible approach to their child's learning can - for children who were easy babies - minimise behavioural problems during toddlerhood.

The flexible method of parenting, known as 'autonomy support', places emphasis on the child taking the lead. As the child engages in tasks, parents should watch and adjust how they respond according to how the child is managing, say the researchers. They acknowledge that this method of helping the child to be in control is not necessarily easy.

"It's not about doing everything for your child, or directing their actions. It's more of a to-and-fro between parent and child. Parents who do best at this can sit back and watch when they see their child succeeding with something, but increase support or adapt the task when they see the child struggling," said Professor Claire Hughes, Deputy Director of the Centre for Family Research at the University of Cambridge, and joint first author of the study with Dr Rory Devine at the University of Birmingham's School of Psychology.

The study, published in the journal Developmental Science, found a link between parental autonomy support at 14 months and reduced behavioural problems ten months later. But this link only applied to children who had been rated as 'easy babies' - those in a generally happy mood, who adapted easily to new experiences and quickly established routines. Children who demonstrated high levels of self-control at 14 months were less likely than their peers to have behaviour problems at 24 months.

"If you're blessed with a happy baby, then you can get them through the 'terrible twos' without things getting too bad or lasting too long, by being flexible about the way you play with your child between the age of 14 and 24 months. A puzzle game, for example, can turn into quite a different game if you allow your child to take the lead," said Hughes.

Many toddlers have temper tantrums and exhibit frustration and defiant behaviour, in what is commonly known as the 'terrible twos'. Unfortunately, the autonomy support strategy isn't equally effective for all children: those born with a more irritable temperament are still more likely to be difficult toddlers.

Parenting must be tailored according to the child, say the researchers. Parents who don't remember their baby having an easy temperament should let go of the idea of achieving specific goals during play, and allow their children to develop at their own pace.

"As we cope with the upheavals of being in lockdown, we're having to be patient with ourselves in so many ways. Parents particularly need to be more patient with the toddlers who found life a bit more challenging, even in ordinary times," said Hughes.

Over 400 expectant couples were recruited for the study from the East of England, New York State and the Netherlands. Each couple was visited when their new baby was 4 months, 14 months and 24 months old, and filmed interacting as their young children carried out a range of specific tasks. The research team carefully rated the level of parental support for each interaction. In addition, parents rated their child's temperament as a baby, and behavioural problems at 14 and 24 months.

Simple tasks were used to test the level of autonomy support parents gave to their child. In one, each child was given farm animal pieces that fitted into cut-out shapes on a board. Some of the parents appeared quite anxious for their child to put the pieces in the right places, and gave them a lot of help. Others spotted that the task was too difficult for their child, and let the game evolve by following the child's lead.

"We had some children who took two animal pieces from a wooden farm puzzle and started clapping them together, and making a game out of the fact that they made a clapping noise. Here, parents might respond by encouraging the child to make animal noises that match the animals being clapped together," said Devine. "Autonomy supportive parenting is about being flexible, following a child's lead, and providing just the right amount of challenge."

During lockdown, many parents are having to look after young children at home rather than leaving them in nursery care during working hours. Trying to keep children motivated and engaged all day can be a daunting task. Yet having more time to spend with young children can also be seen as a rare opportunity to explore new ways of engaging with them, say the researchers.

"Rather than trying to make a child achieve a rigidly defined task, autonomy support is more of a playful interaction. It promotes the child's problem solving and their ability to learn, by letting games or tasks evolve into experiences that engage them," said Hughes.

Previous studies have looked at links between executive function and antisocial behaviour, and separately at family influences on conduct problems. This study is unique in its direct observational measures of parent-child interactions, in combination with a group of executive function tasks.

The researchers found the link between executive function at 14 months and reduced problem behaviours at 24 months held up even when controlling for other factors like a child's language skills, and the quality of mother-child interactions.

Credit: 
University of Cambridge

Real-time visualization of solid-phase ion migration

image: A, Schematic illustration of the ion migration process under e-beam irradiation. B, Reconstructed atomic structure of proposed solid-phase migration within Te nanowire. C, Illustration of the migration path within Te nanowire.

Image: 
Zhen He, Li-Ge Chang, Yue Lin, Feng-Lei Shi, Ze-Dong Li, Jin-Long Wang, Yi Li, Rui Wang, Qing-Xia Chen, Yu-Yang Lu, Qing-Hua Zhang, Lin Gu, Yong Ni, Jian-Wei Liu, Jian-Bo Wu, Shu-Hong Yu*

The USTC team led by Prof. YU Shuhong from the University of Science and Technology of China, collaborating with Prof. WU Jianbo from Shanghai Jiao Tong University, has shed new light on the topic of solid-phase ion migration. The researchers demonstrated a unique in-situ strategy for visualizing, at the atomic scale, dynamic solid-phase ion migration between nanostructures separated by a nanogap. The research article, entitled "Real-Time Visualization of Solid-Phase Ion Migration Kinetics on Nanowire Monolayer", was published in the Journal of the American Chemical Society on April 29.

Ion migration - in which ions move through an intact anion sublattice or metal oxide lattice - has been recognized as a critical step in determining the performance of numerous devices in chemistry, biology, and materials science. Rational control of the ion transport process would significantly improve the corresponding properties. Ion migration is usually accompanied by charge and mass transfer, which makes it complex and difficult to trace. To date, efforts have been devoted to investigating the dynamic migration mechanism, such as externally heated or electrically activated migration. However, direct visualization and quantitative investigation of ion migration in the solid phase remain challenging and have seldom been reported. The requirement for specially designed apparatus also impedes a comprehensive understanding of ion migration kinetics, which hampers further practical applications in various areas.

Chemical transmission electron microscopy (ChemTEM) is a newly emerging technique in which the chemical reaction is triggered by the electron beam during the imaging process. The kinetic energy and heat transferred from the e-beam to the sample are mainly responsible for bond dissociation. By adjusting the e-beam dose rate, the type and rate of the chemical reaction, as well as the bond dissociation, can be well controlled. This experimental approach offers an opportunity to investigate the ion migration process in situ.

Taking up the challenge, the researchers report a unique technique for investigating the solid-phase ion migration process at the atomic scale, using Ag ions on Te nanowires as the research model. This complicated process was tracked not only within a single nanowire but also between two neighboring nanowires separated by an obvious nanogap, as revealed by both phase-field simulation and ab initio modeling. A migration "bridge" between neighboring nanowires was observed. Furthermore, these observations also apply to the migration of other noble metal ions on other semiconductor nanowires (Ag ion migration on Se@Te nanowires and Cu ion migration on Te nanowires). These findings provide critical insights into the solid-phase ion migration kinetics occurring in nanoscale systems and offer an efficient tool for exploring other ion migration processes, which will facilitate the fabrication of customized, new hetero-nanostructures in the future.

Credit: 
University of Science and Technology of China

Surfaces that grip like gecko feet could be easily mass-produced

video: The slightest bit of shear tension makes gecko adhesion surfaces grip, and the release of that same tension makes them let go. The same gripping surfaces can pick up objects of all shapes, sizes, and materials with the exception of Teflon and other non-stick surfaces.

Image: 
Georgia Tech / Varenberg lab

Why did the gecko climb the skyscraper? Because it could; its toes stick to just about anything. For a few years, engineers have known the secrets of gecko stickiness and emulated them in strips of rubbery materials useful for picking up and releasing objects, but simple mass production for everyday use has been out of reach until now.

In a new study, researchers at the Georgia Institute of Technology have developed a method of making gecko-inspired adhesive materials that is much more cost-effective than current methods. It could enable mass production and the spread of the versatile gripping strips to manufacturing and homes.

Polymers with "gecko adhesion" surfaces could be used to make extremely versatile grippers to pick up very different objects even on the same assembly line. They could make picture hanging easy by adhering to both the picture and the wall at the same time. Vacuum cleaner robots with gecko adhesion could someday scoot up tall buildings to clean facades.

"With the exception of things like Teflon, it will adhere to anything. This is a clear advantage in manufacturing because we don't have to prepare the gripper for specific surfaces we want to lift. Gecko-inspired adhesives can lift flat objects like boxes then turn around and lift curved objects like eggs and vegetables," said Michael Varenberg, the study's principal investigator and an assistant professor in Georgia Tech's George W. Woodruff School of Mechanical Engineering.

Current grippers on assembly lines, such as clamps, magnets, and suction cups, can each lift limited ranges of objects. Grippers based on gecko-inspired surfaces, which are dry and contain no glue or goo, could replace many grippers or just fill in capability gaps left by other gripping mechanisms.

Drawing out razors

The adhesion comes from protrusions a few hundred microns in size that often look like sections of short, floppy walls running parallel to each other across the material's surface. How they work by mimicking geckos' feet is explained below.

Up to now, these mesoscale walls have been produced by molding: pouring ingredients onto a template, letting the mixture react and set into a flexible polymer, then removing it from the mold. But the method is inconvenient.

"Molding techniques are expensive and time-consuming processes. And there are issues with getting the gecko-like material to release from the template, which can disturb the quality of the attachment surface," Varenberg said.

The researchers' new method formed those walls by pouring ingredients onto a smooth surface instead of a mold, letting the polymer partially set, then dipping rows of laboratory razor blades into it. The material set a little more around the blades, which were then drawn out, leaving behind micron-scale indentations surrounded by the desired walls.

Varenberg and first author Jae-Kang Kim published details of their new method in the journal ACS Applied Materials & Interfaces on April 6, 2020.

Forget about perfection

Though the new method is easier than molding, developing it took a year of dipping, drawing, and readjusting while surveying finicky details under an electron microscope.

"There are many parameters to control: Viscosity and temperature of the liquid; timing, speed, and distance of withdrawing the blades. We needed enough plasticity of the setting polymer to the blades to stretch the walls up, and not so much rigidity that would lead the walls to rip up," Varenberg said.

Gecko-inspired surfaces have a fine topography on a micron-scale and sometimes even on a nanoscale, and surfaces made via molding are usually the most precise. But such perfection is unnecessary; the materials made with the new method did the job well and were also markedly robust.

"Many researchers demonstrating gecko adhesion have to do it in a cleanroom in clean gear. Our system just plain works in normal settings. It is robust and simple, and I think it has good potential for use in industry and homes," said Varenberg, who studies surfaces in nature to mimic their advantageous qualities in human-made materials.

Gecko foot fluff

Behold the gecko's foot. It has ridges on its toes, and this led some in the past to think gecko feet stick by suction or some kind of clutching by the skin.

But electron microscopes reveal a deeper structure - spatula-shaped bristly fibrils a few dozen microns long protrude from those ridges. The fibrils make such thorough contact with surfaces, down to the nanoscale, that weak attractions between atoms on both sides appear to add up enormously, creating strong overall adhesion.

In place of fluff, engineers have developed rows of shapes covering materials that produce the effect. A common shape makes a material's surface look like a field of mushrooms that are a few hundred microns in size; another is rows of short walls like those in this study.

"The mushroom patterns touch a surface, and they are attached straightaway, but detaching requires applying forces that can be disadvantageous. The wall-shaped projections require minor shear force like a tug or a gentle grab to generate adherence, but that is easy, and letting go of the object is uncomplicated, too," Varenberg said.

Varenberg's research team used the drawing method to make walls with U-shaped spaces between them and walls with V-shaped spaces between them. They worked with polyvinylsiloxane (PVS) and polyurethane (PU). The V-shape made in PVS worked best, but polyurethane is the better material for industry, so Varenberg's group will now work toward achieving the V-shaped gecko gripping pattern in PU for the best possible combination.

Credit: 
Georgia Institute of Technology

Clinical implications of chromatin accessibility in human cancers

image: Chromosomal landscape of chromatin accessibility in human cancers. (A) Distribution of all the regulatory elements, such as promoters, enhancers, introns, and other elements, across chromosomes. Here the other elements denote elements located in exonic regions, 3′ UTRs, or 5′ UTRs. Color indicates the type of genomic region overlapped by the peak. UTR, untranslated region. (B) Genome landscapes of chromatin accessibility. The chromatin accessibility scores indicate the likelihood of chromatin openness and are plotted in two-dimensional space representing chromosomal positions of human genome assembly (GRCh38). One dimension consists of the 23 chromosomes from Chr1 to ChrX, and the other dimension indicates the genomic coordinates on a chromosome from p arm to q arm. Correlation of the colors and accessibility scores is indicated by the accompanying colorbar.

Image: 
Correspondence to - Yuexin Liu - yliu8@mdanderson.org

Volume 11, Issue 18 of Oncotarget: The clinical implications of chromatin accessibility assessed by ATAC-seq profiling in human cancers, especially in a large patient cohort, are largely unknown.

In this study, the authors analyzed ATAC-seq data in 404 cancer patients from the Cancer Genome Atlas, representing the largest cancer patient cohort with ATAC-seq data, and correlated chromatin accessibility with patient demographics, tumor histology, molecular subtypes, and survival.

Chromatin accessibility, especially on the X chromosome, is strongly dependent on patient sex, but not much on patient age or tumor stage.

Dr. Yuexin Liu from The Department of Bioinformatics and Computational Biology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA said, "Cancer is a heterogeneous disease with a diversity of cell types which thus play a deterministic role on patient outcome or therapeutic responses."

"Cancer is a heterogeneous disease with a diversity of cell types which thus play a deterministic role on patient outcome or therapeutic responses."

- Dr. Yuexin Liu, The Department of Bioinformatics and Computational Biology, The University of Texas MD Anderson Cancer Center

The assay for transposase-accessible chromatin using sequencing (ATAC-seq) employs hyperactive Tn5 transposase for highly efficient cutting of exposed DNA and simultaneous ligation of adapters, which are then subjected to next-generation sequencing.

Therefore, ATAC-seq has enabled the genome-wide profiling of chromatin accessibility in primary human cancers.

Unexpectedly, there has been little cancer research using the ATAC-seq technique, and it has covered only a limited number of cancer types, such as prostate cancer, pancreatic cancer, and hematological malignancies.

Recently, the Cancer Genome Atlas performed ATAC-seq on 410 tumor samples derived from 404 unique donors and generated a catalog of chromatin accessibility in human cancers.

The team further integrated the ATAC-seq data with patient clinical annotations and molecular characteristics to determine the association between chromatin accessibility in promoter regions and patient demographics such as sex, age, tumor stage and histology, molecular subtype, and patient survival.

The Liu Research Team concluded in their Oncotarget Research Article, "chromatin accessibility has important clinical implications in human cancers and our results provide an additional perspective in tumor initiation and progression."

Credit: 
Impact Journals LLC

All-fiber optical wavelength converter

image: (a) Schematic of the operations of wavelength conversion due to the SHG and SFG from the GaSe-integrated microfiber. When pumped by two continuous-wave lasers with wavelengths λ1 and λ2, three light signals with newly generated wavelengths [λ1/2, λ2/2 and λ3=λ1λ2/(λ1+λ2)] arise via second-order nonlinear optical effects. Top inset shows the energy diagrams of the SHG and SFG in the fiber device. (b) Wavelength-conversion spectra from the microfiber with and without GaSe integration when pumped by a pulsed laser at 1550 nm, showing the SHG peak at 775 nm. In comparison, the SHG intensity is enhanced by more than four orders of magnitude after GaSe integration. (c) Spectral evolutions of SHG1, SHG2 and SFG when pumped by a 1310 nm DFB laser and a tunable laser (Pump-2) varying in wavelength from 1500 nm to 1620 nm.

Image: 
Biqiang Jiang, Zhen Hao, Yafei Ji, Yueguo Hou, Ruixuan Yi, Dong Mao, Xuetao Gan and Jianlin Zhao

Silica optical fibers exhibit intrinsic features such as ultralow loss, a high damage threshold, and a small mode field, enabling long-haul communications and sensing, and they have greatly changed our daily lives and working styles. The advance of fibers also gave birth to nonlinear fiber optics, thanks to the long interaction length and high power density in the fiber core. However, the lowest-order nonlinear effects in optical fibers originate from the third-order nonlinear susceptibility, which requires extremely high peak power. In nonlinear optics, second-order nonlinear responses are the primary alternative; they rely on a much higher susceptibility, which greatly reduces the pump power required for nonlinear optical effects. Unfortunately, the centrosymmetric nature of silica fibers precludes wavelength conversion based on their second-order nonlinearity. Considerable efforts have been made to overcome this limitation. However, wavelength conversion has still required high-intensity pulsed lasers, complex post-processing, or harsh fiber fabrication conditions. Conversion schemes with low power consumption over a wide wavelength region thus remain scientists' pursuit for extensive and practical applications.

Recently, in a new paper published in Light: Science & Applications, scientists from Northwestern Polytechnical University, China, proposed and developed an all-fiber wavelength converter assisted by few-layer gallium selenide (GaSe) nanoflakes. Thanks to the strong evanescent field of the microfiber and the ultrahigh second-order nonlinearity of the GaSe nanoflakes, the efficiency of wavelength conversion from the GaSe-integrated microfiber is enhanced by more than four orders of magnitude compared with a pristine microfiber. This high-efficiency wavelength conversion makes pumping with a continuous-wave (CW) laser possible. In practical applications, CW-pumped nonlinear fiber optics with simple, low-power and low-cost light sources, such as semiconductor laser diodes, would be highly desirable. More importantly, the researchers found that the wavelength conversion can be operated over a wide wavelength range covering the whole C and L telecom bands as well as the O band. Moreover, the generation of new wavelengths requires only a sub-milliwatt CW laser.

The all-fiber wavelength converter is based on second-order nonlinear processes, namely second-harmonic generation (SHG) and sum-frequency generation (SFG). These are two common phenomena in nonlinear optics, but they are not easy to excite, especially in silica fiber devices. The scientists describe the key points of their all-fiber wavelength converter scheme.
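As a quick check on the numbers quoted in the figure caption, energy conservation fixes the converted wavelengths: SHG halves the pump wavelength, and SFG obeys 1/λ3 = 1/λ1 + 1/λ2, i.e. λ3 = λ1λ2/(λ1+λ2). The short sketch below (illustrative only, not code from the study) reproduces the reported 775 nm SHG peak and the SFG wavelength expected for the 1310 nm and 1550 nm pumps.

    # Wavelength relations for SHG and SFG (from energy conservation).
    def shg(pump_nm: float) -> float:
        """Second-harmonic wavelength of a single pump."""
        return pump_nm / 2.0

    def sfg(l1_nm: float, l2_nm: float) -> float:
        """Sum-frequency wavelength of two pumps."""
        return l1_nm * l2_nm / (l1_nm + l2_nm)

    print(shg(1550.0))                    # 775.0 nm SHG peak, as in the caption
    print(shg(1310.0))                    # 655.0 nm second SHG signal
    print(round(sfg(1310.0, 1550.0), 1))  # ~710.0 nm SFG signal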

"To obtain the enhanced conversion efficiency, we control the diameter of the microfiber for satisfying the phase-matching condition, according to the theory and simulation results. Also, we have to optimize the integration technique and reduce scattering loss of the microfiber introduced by the GaSe integration, by improving the performance of GaSe nanosheets, such as uniformity, thickness and size."

"Of course, the efficiency of the wavelength conversion can be further enhanced by using a direct chemical vapor deposition growth technique for the perfect coating of 2D materials, which could facilitate a strong and tunable light-matter interaction" they added.

"The proposed CW pumped all-fiber wavelength converter is easy to integrate with current telecom infrastructures, and will promote many new applications, such as all-fiber all-optical signal processing, new light source generations at awkward wavelengths, and so on" the scientists forecast.

"This hybrid fiber device, by integrating other atomic layered materials, will pave the way for achieving high-performance wavelength or frequency modulation and manipulation in an all-fiber structure." they also believe.

Credit: 
Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS

UB investigators uncover cellular mechanism involved in Krabbe disease

BUFFALO, N.Y. - A group of researchers at the University at Buffalo has published a paper that clarifies certain cellular mechanisms that could lead to improved outcomes in patients with globoid cell leukodystrophy, commonly known as Krabbe disease.

The paper, titled "Macrophages Expressing GALC Improve Peripheral Krabbe Disease by a Mechanism Independent of Cross-Correction," was published today (May 5) in the journal Neuron.

The research was led by Lawrence Wrabetz, MD, and M. Laura Feltri, MD. Wrabetz and Feltri head the Hunter James Kelly Research Institute and both are professors in the departments of Biochemistry and Neurology in the Jacobs School of Medicine and Biomedical Sciences at UB.

The institute is named for the son of former Buffalo Bills quarterback Jim Kelly. Hunter Kelly died at age 8 in 2005 from complications of Krabbe disease.

Krabbe disease is a progressive and fatal neurologic disorder that usually affects newborns and causes death before a child reaches the age of 2 or 3.

Traditionally, hematopoietic stem cell transplantation, also known as a bone marrow transplant, has improved the long-term survival and quality of life of patients with Krabbe disease, but it is not a cure.

It has long been assumed that the bone marrow transplant works by a process called cross-correction, in which an enzyme called GALC is transferred from healthy cells to sick cells.

Using a new Krabbe disease animal model and patient samples, the UB researchers determined that in reality cross-correction does not occur. Rather, the bone marrow transplant helps patients through a different mechanism.

The researchers first determined which cells are involved in Krabbe disease and by which mechanism. They discovered that both myelin-forming cells, or Schwann cells, and macrophages require the GALC enzyme, which is missing in Krabbe patients due to genetic mutation.

Schwann cells require GALC to prevent the formation of a toxic lipid called psychosine, which causes myelin destruction and damage to neurons. Macrophages require GALC to aid with the degradation of myelin debris produced by the disease.

The research showed that hematopoietic stem cell transplantation does not work by cross-correction, but by providing healthy macrophages with GALC.

According to Feltri, the data reveal that improving cross-correction would be a way to make bone marrow transplants and other experimental therapies such as gene therapy more effective.

"Bone marrow transplantation and other treatments for lysosomal storage disorders, such as enzyme replacement therapy, have historically had encouraging but limited therapeutic benefit," said study first author Nadav I. Weinstock, an MD-PhD student in the Jacobs School. "Our work defined the precise cellular and mechanistic benefit of bone marrow transplantation in Krabbe disease, while also shedding light on previously unrecognized limitations of this approach.

"Future studies, using genetically engineered bone marrow transplantation or other novel approaches, may one day build on our findings and eventually bridge the gap for effectively treating patients with lysosomal disease," he continued.

Credit: 
University at Buffalo

Broadband enhancement relies on precise tilt

image: Broadband enhancement of on-chip single photon extraction via tilted hyperbolic metamaterials. A quantum emitter is positioned very close to a hyperbolic metamaterial whose optical axis is tilted with respect to the end facet of the nanofiber.

Image: 
Lian Shen

WASHINGTON, May 5, 2020 -- Quantum photonics involves a new type of technology that relies on photons, the elementary particle of light. These photons can potentially carry quantum bits of information over large distances. If the photon source could be placed on a single chip and made to produce photons at a high rate, this could enable high-speed quantum communication or information processing, which would be a major advance in information technologies.

In this week's issue of Applied Physics Reviews, from AIP Publishing, researchers propose a simple on-chip photon source that uses a type of material known as a hyperbolic metamaterial. The investigators carried out calculations to show that a prototype with the hyperbolic metamaterial arranged in a precise way can overcome problems of low efficiency and allow high repetition rates for on-chip photon sources.

Until recently, single-photon sources have usually been made from self-assembled quantum dots in semiconductors or from materials, like diamonds, with structural defects. It is difficult, however, to produce single photons at high rates from such materials. Some approaches to remedy this problem have been tried, but so far, the results suffer from a narrow bandwidth and low efficiency.

Another way to approach these problems is to use special materials, such as metamaterials, for the photon source. Metamaterials are stacks of metallic and dielectric layers, structured at a level much smaller than the wavelength of light in use. They exhibit unusual optical properties when formed into shapes, such as nanowires. Electrons flowing through the material set up a collective oscillation known as a surface plasmon, generating localized electromagnetic fields.

Hyperbolic metamaterials are highly anisotropic versions of these metamaterials. They manipulate light in a variety of ways. For example, they can shrink the wavelength of light and allow it to travel freely in one direction while stopping it in another.

The investigators propose a geometry for their on-chip photon source where a hyperbolic metamaterial is tilted at a precise angle with respect to the end facet of the nearby nanofiber used to transmit the emitted photons. By choosing the tilt angle carefully, light reflections are suppressed at the interface with the fiber.

Calculations by the group showed that this simple geometrical arrangement should overcome previous limitations with these materials.

Co-author Lian Shen said, "Our work represents a vital step toward the implementation of spectrally broad single photon sources with high repetition rates for on-chip quantum networks."

Credit: 
American Institute of Physics

Editing selfies is counterproductive: Study

Girls and young women shouldn't spend a lot of time editing selfies for social media because it negatively influences their thoughts about their looks, according to a new Flinders University publication.

In a study published in Body Image, Flinders University psychology researchers asked 130 women aged 18 to 30 to view Instagram snaps of thin and average-sized women, before analysing their selfie habits.

They found that the longer the women took to edit and post selfies, the worse their mood and the greater their dissatisfaction with their facial appearance.

The women in the study spent about 4½ minutes editing up to five selfies, to smooth and change skin tone, remove dark eye circles, shape their faces and remove flaws.

Flinders University Professor Marika Tiggemann says investing time and effort in taking, selecting, and editing selfies is not harmless, because these activities have detrimental effects on women motivated to present the best possible version of themselves.

"We found an increase in dissatisfaction following the selfie task was a based on the extent of editing being undertaken. This demonstrates that the editing of selfies is not a benign process but has negative consequences, even though participants reported being much happier with their edited selfie than their original photo."

"Many women and girls spend considerable time and effort in taking and selecting their selfies, for example, finding the best lighting and most flattering angle, which can then be further enhanced by filters or digital editing to maximise their appearance and appeal."

Prof Tiggemann says teenagers and young women should also be dissuaded from using software to edit selfies.

"Women appear to be motivated by the wish to present the best possible version of themselves and are correspondingly substantially happier with their edited selfie than the original photo. Yet, at the same time, these activities have detrimental effects in terms of poorer mood and facial dissatisfaction."

The results also indicate extensive selfie editing leads to feeling disingenuous online.

"These suggestions are respectively consistent with the two unique predictors of increased facial dissatisfaction, such as thinking about how others will judge you, and thinking about making yourself look better than you do in real life," says Professor Tiggemann.

"Our findings illustrate the difficulties women encounter in negotiating the contemporary social media world."

'Uploading your best self: Selfie editing and body dissatisfaction' (2020) by M Tiggemann, I Anderberg and Z Brown has been published in Body Image (Elsevier) DOI:10.1016/j.bodyim.2020.03.002

Credit: 
Flinders University

Oceans should have a place in climate 'green new deal' policies, scientists suggest

CORVALLIS, Ore. - The world's oceans play a critical role in climate regulation, mitigation and adaptation and should be integrated into comprehensive "green new deal" proposals being promoted by elected officials and agency policymakers, a group of ocean scientists suggests in a new paper.

"The 'green new deal' has been the headline, but very few have been talking about the oceans in those conversations," said Steven Dundas, an environmental and resource economist in Oregon State University's College of Agricultural Sciences and the Coastal Oregon Marine Experiment Station in Newport, Oregon.

"We think it's important to add a touch of ocean blue to this conversation because the oceans play an important role in efforts to mitigate effects of climate change," he said. "Our proposed 'teal deal' is an integrated approach that is more likely to generate cost-effective and equitable solutions to this global threat."

Dundas is one of three senior authors of the paper, which was published recently by the journal Conservation Letters. The other senior authors are Arielle Levine and Rebecca Lewison of San Diego State University. Additional authors include OSU's Angee Doerr, Ana Spalding and Will White.

The scientists highlight four areas of investment commonly touted in "green new deal" proposals that also apply to the world's oceans: energy, transportation, food security and habitat restoration.

"Adding the oceans to climate policy doesn't mean you're ignoring the terrestrial approaches to climate change mitigation," Dundas said. "It means adopting a portfolio approach that includes both. We hope this paper and our recommendations broaden the policy options needed to meet the grand challenge of climate change."

The concept of a green new deal emerged last year as a way to address climate change. International environmental leaders are now suggesting that coronavirus recovery plans present an opportunity to address climate change.

In the renewable energy sector, the ocean's winds, waves and currents represent a significant source of clean energy that could reduce emissions, meet demand for electricity and spur economic growth through new industry. But many hurdles remain, since offshore energy projects are subject to a range of regulatory policies from the local to the national level, the researchers said.

In the transportation sector, 80% of merchandise around the globe is transported by sea, contributing about 3% of human-made emissions. Growth in world trade is predicted to increase emissions by 150 to 250% by 2050. But measures to address and improve maritime emissions reductions are largely absent from international efforts. Modifying hull designs, relying more on biofuels or wind power and other steps could reduce shipping emissions, the researchers suggested.

In the area of food security, marine fisheries remain one of the most sustainable sources of protein for human consumption, with a lower total carbon footprint than many land-based food sources.

As climate change impacts the size and distribution of marine resources, fishing communities are faced with a few options: following the fish, which could increase costs and emissions; finding an alternative livelihood, which is often not feasible; or switching to a new species, which also could come with increased costs and requires careful fisheries management, the researchers said.

Aquaculture - the term for commercially raising fish or growing seafood products - also holds potential for growth at a relatively low emissions cost, researchers said. For example, seaweed aquaculture could mitigate hundreds of tons of emissions each year.

"Properly executed aquaculture, paired with sustainable fisheries, has the potential to enhance the food supply, decrease the carbon footprint of protein sources and sequester carbon at the same time," said Lewison.

In the area of habitat restoration, investment in projects that restore coastal habitats such as mangroves, tidal wetlands, kelp forests and seagrasses should be a key component of climate policy, the researchers suggest. These habitats currently store up to 25 billion metric tons of carbon, and further restoration could increase that storage capability.

Coastal habitat restoration also can increase flood and erosion protection and mitigate storm impacts, reducing the vulnerability of coastal populations to extreme weather impacts and reducing costs of disaster aid.

"Investing in these four sectors can benefit communities across the United States," said Levine. "The impacts and the benefits go far beyond coastal communities."

The researchers hope to use the paper and their argument to encourage policymakers to consider the oceans in "green new deal" proposals moving forward.

Credit: 
Oregon State University

How race affects listening during political conversations

COLUMBUS, Ohio - A new study offers a rare look at how black and white people listen to each other during political discussions, including those that touch on controversial issues about race.

Researchers at The Ohio State University found that, in general, blacks were slightly more likely than whites to say they really listen to others during political discussions.

But in discussions of controversial topics of race - such as white people's use of the Confederate flag and police treatment of blacks - black respondents were more likely than whites to say that it would be "hard" to truly listen to a cross-race discussion partner.

The results show how much the topics of political discussions matter when it comes to how race affects listening, said William Eveland, lead author of the study and professor of communication and political science at Ohio State.

"It makes sense that black people may be better listeners in general, because they have to be constantly monitoring for threats," Eveland said.

"But when it comes to talking specifically about issues of race, black people are more likely to have had prior experiences of racism or micro-aggressions, which make it harder for them to have these conversations with whites."

Eveland conducted the study with Ohio State colleagues Osei Appiah, professor of communication, and Kathryn Coduto and Olivia Bullock, doctoral students in communication. Their paper was published recently in the journal Political Communication.

Their research encompassed two studies.

The first study involved 749 adult Americans who took part online. The researchers oversampled blacks so that they were roughly half of the participants.

Respondents were asked how much they agreed (on a five-point scale from "strongly disagree" to "strongly agree") with four statements that measured how much of a listening approach they took in political conversations.

For example, they were asked "When I talk politics, it is more important for me to learn from others than to convince them."

The researchers also asked participants if they had had any discussions about politics with cross-race conversation partners in the past month.

Overall, blacks were slightly more likely than whites to engage in political listening. However, that finding no longer applied once the researchers took into account whether participants had discussions with opposite-race partners.

Researchers attributed that change to the fact that blacks were more likely than whites to have opposite-race discussion partners: 48 percent of blacks, compared to only 31 percent of whites.

"People who talked about politics with someone of the opposite race were more open to listening, and blacks were more likely to be in that category," Eveland said.

In a second study, the researchers looked specifically at listening in the context of controversial issues surrounding race.

This involved 800 respondents specifically recruited so the study would include 200 black Democrats, 200 black Republicans, 200 white Democrats and 200 white Republicans.

In addition to listing their own race, each participant was asked if they identified with their own race and with the opposite race. In this study, identification referred to a group that the participants "feel particularly close to - that is people who are most like you in their ideas, interests, and feelings."

Since the first study found that most whites and nearly half of blacks did not regularly talk to cross-race partners, the researchers instructed participants to imagine political discussions.

Participants were asked to anticipate a conversation about one of three hot-button topics: police treatment of blacks in the United States, white people displaying the Confederate flag, or black athletes kneeling during the national anthem.

They were told this conversation would be with a person of the other race who was a stranger, co-worker, friend or family member.

Researchers asked participants to take a minute to imagine the conversation, considering who would initiate the conversation, how long it would last, what they might say, what the discussion partner might say, what feelings they might experience and what they might learn that they didn't know before.

After participants had time to imagine the conversation, they answered one question: "Do you think it would be easy or hard to truly listen" to their cross-race partners' views on the topic during the conversation?

They rated the difficulty on a four-point scale from very easy to very hard.

Results showed that blacks tended to say it would be harder to "truly listen" to their white partners than whites did with their imagined black partners.

The topic they talked about - police treatment of blacks, Confederate flags or athletes kneeling - had no effect on the results. It also didn't matter if participants imagined talking to a stranger, co-worker, friend or family member.

The data in this study can't say why blacks said they would find it harder to listen than did whites, Eveland said. But other studies provide a possible explanation.

"Blacks often have had negative prior experiences talking about race-related issues. They've often encountered explicit racism or micro-aggressions that could lead them to put up defensive walls," he said. "They may want to avoid these conversations altogether."

But participants who identified with the opposite race - blacks identifying with "European Americans" and whites identifying with "African Americans" - said they would find listening easier than those who identified only with their own race or with no race at all.

"That was one bright spot. It suggests that getting people to identify with the feelings and ideas of people from the opposite race could be one path to more cross-race listening," Eveland said.

Looking across both studies, the age, sex and education of participants had no relationship to political listening.

"Surprisingly, party identification was also unrelated to listening in either study," Eveland said.

Not surprisingly, Eveland said, it was those people who had the most real-life experience or connection with people of the opposite race who showed the most capacity for listening.

In the first study, the best listeners were people who reported having prior political discussions with someone of the opposite race. In the second study, it was those who identified with the opposite race and who had more opposite-race relatives.

Those real-life connections may be difficult to achieve on a broad scale, Eveland said, but they could play a vital role in improving our political discourse.

"If there were more listening - and greater perception that other people would listen to us - we might not have the degree of partisan polarization we currently have," he said.

"It is important to find ways to encourage people to listen."

Credit: 
Ohio State University

Simulations forecast nationwide increase in human exposure to extreme climate events

image: The map displays projected changes in human exposure to extreme climate events at a 1-kilometer scale from 2010 to 2050, which range from minor decreases in rural and suburban areas to moderate and major increases in densely populated urban centers.

Image: 
Adam Malin/Oak Ridge National Laboratory, U.S. Dept. of Energy

OAK RIDGE, Tenn., May 5, 2020 -- By 2050, the United States will likely be exposed to a larger number of extreme climate events, including more frequent heat waves, longer droughts and more intense floods, which can lead to greater risks for human health, ecosystem stability and regional economies.

This potential future was the conclusion that a team of researchers from the Department of Energy's Oak Ridge National Laboratory, Istanbul Technical University, Stanford University and the National Center for Atmospheric Research reached by using ORNL's now-decommissioned Titan supercomputer to calculate the trajectories of nine types of extreme climate events. The team based these calculations on the National Oceanic and Atmospheric Administration's National Centers for Environmental Information Climate Extremes Index, or CEI.

Previous studies have demonstrated the impact that a single type of extreme, such as temperature or precipitation, could have on broad climate zones across the U.S. However, this team estimated the combined consequences of many different types of extremes simultaneously and conducted their analysis at the county level, a unique approach that provided unprecedented regional and national climate projections that identified the areas and population groups that are most likely to face such hardships. Results from this research are published in Earth's Future.

"We calculated population exposure at a 1-kilometer scale, which had never been done before, to provide more precise estimates," said Moetasim Ashfaq, a climate computational scientist at ORNL.

The team combined a high-resolution climate model ensemble, CEI estimates for various climate extreme categories, and future population projections in order to simulate multiple scenarios supplied by the Intergovernmental Panel on Climate Change, or IPCC. The team based one such simulation on a scenario called Representative Concentration Pathway 8.5, which considers how climate conditions are likely to evolve if greenhouse gas emissions continue to rise without intervention.

According to the researchers' estimates, on average, more than 47 million people throughout the country are exposed to extreme climate conditions annually, and this population exposure has been increasing in recent decades. They expect the prevailing trend to continue and anticipate that the number of people exposed could double by 2050, meaning one in every three people would be directly affected. Projected population growth could increase exposure even more.
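To make "population exposure" concrete, here is a minimal, hypothetical sketch of the kind of gridded calculation described above: count the people living in cells whose extremes index exceeds a threshold. The grids, threshold and numbers are invented for illustration; the actual study used CEI categories, 1-kilometer population data and county-level aggregation.

    # Toy gridded exposure calculation (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical grids: residents per cell, and an extremes index in
    # the spirit of the CEI (fraction of the year in extreme conditions).
    population = rng.integers(0, 5000, size=(100, 100))
    extremes_index = rng.random((100, 100))

    # A cell counts as exposed when its index exceeds a chosen threshold.
    EXTREME_THRESHOLD = 0.9
    exposed = population[extremes_index > EXTREME_THRESHOLD].sum()

    print(f"Exposed population: {exposed:,} of {population.sum():,} "
          f"({100 * exposed / population.sum():.1f}%)")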

Without adjusting for any change in population habits, this increased exposure could cause or exacerbate health problems. For example, high temperatures can worsen cardiovascular, respiratory and other medical conditions. Droughts can increase the risk of infectious disease outbreaks by reducing air quality and contaminating water and food sources.

Extreme heat can also reduce crop yields, disrupting economies reliant on agriculture. Additionally, costly and dangerous natural disasters such as wildfires and flash floods can leave trees defenseless against disease and insect infestations that can destroy entire ecosystems.

The researchers analyzed their results in comparison with a "reference period" containing historical simulation data from 1980 to 2005, and they designed their simulations to study human contributions to climate projections. As a result, the annual greenhouse gas concentrations were aligned between the historical simulations and the observations, but the occurrences of observed natural modes of climate variability were not.

This lack of alignment in natural modes of climate variability, combined with the resemblance between simulated and observed trends in exposure to climate extremes, helped the team conclude that human behavior could be responsible for the observed increase in population exposure to climate extremes in the U.S. These results also improved confidence in the projected doubling of population exposure that the team anticipates will occur in the next 30 years unless greenhouse gas levels are reduced.

"Seeing the same upward trend in the number of climate extremes in our historical simulations and observations strongly suggests that these changes are driven by human activity," Ashfaq said.

The researchers are preparing to run another set of simulations based on new scenarios for the next IPCC report, and their existing data have already been incorporated into other studies.

"These collaborative efforts could uncover how various climate extremes affect certain areas and help determine the types of policies and mitigation strategies that may be required to prevent or reduce the damage," Ashfaq said.

Credit: 
DOE/Oak Ridge National Laboratory

Expansion, environmental impacts of irrigation by 2050 greatly underestimated

The amount of farmland around the world that will need to be irrigated in order to feed an estimated global population of 9 billion people by 2050 could be up to several billion acres, far higher than scientists currently project, according to new research. The result would be a far greater strain on aquifers, as well as the likely expansion of agriculture into natural ecosystems as farmers search for water.

Existing irrigation models -- which are widely used to define policies on water and food security, environmental sustainability, and climate change -- suggest that the amount of agricultural land requiring irrigation could extend between 240 million and 450 million hectares (590 million to 1.1 billion acres) during the next 30 years.

But those projections likely underestimate population growth and too confidently assume how much land and water will be available for agriculture without having to find new sources, according to researchers from Princeton University, the University of Reading in the United Kingdom, and the University of Bergen in Norway.

The amount of irrigated land could in fact increase to as high as 1.8 billion hectares (4.4 billion acres), the study authors reported in the journal Geophysical Research Letters, writing, "Policymakers should acknowledge that irrigated areas can grow much more than previously thought in order to avoid underestimating potential environmental costs."

First author Arnald Puy, a postdoctoral researcher in ecology and evolutionary biology at Princeton, said that an expansion of irrigation of this magnitude would have dramatic effects on the environment and other sectors of society. Puy, who is affiliated with the Center for BioComplexity administered by the Princeton Environmental Institute (PEI), worked with co-authors Samuele Lo Piano of the University of Reading and Andrea Saltelli of the University of Bergen.

Irrigation is currently responsible for about 70% of freshwater withdrawals worldwide. About 90% of water taken for residential and industrial uses eventually returns to the aquifer, but only about one-half of the water used for irrigation is reusable. Evaporation, evapotranspiration from plants, and delivery losses such as from leaky pipes forever remove the rest from the water cycle.
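A back-of-the-envelope calculation shows why these percentages matter. Under the article's figures (70% of withdrawals go to irrigation, roughly 90% of residential and industrial water returns, and only about half of irrigation water is reusable), irrigation accounts for the overwhelming majority of consumptive water loss. The sketch below works through that arithmetic with a hypothetical withdrawal total.

    # Back-of-the-envelope consumptive-use arithmetic (illustrative).
    total = 100.0                        # hypothetical units withdrawn

    irrigation = 0.70 * total            # 70% of withdrawals
    other = total - irrigation           # residential + industrial

    irrigation_lost = 0.50 * irrigation  # ~half not reusable -> 35 units
    other_lost = 0.10 * other            # ~10% not returned  -> 3 units

    share = irrigation_lost / (irrigation_lost + other_lost)
    print(f"Irrigation share of consumptive loss: {share:.0%}")  # ~92%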

"Much larger irrigated areas might mean extending agricultural land toward new ecosystems or non-cultivated areas with the consequent loss of biodiversity, which might also be larger than expected," Puy said. "At the same time, needing more water for irrigation means less water for other sectors and therefore more stress on water resources than expected."

There also could be a much higher amplification of climate change, which current climate models do not account for, Puy said. Previous research has shown that irrigation may influence climate by altering surface temperatures and the amount of water vapor in the atmosphere, both of which are critical components of climate modeling. These factors have an impact on cloud formation and the amount of solar radiation that is either contained within the atmosphere or reflected back into space.

The climate effects of irrigation also include greenhouse gases released through producing and operating irrigation machinery. The most common modern equipment is the center-pivot system: wheeled tubes outfitted with spray guns or dripping faucet heads that rotate around a central water source.

"Much larger irrigated areas means that predictions of agricultural gas emissions might also be much lower than they will be in reality," Puy said "More irrigated areas means investing on irrigation machinery and energy consumption, leading to the consumption of fossil-energy reservoirs and the release of CO2."

Finally, irrigated agriculture also increases soil total nitrogen and carbon due to the addition of fertilizers and manure. Nitrate leaching can taint groundwater and ammonia can be volatilized from fertilizers, limiting the availability of potable water, Puy said.

By drawing attention to the underestimation of irrigated land by current models, Puy, Lo Piano and Saltelli hoped to increase the accuracy of all studies that rely on those estimates to project how the climate and environment could be affected by the very real challenge of feeding everyone on Earth -- and how the state of the environment could shape the outcome of that effort.

Credit: 
Princeton University

MU researcher identifies four possible treatments for COVID-19

image: Dr. Singh is an associate professor at the MU College of Veterinary Medicine.

Image: 
MU College of Veterinary Medicine

A researcher at the University of Missouri has found that four antiviral drugs, including remdesivir, a drug originally developed to treat Ebola, are effective in inhibiting the replication of the coronavirus that causes COVID-19.

Kamlendra Singh, an associate professor in the College of Veterinary Medicine, and his team used computer-aided drug design to examine the effectiveness of remdesivir, 5-fluorouracil, ribavirin and favipiravir in treating COVID-19. Singh found that all four drugs were effective in inhibiting, or blocking, the coronavirus' RNA polymerase from making genomic copies of the virus.

"As researchers, we have an obligation to search for possible treatments given that so many people are dying from this virus," Singh said. "These antiviral drugs, if they turn out to be effective, all have some limitations. But in the midst of a global pandemic, they are worth taking a deeper look at because based on our research, we have reason to believe that all of these drugs could potentially be effective in treating COVID-19."

The coronavirus (SARS-CoV-2) that causes COVID-19, like all viruses, can mutate and develop resistance to antiviral drugs. Therefore, further testing in a laboratory setting and in patients is needed to better evaluate how the proposed treatments interact with the virus' RNA polymerase.

"Our goal is to help doctors by providing options for possible treatments of COVID-19, and to ultimately contribute in improving the health outcomes of patients suffering from the infectious disease," Singh said. "As researchers, we are simply playing our part in the fight against the pandemic."

Singh's research is an example of translational medicine, a key component of the University of Missouri System's NextGen Precision Health Initiative. The NextGen initiative aims to improve large-scale interdisciplinary collaboration in pursuit of life-changing precision health advancements and research.

Credit: 
University of Missouri-Columbia