Earth

Parental care drives great crested grebes to time their egg-laying around gulls

Ornithologists from St Petersburg University, Elmira Zaynagutdinova and Yuriy Mikhailov, studied great crested grebes (Podiceps cristatus) nesting in the 'North Coast of the Neva Bay' nature reserve. It turned out that some birds have learned to shift the timing of egg incubation in order to breed alongside black-headed gulls and black terns. This strategy allows them to use the 'protection services' of their neighbours.

As a rule, the great crested grebes nesting in the shallow waters of the Neva Bay follow one of two breeding scenarios. They build their nests either in reed beds, where it is rather difficult for predators to sneak in, or in more dangerous places in open water. In the latter case, the birds have to protect their eggs from enemies somehow - mostly from hooded crows, which are ready to destroy a nest at the first opportunity. This is why great crested grebes often settle alongside more aggressive patrons - for example, black-headed gulls or black terns.

'As a result, great crested grebes begin to nest in the vicinity of the larid colony much earlier than in reed beds. Incubation of all three species in the two neighbouring colonies being studied started on the same date - 27 May. This made it possible for the great crested grebes to nest under the protection of the other birds, and therefore to protect their nests from predators as effectively as in the reed bed,' said Elmira Zaynagutdinova, PhD, an author of the research.

As the researchers note, the great crested grebes from the open-water colonies made a 'false start' about five days earlier than those that had chosen reed beds. The exact nesting dates of the great crested grebes in the Neva Bay vary from year to year with the weather, but they fall roughly between 20 May and 15 June. At present, these cute birds with black crests and red collars can be found on ponds and lakes almost throughout Eurasia, Australia and New Zealand, and even in some places in Africa. Yuriy Mikhailov, a PhD student at St Petersburg University and co-author of the research, noted that the structure of their nests is also of particular interest.

'They build floating structures, so it is not so important for them to have an overgrown shore nearby,' noted Yuriy Mikhailov. 'A great crested grebe's nest usually consists of old reeds, typha and sometimes lily leaves. It is located over water of sufficient depth - from where it is convenient to dive straight in and hunt for fish.'

Credit: 
St. Petersburg State University

Scientists map huge undersea fresh-water aquifer off US Northeast

image: An electromagnetic receiver used in the study being deployed off the research vessel Marcus Langseth.

Image: 
Kerry Key

In a new survey of the sub-seafloor off the U.S. Northeast coast, scientists have made a surprising discovery: a gigantic aquifer of relatively fresh water trapped in porous sediments lying below the salty ocean. It appears to be the largest such formation yet found in the world. The aquifer stretches along the shore at least from Massachusetts to New Jersey, extending more or less continuously out about 50 miles to the edge of the continental shelf. If it were on the surface, it would form a lake covering some 15,000 square miles. The study suggests that such aquifers probably lie off many other coasts worldwide, and could provide desperately needed water for arid areas that are now in danger of running out.

The researchers employed innovative measurements of electromagnetic waves to map the water, which remained invisible to other technologies. "We knew there was fresh water down there in isolated places, but we did not know the extent or geometry," said lead author Chloe Gustafson, a PhD candidate at Columbia University's Lamont-Doherty Earth Observatory. "It could turn out to be an important resource in other parts of the world." The study appears this week in the journal Scientific Reports.

The first hints of the aquifer came in the 1970s, when companies drilled off the coastline for oil, but sometimes instead hit fresh water. Drill holes are just pinpricks in the seafloor, and scientists debated whether the water deposits were just isolated pockets or something bigger. Starting about 20 years ago, study coauthor Kerry Key, now a Lamont-Doherty geophysicist, helped oil companies develop techniques that use electromagnetic imaging of the sub-seafloor to look for oil. More recently, Key decided to see if some form of the technology could also be used to find fresh-water deposits. In 2015, he and Rob L. Evans of Woods Hole Oceanographic Institution spent 10 days on the Lamont-Doherty research vessel Marcus G. Langseth making measurements off southern New Jersey and the Massachusetts island of Martha's Vineyard, where scattered drill holes had hit fresh-water-rich sediments.

They dropped receivers to the seafloor to measure electromagnetic fields below, and the degree to which natural disruptions such as solar winds and lightning strikes resonated through them. An apparatus towed behind the ship also emitted artificial electromagnetic pulses and recorded the same type of reactions from the sub-seafloor. Both methods work in a simple way: salt water is a better conductor of electromagnetic waves than fresh water, so the fresh water stood out as a band of low conductivity. Analyses indicated that the deposits are not scattered; they are more or less continuous, starting at the shoreline and extending far out within the shallow continental shelf -- in some cases, as far as 75 miles. For the most part, they begin at around 600 feet below the ocean floor, and bottom out at about 1,200 feet.
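To see why that contrast is so easy to image, consider Archie's law, a standard petrophysical relation linking bulk sediment conductivity to pore-fluid conductivity. The sketch below is illustrative only -- the relation is textbook geophysics rather than the study's stated method, and every number is a round assumption, not a value from the survey:

```python
# Minimal sketch of the conductivity contrast behind EM detection of fresh water.
# Archie's law for fully saturated sediment: sigma_bulk = sigma_fluid * phi**m.
# All constants here are illustrative assumptions, not values from the study.

def bulk_conductivity(fluid_conductivity, porosity, cementation_exponent=2.0):
    """Bulk conductivity (S/m) of sediment whose pores hold a single fluid."""
    return fluid_conductivity * porosity**cementation_exponent

SEAWATER_S_PER_M = 5.0     # typical seawater conductivity
FRESHWATER_S_PER_M = 0.05  # typical terrestrial fresh water

for label, sigma_fluid in [("salty pores", SEAWATER_S_PER_M),
                           ("fresh pores", FRESHWATER_S_PER_M)]:
    sigma_bulk = bulk_conductivity(sigma_fluid, porosity=0.4)
    print(f"{label}: {sigma_bulk:.3f} S/m (resistivity ~{1 / sigma_bulk:.0f} ohm-m)")

# Fresh-water-filled sediments come out roughly 100x more resistive than
# seawater-filled ones -- a strong target for electromagnetic imaging.
```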

The consistency of the data from both study areas allowed the researchers to infer with a high degree of confidence that fresh-water sediments continuously span not just New Jersey and much of Massachusetts, but also the intervening coasts of Rhode Island, Connecticut and New York. They estimate that the region holds at least 670 cubic miles of fresh water. If future research shows the aquifer extends further north and south, it would rival the great Ogallala Aquifer, which supplies vital groundwater to eight Great Plains states, from South Dakota to Texas.

The water probably got under the seabed in one of two different ways, say the researchers. Some 15,000 to 20,000 years ago, toward the end of the last glacial age, much of the world's water was locked up in mile-deep ice; in North America, the ice sheet extended through what is now northern New Jersey, Long Island and the New England coast. Sea levels were much lower, exposing much of what is now the underwater U.S. continental shelf. When the ice melted, sediments formed huge river deltas on top of the shelf, and fresh water got trapped there in scattered pockets. Later, sea levels rose. Up to now, the trapping of such "fossil" water has been the common explanation for any fresh water found under the ocean.

But the researchers say the new findings indicate that the aquifer is also being fed by modern subterranean runoff from the land. As water from rainfall and water bodies percolates through onshore sediments, it is likely pumped seaward by the rising and falling pressure of tides, said Key. He likened this to a person pressing up and down on a sponge to suck in water from the sponge's sides. Also, the aquifer is generally freshest near the shore, and saltier the farther out you go, suggesting that it mixes gradually with ocean water over time. Terrestrial fresh water usually contains less than 1 part per thousand salt, and this is about the value found undersea near land. By the time the aquifer reaches its outer edges, the salinity rises to 15 parts per thousand. (Typical seawater is 35 parts per thousand.)
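Those salinity figures permit a quick back-of-the-envelope check. Assuming simple two-end-member mixing between terrestrial fresh water and seawater (our simplification, not a model stated by the authors), the water at the aquifer's outer edge would be roughly 40 percent seawater:

```python
# Two-end-member mixing estimate; salinities in parts per thousand (from the article).
fresh, outer_edge, seawater = 1.0, 15.0, 35.0
seawater_fraction = (outer_edge - fresh) / (seawater - fresh)
print(f"{seawater_fraction:.0%} seawater at the aquifer's outer edge")  # -> 41%
```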

If water from the outer parts of the aquifer were to be withdrawn, it would have to be desalinated for most uses, but the cost would be much less than processing seawater, said Key. "We probably don't need to do that in this region, but if we can show there are large aquifers in other regions, that might potentially represent a resource" in places like southern California, Australia, the Mideast or Saharan Africa, he said. His group hopes to expand its surveys.

Credit: 
Columbia Climate School

Shaken and stirred: Scientists capture the deformation effect of shock waves on a material

image: Diffraction images of polycrystalline aluminum foil before laser irradiation (left image) and after shock wave propagation (right image). The laser-induced shock wave pattern is smoothed due to plastic deformation.

Image: 
Scientific Reports

Understanding how shock waves affect structures is crucial for advancements in materials science research, including safety protocols and novel surface modifications. Using X-ray diffraction probes, scientists at the Institute of Materials Structure Science of KEK, the Tokyo Institute of Technology, Kumamoto University, and the University of Tsukuba studied the deformation of polycrystalline aluminum foil when subjected to a laser-driven shock wave.

The foundations of engineering lie in understanding and manipulating the structure of materials to harness their properties in creative ways. Interactions between materials take place via the exchange of forces, so predicting a material's ability to withstand a force, and how that force propagates through it, is central to developing structures with enhanced strength.

If an instantaneous strong force acting on a material produces a shock wave, atoms may become displaced or dislocated. Like a rubber band, if the external force is not too great, the internal forces can resist and the material returns to its original state (elastic deformation). Beyond a certain limit, however, the force causes permanent deformation (plastic deformation) or even structural failure of the material.

A unit cell is the smallest regularly repeating three-dimensional atomic structure that reflects the overall symmetry of a crystal, and studying its displacement can provide rich insights. However, observing processes at the atomic scale is very difficult. This is where X-ray diffraction comes to the rescue. Envision a camera that allows you to capture events taking place at the atomic scale. When an X-ray encounters an atom, it is absorbed and then re-emitted. Because the atoms in a crystal are arranged in an orderly fashion, the re-emitted waves are scattered, or diffracted, in an equally orderly way. Depending on the size, spatial arrangement, and spacing of the atoms, the wave is scattered in different directions with different intensities. The atomic structure is thus captured as signals - like a photograph of the crystal during and after the shock wave passes - that can be used to decode crystal deformation.
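The geometry behind this atomic-scale 'camera' is summarized by Bragg's law, a standard diffraction relation that the release does not spell out: constructive interference occurs when $n\lambda = 2d\sin\theta$, where $\lambda$ is the X-ray wavelength, $d$ the spacing between atomic planes, $\theta$ the scattering angle and $n$ an integer. A shock-induced change in $d$, or a rotation of a grain, therefore shifts or smears the recorded diffraction spots.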

Motivated by this, the researchers conducted an experiment to observe the deformation process of polycrystalline aluminum foil subjected to a laser-driven shock wave. The disturbance was captured as diffraction spots of an X-ray beam and compared simultaneously with the diffraction pattern of the pre-shock crystal (Fig. 1). They found that large grains of aluminum were rotated, compressed elastically, and reduced in size along the wave direction. As the wave propagated deeper into the sample, the diffraction spots smoothed and broadened, and the original diffraction spots began to disappear, replaced by a new set of spots (Fig. 2). "We observed grain refinement and structural changes of the polycrystalline metal, which increased with the propagation of the laser-driven shock wave. This, in turn, enabled the study of microstructural deformation in plastic shock flows from the atomic to the mesoscale level," stated Dr. Kohei Ichiyanagi of the High Energy Accelerator Research Organization (KEK) and Jichi Medical University.

Contemporary research on post-shock structural changes in materials often fails to capture the process of wave dissipation and the distribution of defects. This research changes the status quo by providing a method to observe grain refinement and structural changes, including surface hardness and modification, of polycrystalline metal during shock wave loading. Optimistic about the potential of this research, Professor Kazutaka G. Nakamura of the Tokyo Institute of Technology said, "Our technique will be valuable for revealing mechanisms of microstructural change for various alloys and ceramics based on dynamic processes."

This goes to show the creative ways in which we can expand the reach of what we are able to see: this time, it's X-rays being used to capture how particles are shaken and stirred!

Credit: 
Tokyo Institute of Technology

Ice lithography: opportunities and challenges in 3D nanofabrication

image: IL process flow. Water ice acts as a positive-tone lithography resist, and alkane ice demonstrates a negative-resist-like capability.

Image: 
©Science China Press

Nanotechnology and nanoscience are enabled by nanofabrication. Electron-beam lithography (EBL), which makes patterns down to a few nanometers, is one of the fundamental pillars of nanofabrication. In the past decade, significant progress has been made in electron-beam-based nanofabrication, such as the emerging ice lithography (IL) technology, in which thin films of ice are used as resists and patterned by a focused electron beam. The entire IL nanofabrication process is sustainable and streamlined because the spin-coating and chemical development steps commonly required for EBL resists become unnecessary.

A new review, "Ice lithography for 3D nanofabrication", by Prof. Min Qiu's group at Westlake University has been published in Science Bulletin. In it, the authors present the current status and future perspectives of ice lithography (IL). Different ice resists and IL instrument designs are also introduced. Special emphasis is placed on the advantages of IL for 3D nanofabrication.

The IL technology was first proposed by the Nanopore group at Harvard University in 2005. Water ice was the first ice resist identified for IL, and it remains the only positive-tone one so far. As shown in Fig. 1, water ice is easily removed within the electron-beam exposure area. Organic ices condensed from simple organic molecules, such as alkanes, demonstrate a negative-resist-like capability, meaning that only the exposed patterns remain on the substrate after the sample is warmed to room temperature.

Although IL research is still in its infancy, the method has already exhibited great advantages for efficient 3D nanofabrication. Unlike spin-coated EBL resists, ice resists coat every accessible cold surface of the sample during ice deposition. IL can therefore process samples with non-flat, irregular surfaces - for example, patterning AFM probes - as well as tiny, fragile nanostructures such as suspended single-walled carbon nanotubes. Benefiting from the very low sensitivity of water ice, IL also allows in situ observation of nanostructures beneath the ice resist through SEM imaging. This feature not only improves alignment accuracy but also simplifies the processing steps in fabricating 3D layered nanostructures.

As cutting-edge instrument research and development is essential for advancing the IL technology, the review finally discusses the evolution of IL instruments and provides clear guidance on the construction of a dedicated IL instrument. With the discovery of new functional ice resists in the future, more cutting-edge, interdisciplinary research is expected to exploit the potential of IL.

Credit: 
Science China Press

Discovery of the cell fate switch from neurons to astrocytes in the developing brain

image: In the process of fetal brain development, neural stem cells generate both neurons and astrocytes. Neurons are formed first (left) and astrocytes later (right). The present study has revealed that the cell fate switch from neurons to astrocytes is FGF signaling.

Image: 
Kanazawa University

Background

Neurons and astrocytes are prominent cell types in the cerebral cortex. Neurons are the primary information processing cells in the brain, whereas astrocytes support and modulate their functions. For sound functioning of the brain, it is crucial that proper numbers of neurons and astrocytes are generated during fetal brain development. The brain could not function correctly if only neurons or astrocytes were generated.

During fetal brain development, both neurons and astrocytes are generated from neural stem cells, which give rise to almost all cells in the cerebral cortex (Figure 1). One characteristic of this developmental process is that neural stem cells first generate neurons and only afterwards start generating astrocytes (Figure 1). The "switch" that changes the differentiation fate of neural stem cells from neurons to astrocytes has attracted much attention, since it is key to the generation of proper numbers of neurons and astrocytes. However, it has remained largely unknown.

Results

The research group at Kanazawa University shows that the switch determining which of the two cell types neural stem cells in the cerebral cortex generate is the FGF signaling pathway (Figure 1). More specifically, enhancing FGF signaling by introducing FGF into the cerebral cortex caused cells destined to become neurons to differentiate into astrocytes instead (Figures 2, 3). Conversely, suppressing FGF signaling caused cells destined to become astrocytes to differentiate into neurons (Figure 3).
The present study has thus elucidated the mechanism responsible for determining the correct numbers of neurons and astrocytes during development of the fetal brain.

Future prospects

The research group has discovered the switch that determines the fate of cells generated from neural stem cells in the developing cerebral cortex, i.e. neurons or astrocytes; this switch involves the FGF signaling pathway. This may help in understanding the pathology of brain disorders caused by unbalanced numbers of neurons and astrocytes, by identifying which of these disorders stem from abnormal FGF signaling.

Credit: 
Kanazawa University

High school seniors losing trust in law enforcement, justice system

WASHINGTON -- High school seniors' confidence in law enforcement and the justice system significantly declined from 2006 to 2017 while their faith in religious organizations and schools was comparatively higher and more stable, according to research published by the American Psychological Association.

"We found that adolescents' trust in law enforcement in particular declined more rapidly in recent years than their confidence in any other authority," said Adam D. Fine, PhD, of Arizona State University's Watts College of Public Service and Community Solutions, and lead author of the study. "Our results contradicted the common stereotype that teens have 'anti-authority' attitudes because trust in schools and religious organizations was not affected. This shows that by 12th grade, teens are clearly able to differentiate among different types of authority."

The study was published in the journal Developmental Psychology.

Several previous studies that included surveys of adults and high school seniors have shown that trust in U.S. authority institutions, such as news media, businesses, religious institutions and Congress, has declined in recent decades, reaching all-time or near-all-time lows by 2012, according to Fine.

Fine's co-authors were Emily Kan, a graduate student in psychological science at the University of California, Irvine, and Elizabeth Cauffman, PhD, a professor of psychological science at the University of California, Irvine. The team wanted to examine whether teens exhibit a general anti-authority attitude.

"As children become teenagers, they begin to question authority more frequently and more skillfully," said Fine. "Adolescents may critically evaluate authority figures in all aspects of their lives, including at home, at school and in their communities."

The researchers used data from the Monitoring the Future study, which consists of annual, self-reported surveys of 12th grade students in the 48 contiguous United States. Data from more than 10,000 teens was used from four time periods: 2006 to 2008, 2009 to 2011, 2012 to 2014 and 2015 to 2017.

Teens were asked to rate how good or bad a job was being done for the country by police and law enforcement agencies, the justice system, public schools and churches/religious organizations on a scale of one (very poor) to five (very good).
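As a rough illustration of how such ratings are aggregated (the data layout, column names and random values below are invented for illustration; they reflect neither the Monitoring the Future file format nor the study's findings):

```python
# Hypothetical sketch: average 1-5 confidence ratings by institution and cohort.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000  # toy sample; the study used more than 10,000 respondents
df = pd.DataFrame({
    "year": rng.integers(2006, 2018, n),   # survey year, 2006-2017
    "police": rng.integers(1, 6, n),       # 1 = very poor ... 5 = very good
    "justice_system": rng.integers(1, 6, n),
    "public_schools": rng.integers(1, 6, n),
    "religious_orgs": rng.integers(1, 6, n),
})

# Group survey years into the study's four cohorts, then average each rating.
bins, labels = [2006, 2009, 2012, 2015, 2018], ["2006-08", "2009-11", "2012-14", "2015-17"]
df["cohort"] = pd.cut(df["year"], bins=bins, labels=labels, right=False)
institutions = ["police", "justice_system", "public_schools", "religious_orgs"]
print(df.groupby("cohort", observed=True)[institutions].mean().round(2))
```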

Fine and his co-authors found that over those 11 years, adolescents tended to have the most confidence in religious institutions, followed by public schools, and then law enforcement, while they viewed the justice system least favorably.

From 2015 to 2017, however, there was a critical shift as teens perceived both law enforcement and the justice system equally negatively, he said.

"Given the current conversations surrounding unjust policing in the United States, we were not surprised to find that youth do, in fact, differentiate among authorities, and something unique is happening when it comes to their perceptions of law enforcement and the justice system," said Fine. "However, America's teens do not have a ubiquitous 'anti-authority' attitude as their confidence in social institutions remained higher and more stable."

The researchers also examined differences among racial and ethnic groups.

Black teens reported the lowest confidence in law enforcement and the justice system, followed by Latino youths, then white adolescents.

"In direct contrast to our findings about legal authorities, black youth reported significantly more confidence in social institutions, more so than their white peers," said Kan.

These findings surprised the researchers. They expected that because racial and ethnic minority students in the U.S. tend to receive more and harsher discipline in school, youth of color would perceive schools as being more closely aligned with legal authorities, said Fine.

"This may indicate that despite the statistics on disproportionate discipline, youth of color may still perceive schools as generally supportive social authorities. Considering black youth reported more positive perceptions of social authorities than white youth, the fact that their perceptions of law enforcement and the justice system are so poor becomes even more salient," he said.

The authors believe these findings highlight the immediate need for policymakers and officials in law enforcement and the justice system to focus on gaining back adolescents' trust.

"Considering that negative perceptions of legal authority have been linked to involvement in crime and crime reporting, the real-world implications are quite clear. Efforts must be made to improve biased and unjust policing practices," Cauffman said.

Credit: 
American Psychological Association

New research backs Australian regulatory decision on poppers

Young gay and bisexual men are frequent users of alkyl nitrites, or poppers, but few show signs of addiction, risky consumption habits or other psychosocial problems, a study shows.

A survey of more than 800 men aged 18 to 35 found little evidence of typical dependency characteristics, including health, social, legal and financial problems, and no correlation between popper use and mental health or psychological stress.

Dr Daniel Demant, public health researcher at the University of Technology Sydney (UTS), conducted the study and said he welcomed the decision by Australia's Therapeutic Goods Administration (TGA) to step back from prohibiting poppers. The TGA instead elected to classify them as a Schedule 3 drug, available over the counter in pharmacies from February 2020.

An interim decision by the TGA in 2018 recommended poppers be classed as a prohibited substance, in the same category as methamphetamine and heroin, which would have made "overnight criminals" of the estimated 100,000 plus Australian users.

"What we see with this research is that poppers are a very commonly used drug in the LGBT community, both recently and over their lifetime," Dr Demant said.

"Most of the users are already oppressed or marginalised based on their social identity as gay or bisexual men. This creates a question as to whether there would have been a discriminatory element in banning a substance with such a low risk profile.

"Banning a substance that is used by so many people would create a new class of criminals, basically overnight."

Currently, poppers are available on prescription from pharmacies, but they are more commonly bought illicitly, in sex-on-premises venues and LGBT bars. A vial containing 25-30 mL of the clear, strong-smelling fluid, possibly labelled as "VHS tape cleaner", "leather cleaner" or "room deodoriser", sells for up to $50, despite costing a couple of cents to manufacture.

The new TGA decision to regulate poppers rather than banning them hopefully paves the way for some measure of quality control as well as the removal of the "extreme profit margin" that exists now, Dr Demant said.

Dr Demant said that with poppers becoming a pharmacy-only medicine, safety standards would have to be met and pharmacy staff could provide guidance in cases where poppers might react badly with users' other medications, particularly Viagra.

"We could stop pretending that poppers are sold for anything other than getting people high. And once we do offer it in pharmacies, we would have something made to the highest standards for people to use."

Credit: 
University of Technology Sydney

Epilepsy and sudden death linked to bad gene

image: Keep on Breathing: People with epilepsy can stop breathing and die suddenly, with or without a seizure. A group of UConn neuroscientists traced the problem to a gene that causes both seizures in the cortex and respiratory irregularities in the brainstem.

Image: 
Dan Mulkey and Virge Kask, University of Connecticut.

In sudden death in epilepsy, people stop breathing for no apparent reason and die. Now, a group of UConn neuroscientists have a lead as to why, they report in the journal eLife.

"People with epilepsy have a high mortality rate, but it's mysterious," says Dan Mulkey, a neuroscientist in UConn's physiology and neurobiology department.

More than one in every 1,000 people with epilepsy dies each year from what's called sudden unexpected death in epilepsy (SUDEP). No one knows why.

The explanation usually given is that the patient had a seizure that killed them. But seizures happen in the cortex, the top of the brain, and life-sustaining processes like breathing are controlled somewhere else entirely: the brainstem, the very bottom part of the brain that connects to the spinal cord. The two parts of the brain are quite distant from each other.

"It's like, if the seizure is in New York, the brainstem is in San Francisco," Mulkey says.

Many neurologists argue that a particularly bad seizure can travel down through the brain from the cortex to the brainstem to cause breathing or heartbeat malfunction, and that's what kills in SUDEP. But Mulkey doesn't buy it. People die of SUDEP without having an obvious seizure, and epilepsy patients can have breathing problems in the absence of seizures.

Instead, Mulkey and his colleagues, graduate students Fu-Shan Kuo and Colin Clearly, wondered if there was a genetic basis for SUDEP. Perhaps the same genetic mutation that causes the seizures also disrupts the cells in the brainstem that control breathing.

Kuo raised mice with the human mutation for a severe form of epilepsy called Dravet syndrome. Dravet syndrome is caused by mutations in a gene that shapes the channels through which sodium moves in and out of cells in the brain. If the sodium channels don't function properly, cells can get overexcited. One cell's overexcitement can travel through the brain like hysteria through a crowded stadium, stampeding into a seizure.

The gene mutated in Dravet syndrome is called sodium channel gene 1a, or Scn1a. It's considered a super-culprit for epilepsy, with more than 1,200 different Scn1a mutations identified. The severity of the epilepsy caused by Scn1a depends on whether the mutation causes partial or complete loss of the sodium channel's function. The Dravet mutation is on the severe end of the spectrum. People with Dravet syndrome tend to have dramatic seizures, exacerbated by hot weather, and the syndrome is very hard to control with anti-epileptic medications. SUDEP is sadly a frequent way for people with Dravet syndrome to die.

There's a somewhat paradoxical part of Dravet syndrome, too: this Scn1a mutation makes the sodium channels less active, not more. Instead of making cells overactive, it makes them underactive. But there's a catch. This mutation mostly affects inhibitory cells - that is, cells in charge of calming the brain down. They're the stadium bouncers, so to speak. And if the bouncers are asleep on the job, the overexcited neurons can stampede uninhibited.

To understand how this might lead to SUDEP, Kuo wanted to test two things: first, whether the mice with the Dravet syndrome mutation show breathing problems and die prematurely of SUDEP, and second, whether the cells in the part of the mice's brainstem that controls breathing were normal or were somehow perturbed by the mutation.

The first question was answered quickly: the mice with Dravet syndrome had bad seizures that became more severe when the mice got hot, exactly like humans with Dravet syndrome. They tended to die very young, in a manner similar to SUDEP; none lived much past three weeks.

The second question took longer to answer, but there were early clues that Kuo and Mulkey were on to something. The mice with Dravet syndrome had disordered breathing. They sometimes hypoventilated (breathed too little) for no apparent reason. Other times they would have long apneas, or pauses between breaths. And these mice didn't breathe more in response to high carbon dioxide levels in the air, the way humans and normal mice do.

"We felt really good that our model was reflecting the human condition," Mulkey says.

The next step was to actually look at the mice's brainstems and see if something was wrong.

When Kuo zoomed in on the part of the brainstem that controls breathing, she saw that the inhibitory cells - the stadium bouncers of the brainstem - were definitely less active than they should have been. This led the excitatory neurons to run wild, and constantly tell the part of the brain that generates the breathing rhythm to push faster. But shouldn't this lead to increased breathing, not stopping?

There is definitely something wrong with the breathing circuit in the brainstem in these mice, but Mulkey and Kuo cannot pinpoint the exact problem. So they're still on the case. The next steps will be to look at mice that only express the Scn1a mutation in the brainstem or only in the cortex, and see if they also have problems. If mice with a mutation in the cortex but not the brainstem don't have SUDEP, that would argue against the 'seizure descending from cortex to brainstem' hypothesis. The researchers also plan on looking at other parts of the breathing circuit to see whether other parts have gone haywire, too. Eventually, they hope to identify a key player that can be calmed - or prodded - to prevent the breathing system from breaking down, and ultimately save the lives of people with epilepsy.

Credit: 
University of Connecticut

Burnout: Sleepless firefighters at risk of exhaustion and mental health conditions

Sleep disturbances and mental health challenges are putting close to half of America's firefighters at high risk of emotional fatigue and exhaustion, new research shows.

The research was conducted by Monash University in Australia in collaboration with Brigham and Women's Hospital in Boston, USA.

Of the 6,307 firefighters from 66 fire departments across the USA who took part in this cross-sectional study, 49% exhibited high levels of physical and emotional burnout in at least one area.

Firefighters who screened positive for a sleep disorder, in particular insomnia, had a threefold increased risk of emotional exhaustion. Those with a self-reported diagnosis of post-traumatic stress disorder (PTSD), depression or anxiety had up to four times the risk of burnout.

Sleepiness and short sleep, even in firefighters who did not screen positive for a sleep disorder, were also associated with high levels of burnout.

The collaboration was led by Dr Alexander Wolkow, Post-Doctoral Research Fellow and Professor Shantha Rajaratnam in the Turner Institute for Brain and Mental Health at Monash University, and Dr Laura Barger and Dr Charles Czeisler in the Division of Sleep and Circadian Disorders at Brigham and Women's Hospital.

Researchers investigated whether sleep disorder risk and mental health outcomes in firefighters were associated with burnout, particularly emotional exhaustion, and examined the role of sleep at work in these relationships.

The study identifies the physical and emotional impact that sleep loss and exhaustion can have on firefighters' ability to respond to infernos and other incidents where lives and property are in danger. The study was published online in the Journal of Sleep Research.

"Firefighters are frequently exposed to sleep restriction due to their work schedules, which typically involve 24-hour shifts. These schedules may prevent firefighters from obtaining sufficient sleep in order to feel rested," Dr Wolkow said.

"Inadequate sleep during and after work, and into rest periods, may impair firefighters' ability to recover from occupational demands, potentially explaining the heightened burnout risk."

Almost half of the firefighters surveyed reported having less than six hours of sleep in a 24-hour period when working overnight (between 10pm and 8am), including 24-hour shifts, and 31% reported short sleep patterns the day after overnight work or a 24-hour shift.

In an important step forward, the researchers provide evidence that short sleep during an overnight shift mediates the link between sleep disorder risk and burnout, specifically levels of emotional exhaustion and depersonalisation.
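For readers unfamiliar with the term, "mediates" has a specific statistical meaning: the effect of sleep disorder risk on exhaustion runs partly through short sleep. Below is a minimal simulated sketch of a Baron-Kenny-style mediation test; the variable names, coefficients and simulated data are invented for illustration and are not the study's actual models:

```python
# Toy mediation analysis: risk -> short_sleep -> exhaustion (all data simulated).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
risk = rng.binomial(1, 0.3, n)                   # screened positive for a sleep disorder
short_sleep = 0.8 * risk + rng.normal(0, 1, n)   # short sleep on overnight shifts
exhaustion = 0.5 * short_sleep + 0.3 * risk + rng.normal(0, 1, n)
df = pd.DataFrame({"risk": risk, "short_sleep": short_sleep, "exhaustion": exhaustion})

total = smf.ols("exhaustion ~ risk", df).fit()                 # total effect (path c)
mediator = smf.ols("short_sleep ~ risk", df).fit()             # path a
direct = smf.ols("exhaustion ~ risk + short_sleep", df).fit()  # paths c' and b
indirect = mediator.params["risk"] * direct.params["short_sleep"]  # mediated effect a*b
print(f"total={total.params['risk']:.2f}  direct={direct.params['risk']:.2f}  "
      f"indirect={indirect:.2f}")
```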

"Given that 84.4% of our sample worked extended duration shifts of 24 or more hours, our findings highlight the need to maximise sleep opportunities during overnight shifts to reduce burnout," Dr Wolkow said.

"For instance, fire department policies that encourage sleep - such as, permitting and encouraging napping and having black-out shades for sleep quarters, may increase firefighters' sleep at work.

"With the high cost of burnout to the individual and organisation on the rise, we suggest that reducing sleep and mental health disturbances should be a focus of fire departments' occupational health screening programs, along with trialling interventions designed to maximise sleep."

Credit: 
Monash University

Do ice cores help to unravel the clouds of climate history?

image: INDA (Ice Nucleation Droplet Array) is an instrument in which many drops of water are cooled in a controlled manner. Through a glass window, it can be observed from above how many drops freeze at each temperature. The number of frozen drops is then converted into the concentration of ice nucleating particles.

Image: 
Photo: Heike Wex, TROPOS

Leipzig/Copenhagen/Villigen/Beijing. For the first time, an international research team led by the Leibniz Institute for Tropospheric Research (TROPOS) has investigated atmospheric ice nucleating particles (INPs) in ice cores, which can provide insights into the type of cloud cover over the Arctic during the last 500 years. These INPs play an important role in the formation of ice in clouds and thus have a major influence on the climate. So far, however, there are only a few measurements, and they reach back just a few decades. The new method could help extract information about historical clouds from climate archives and thus close large gaps in knowledge in climate research.

The team from TROPOS, the University of Copenhagen, the University of Bern and the Paul Scherrer Institute writes in the journal Geophysical Research Letters that findings on variations in the concentrations of ice nucleating particles in the atmosphere over the past centuries would help to better understand future climate changes.

Climate archives are important for reconstructing the past climate and making statements about the development of the climate in the future. In Europe, the weather has only been observed and recorded regularly for around 300 years. For the time before, and for locations without a weather station, research depends on conclusions from natural archives. Paleoclimate research uses a wide variety of natural archives such as tree rings, ice cores or sediments. In recent decades, a number of methods have been developed and refined that use indirect indicators (climate proxies) to draw conclusions about climate factors such as temperature, precipitation, volcanic eruptions and solar activity. Clouds are responsible for precipitation, among other things, but they are very elusive and therefore difficult to study. Yet the number, type and extent of clouds and their ice content have a big influence on the radiation budget of the atmosphere, the temperature on the ground and precipitation, so information about the parameters affecting clouds is important for climate reconstruction.

A method to improve our knowledge of clouds and their role in climate history is now presented by an international research team from Germany, Denmark and Switzerland. The team has reconstructed the concentrations of ice nucleating particles (INP) from ice cores for the first time; such measurements could be used in the future to reconstruct cloud cover. "Ice formation in mixed-phase clouds is mainly caused by heterogeneous ice formation, i.e. INP are necessary to stimulate the freezing of supercooled cloud droplets. The number and type of these particles therefore influence precipitation, lifetime and radiation properties of the clouds. In the laboratory, we were able to show that two types of particles are particularly suitable for this purpose: mineral dust from the soil as well as various biological particles such as bacteria, fungal spores or pollen," explains Dr. Frank Stratmann, head of the clouds working group at TROPOS.

Ice cores are often used to reconstruct various climate parameters such as temperature, precipitation or volcanic eruptions over thousands of years. For the now published study, the team was able to draw on parts of two ice cores from the Arctic: the core "Lomo09" was drilled on the Lomonosovfonna glacier on Svalbard at an altitude of 1200 metres in 2009, and the ice core "EUROCORE" was elaborately extracted in 1989 from the summit of the Greenland Ice Sheet at an altitude of over 3000 metres. The frozen samples of these cores were sent to Leipzig, where they were examined for INP. Small samples of the ice were melted and the melt water divided into many small drops of 1 and 50 microliters. These drops were placed in two experimental setups, each with almost 100 tiny troughs, and were then cooled down in a controlled manner. Both setups had been used in previous studies: LINA (Leipzig Ice Nucleation Array) and INDA (Ice Nucleation Droplet Array) are instruments in which many drops of water are cooled in a controlled manner. Through a glass window, it can be observed from above how many drops freeze at each temperature. The number of frozen drops is then converted into the concentration of ice nucleating particles. "In 2015, US researchers derived atmospheric INP concentrations from snow and precipitation water. Our approach was that what works for precipitation should also work for ice samples. And so we were the first to show that historical ice nuclei concentrations can also be extracted from ice cores," says Markus Hartmann of TROPOS, who carried out the investigations as part of his doctoral thesis.
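The conversion from frozen-drop counts to an INP concentration typically uses the classic cumulative formula of Vali (1971), which assumes the drops freeze independently of one another. A minimal sketch, with illustrative sample values rather than data from the study:

```python
# Vali (1971) conversion: cumulative INP concentration per litre of melt water.
import numpy as np

def inp_concentration(n_frozen, n_total, drop_volume_litres):
    """INP per litre active at or above the temperature where n_frozen drops froze."""
    frozen_fraction = n_frozen / n_total
    return -np.log(1.0 - frozen_fraction) / drop_volume_litres

# Example: 30 of 96 drops of 50 microlitres frozen by -15 degC (illustrative numbers)
print(f"{inp_concentration(30, 96, 50e-6):.0f} INP per litre")  # ~7500
```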

This opens up new possibilities for paleoclimate research. Since the 1930s, countless ice cores have been extracted from glaciers all over the world and used to reconstruct the climate of the past. Information on the cloud phase (i.e. whether a cloud contained ice, liquid water, or both) was not available, however. The study by the polar and atmospheric researchers is a first step in this direction. Since the team did not have a continuous ice core available, it could only reconstruct the ice nucleating particles from individual years: 1735 to 1989 on Greenland and 1480 to 1949 on Svalbard. Overall, there was no trend in the ice nucleating particles over the last half millennium. "However, the Arctic has only been warming dramatically for about 25 years. The ice analyzed now was formed before this strong warming began. Measurements both of a continuous ice core and of newer ice would therefore be desirable," adds Markus Hartmann.

The fact that mankind has caused global warming through its emissions is undisputed among researchers. However, it is unclear how much the clouds in the atmosphere have changed as a result. Researchers therefore also hope to gain important insights from investigations into ice nucleating particles in the air. In autumn/winter 2016, a team from the University of Beijing, TROPOS, the University of Gothenburg and the Chinese Academy of Sciences measured the concentrations of ice nucleating particles in the air of the Chinese capital Beijing. However, they were unable to establish any connection with the high level of air pollution there. "We therefore assume that the ice nucleating particles in Beijing originate more from natural sources such as dust storms or the biosphere, both of which are known sources of ice nucleating particles, than from anthropogenic combustion processes," says Dr. Heike Wex of TROPOS. "But this is a snapshot of one place, and the indirect influence of humans should not be forgotten: changes in land use or droughts have an impact on dust in the atmosphere and on the biosphere, which in turn can lead to changes in clouds." In order to better understand the effects of humanity on the atmosphere, cloud researchers measure both at hotspots of air pollution, such as the metropolises of emerging countries, and in comparatively clean regions such as the polar regions.

So far relatively little is known about the quantity, properties and sources of ice nucleating particles in the Arctic, although they are an important factor in cloud formation and thus for the climate there. Long time series with monthly or weekly resolution, in particular, are practically non-existent, yet they are essential for investigating seasonal effects. In the journal Atmospheric Chemistry and Physics, an Open Access journal of the European Geosciences Union (EGU), an international team, also led by TROPOS, recently published an overview of the seasonal variations in ice nuclei concentrations in the Arctic. Samples from four research stations in the Arctic from 2012/2013 and 2015/2016 were investigated in the Leipzig Cloud Laboratory of TROPOS: Alert in Canada, Ny-Ålesund on Spitsbergen (Norway), Utqiagvik (Barrow) in Alaska (USA) and Villum (Station Nord) in Greenland (Denmark). "This gives us an overview of the seasonal variations: ice nucleating particles are most abundant in the air from late spring until early autumn, and least abundant in winter and early spring. This influences how the type of cloud cover in the Arctic changes during the year, and thus the influence of clouds on Arctic warming," explains Heike Wex. The researchers hope that the studies will lead to better predictions on climate change, as climate models are currently unable to adequately reflect the warming of the Arctic, which leads to uncertainties ranging from rising sea levels to regional climate changes in Europe.

The complex feedback processes between the biosphere and climate will also be part of the MOSAiC expedition: in September 2019, the German research icebreaker Polarstern, led by the Alfred Wegener Institute (AWI), will drift through the Arctic Ocean for one year. Supplied by additional icebreakers and aircraft, a total of 600 people from 17 countries will take part in the MOSAiC expedition. Together with international partners, the AWI is responsible for the five main research areas: sea ice physics and snow cover, processes in the atmosphere and in the ocean, biogeochemical cycles, and the Arctic ecosystem. TROPOS will play a leading role in two central measurements. Firstly, a remote sensing container will continuously explore the vertical aerosol and cloud distribution using lidar, radar and microwave radiometers for the entire ice drift. Secondly, a tethered balloon will measure the Arctic boundary layer as accurately as possible during one leg of the drift. Both measurements allow more or less direct detection of the vertical distribution of the ice nucleating particles. In addition, TROPOS will again investigate the surface microlayer of the sea and of melt ponds, which is likely to be a major source of ice nucleating particles in the Arctic.

Since 2016, the Collaborative Research Centre TR172 "Arctic Amplification" of the German Research Foundation (DFG) has been investigating the reasons why the Arctic warms much more than the rest of the Earth. In addition to the University of Leipzig, the research network also includes the universities of Bremen and Cologne, the Alfred Wegener Institute, the Helmholtz Centre for Polar and Marine Research (AWI) and the Leibniz Institute for Tropospheric Research (TROPOS) in Leipzig. Tilo Arnhold

Credit: 
Leibniz Institute for Tropospheric Research (TROPOS)

Advanced NMR at Ames Lab captures new details in nanoparticle structures

Advanced nuclear magnetic resonance (NMR) techniques at the U.S. Department of Energy's Ames Laboratory have revealed surprising details about the structure of a key group of materials in nanotechnology, mesoporous silica nanoparticles (MSNs), and the placement of their active chemical sites.

MSNs are honeycombed with tiny (about 2-15 nm wide) three-dimensionally ordered tunnels or pores, and serve as supports for organic functional groups tailored to a wide range of needs. With possible applications in catalysis, chemical separations, biosensing, and drug delivery, MSNs are the focus of intense scientific research.

"Since the development of MSNs, people have been trying to control the way they function," said Takeshi Kobayashi, an NMR scientist with the Division of Chemical and Biological Sciences at Ames Laboratory. "Research has explored doing this through modifying particle size and shape, pore size, and by deploying various organic functional groups on their surfaces to accomplish the desired chemical tasks. However, understanding of the results of these synthetic efforts can be very challenging."

Ames Laboratory scientist Marek Pruski explained that despite the existence of different techniques for functionalizing MSNs, no one knew exactly how the results differed. In particular, an atomic-scale description of how the organic groups are distributed on the surface was lacking until recently.

"It is one thing to detect and quantify these functional groups, or even determine their structure," said Pruski. "But elucidating their spatial arrangement poses additional challenges. Do they reside on the surfaces or are they partly embedded in the silica walls? Are they uniformly distributed on surfaces? If there are multiple types of functionalities, are they randomly mixed or do they form domains? Conventional NMR, as well as other analytical techniques, have struggled to provide satisfactory answers to these important questions."

Kobayashi, Pruski, and other researchers used DNP-NMR to obtain a much clearer picture of the structures of functionalized MSNs. "DNP" stands for "dynamic nuclear polarization," a method which uses microwaves to excite unpaired electrons in radicals and transfer their high spin polarization to the nuclei in the sample being analyzed, offering drastically higher sensitivity, often by two orders of magnitude, and even larger savings of experimental time. Conventional NMR, which measures the responses of the nuclei of atoms placed in a magnetic field to direct radio-frequency excitation, lacks the sensitivity needed to identify the internuclear interactions between different sites and functionalities on surfaces. When paired with DNP, as well as fast magic angle spinning (MAS), NMR can be used to detect such interactions with unprecedented sensitivity.
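For background, the theoretical ceiling on this gain is a textbook quantity not quoted in the release: the maximum DNP enhancement for protons is set by the ratio of the electron and nuclear gyromagnetic ratios, $\varepsilon_{\max} = \gamma_e / \gamma_{^1\mathrm{H}} \approx 658$, which is why routine gains of two orders of magnitude are realistic in practice.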

Not only did the DNP-NMR methods elicit the atomic-scale location and distribution of the functional groups, but the results disproved some of the existing notions of how MSNs are made and how the different synthetic strategies influenced the dispersion of functional groups throughout the silica pores.

"By examining the role of various experimental conditions, our NMR techniques can give scientists the mechanistic insight they need to guide the synthesis of MSNs in a more controlled way" said Kobayashi.

Credit: 
DOE/Ames National Laboratory

Landmark study signals shift in thinking about stem cell differentiation

image: David M. Gilbert, the J. Herbert Taylor Distinguished Professor of Molecular Biology.

Image: 
FSU Photography Services

A pioneering new study led by Florida State University biologists could fundamentally change our understanding of how embryonic stem cells differentiate into specific cell types.

The research, published today in the journal Stem Cell Reports, calls into question decades of scientific consensus about the behavior of embryonic stem cells as they transition to endoderm, a class of cell in animal embryos that gives rise to the digestive and respiratory systems.

David M. Gilbert, the J. Herbert Taylor Distinguished Professor of Molecular Biology in FSU's Department of Biological Science, said the study upends well-established notions of when embryonic stem cells chart their unalterable courses toward a fixed endoderm lineage -- in this case, their eventual fate as specific digestive or respiratory cells.

"This paper challenges the longstanding assumption that embryonic stem cells remain quite plastic and malleable during the earliest stages of cell commitment," Gilbert said. "We show that human embryonic stem cells can commit irreversibly to endoderm lineages -- liver and pancreas cells, for example -- very quickly."

The findings represent a new chapter in the study of embryonic stem cell differentiation, a field that could be key to helping scientists and clinicians unlock improved therapies for a range of diseases.

Using a sophisticated protocol developed by the San Diego-based regenerative medicine firm ViaCyte, Gilbert and his collaborators exposed a sample of embryonic stem cells to culture conditions engineered to nudge the cells into the definitive endoderm stage, a fast lane to specialized cell development. The team then quickly returned the cells to a bath of treatment factors designed to restore them to an embryonic state.

Based on previous studies, the researchers presumed it would take days in the endoderm culture, or at least a full cell division cycle, for the cells to commit to a developmental track.

"In fact, we found that after only a few hours exposure to the endoderm cocktail -- a fraction of a cell division cycle -- the cells could be returned to the stem cell cocktail and continue to go through the same series of gene expression changes as the control cells that remained in the endoderm cocktail."

In other words, after a remarkably short soak in the endoderm culture, the cells had committed full bore to a specific cellular program.

"Prior to the experiments reported here, there was no expectation that early stem cell lineage commitment would be so rapid and irreversible," Gilbert and his co-authors wrote in their paper.

This wasn't the only entrenched assumption challenged by the team's study. Scientists long believed the 3D organization of chromosomes in the nucleus to be both exceptionally rigid and closely linked to replication timing -- the order in which segments of DNA are copied before cell division. It was thought that the only way to reconfigure that architecture was to crack open a cell's nucleus when its chromosomes were being delivered to its daughter cells.

It turns out those assumptions may have been misguided as well.

"We show that chromosome architecture can be remodeled locally and rapidly without dismantling the entire cell nucleus -- akin to changing the scaffolding of a building without tearing it down -- which was quite unexpected," Gilbert said. "We also show that these changes in chromosome architecture occur dynamically and immediately upon stimulation of stem cells to become endoderm. This finding demonstrates that replication and architecture do not always go hand in hand, they can be what we call 'uncoupled.'"

The researchers' work delinking replication timing from chromosome architecture and showing the ability to surgically remodel that architecture could help refine scientists' understanding of embryonic stem cell behavior. Along with the discovery that stem cell lineage commitment occurs more rapidly and irreversibly than expected, Gilbert said the findings raise critical questions about the basic nature of stem cells and the barriers to turning one cell into another.

If researchers can harness these newly acquired insights, they could begin unraveling the mysteries of how and why stem cells commit to their developmental tracks and why certain cells are especially difficult to reprogram.

That information could inform the creation of new, powerful tools to combat disease and allay human suffering.

"The fact that large changes in genome organization and their temporal order of replication can be remodeled so easily, and that this is correlated with irreversible commitment so quickly in a cell culture system in the laboratory, means that we might be able to use this system to get at the mechanisms that represent irreversible commitment," Gilbert said. "We never anticipated that -- we expected irreversible commitment to take a lot more work, time and expense."

Credit: 
Florida State University

Novel model for studying intestinal parasite could advance vaccine development

image: The parasite Cryptosporidium, transmitted through water sources, is one of the most common causes of diarrheal disease in the world. New inroads into modeling the disease--and preventing it with a vaccine--lay the groundwork for tackling the infection in humans. Here, a section of intestine from an infected mouse shows Cryptosporidium tyzzeri parasites in red. The University of Pennsylvania-led team is the first to sequence, study, and manipulate a naturally occurring mouse Cryptosporidium.

Image: 
Muthgapatti Kandasamy, Adam Sateriale, and Boris Striepen

The intestinal parasite Cryptosporidium, which causes a diarrheal disease, is very good at infecting humans. It's the leading cause of waterborne disease from recreational waters in the United States. Globally, it's a serious illness that can stunt the growth of, or even kill, infants and young children. And people with compromised immune systems, such as those with HIV/AIDS, are also highly susceptible. There is no vaccine and no effective treatment.

Surprisingly, the parasite strains that infect humans don't do such a good job at infecting mice. To study the disease, researchers have had to rely on mice with defective immune systems, a model that made it difficult to understand how to elicit an immune response that could protect children.

But that is set to change. Using a naturally occurring species of mouse Cryptosporidium, a team led by researchers from Penn's School of Veterinary Medicine has developed a model of infection that affects immunologically normal mice. They show that mice develop immunity to the parasite after infection, and that a live attenuated vaccine offers the animals protection against it. Their findings appear in the journal Cell Host & Microbe.

"We now have a fantastic mouse model that mirrors the human disease," says Boris Striepen, a biologist at Penn Vet and senior author on the study. "It's a powerful lab model, where we can introduce changes at will and test the importance of different components of the immune response to infection, which is just what we need to develop an effective vaccine."

Mice that received the experimental vaccine, which used a weakened version of the parasite, were as protected from infection as those that had already weathered an initial infection, the researchers found. "We were able to show that the mice were protected -- not by sterile immunity, but by very robust protection from disease -- which is exactly what is observed in children," says Adam Sateriale, first author on the report and a postdoctoral researcher in Striepen's lab.

Striepen has focused on advancing science on Cryptosporidium for the last several years. One major advance came in 2015, when his lab succeeded in using CRISPR-Cas9 technology to genetically modify the organism.

In the new work, Striepen, Sateriale, and colleagues aimed to develop a method to more easily study the parasite in mice, which are resistant to the two species responsible for most human infections. Taking a different tack, they searched for Cryptosporidium DNA in mouse feces from farms and found one species, C. tyzzeri, in 30 percent of the samples.

"One of the first things we did was sequence and annotate the genome," says Sateriale, finding it to be an extremely close relative of the species that affect humans. "Once we know the genome, we can not only see how it varies compared to those species, but we can also begin to use our genetic tools to manipulate it."

Among the manipulations the researchers made using CRISPR was the introduction of a gene borrowed from the firefly that makes the parasite glow, allowing them to precisely, but non-invasively, track the infection.

Unlike the more artificial models of infection that used immunocompromised mice, the Penn-led team showed that C. tyzzeri could infect healthy mice, causing an infection that replicated many features of human disease.

"Some of the main immunological components that have been shown to be important in people were also true of this mouse model," Striepen notes.

Specifically, they found that T cells and the protein interferon-gamma, a key player in fighting off a variety of infections, were both critical in the body's response to the parasite. Mice lacking the gene for interferon-gamma and those that lacked T cells had more severe, longer-lasting infections than normal mice.

"Understanding these correlates of immunity--how the parasite triggers an immune response, and by what mechanism the immune system then attacks the parasite--are important aspects of vaccine development," Striepen says.

Knowing that children who become infected with Cryptosporidium can develop resistance to subsequent infections, the researchers wanted to see if the same held true in the mice. After confirming that it did, their final effort was to attempt to vaccinate the mice. They exposed C. tyzzeri oocysts to radiation to weaken them. Mice that received the live attenuated C. tyzzeri vaccine were protected from infection, though mice lacking either interferon-gamma or T cells were not, again underscoring the importance of these factors in developing anti-Cryptosporidium immunity.

Encouraged by their findings, the researchers are continuing to probe the pathways involved in conferring immune protection against Cryptosporidium infection, and are sharing their model with colleagues to aggressively pursue a vaccine or other treatments for the disease.

"We feel fortunate to be at the vet school and at Penn in general as we work on these questions," says Striepen. "Here we can build larger teams of parasite biologists, and experts in the study of immune responses like our colleague Christopher Hunter, so we're building up an interdisciplinary effort that can overcome the challenges of working on these complex investigations. And hopefully this will lead to advances that protect children."

Credit: 
University of Pennsylvania

Treatment for common cause of diarrhea more promising

image: The intestinal parasite Cryptosporidium (brown) grows on 'mini-guts' (purple) in a dish. Researchers at Washington University School of Medicine in St. Louis have figured out how to grow the parasite in the lab, an achievement that will speed efforts to treat or prevent diarrhea caused by the parasite.

Image: 
Georgia Wilke

One of the most common causes of diarrhea worldwide - accounting for millions of cases and tens of thousands of deaths, mostly of small children - is the parasite Cryptosporidium. Doctors can treat children with Cryptosporidium for dehydration, but unlike many other causes of diarrhea, there are no drugs to kill the parasite or vaccines to prevent infection.

Now, researchers at Washington University School of Medicine in St. Louis have figured out how to grow the most common type of Cryptosporidium in the lab, a technological advance that will accelerate efforts to treat the deadly infection.

"This parasite was described over 100 years ago, and scientists have never been able to reliably grow it in the lab, which has hampered our ability to understand the parasite and develop treatments for it," said senior author L. David Sibley, PhD, the Alan A. and Edith L. Wolff Distinguished Professor of Molecular Microbiology. "We now have a way to cultivate it, propagate it, modify its genes and start figuring out how it causes disease in children. This is a first step toward screening potential drugs and finding new drugs or vaccine targets."

The findings are published online June 20 in Cell Host & Microbe.

In wealthy countries, Cryptosporidium is notorious for causing water-borne outbreaks of diarrhea. The parasite goes through a complex life cycle, including a stage in which it is called an oocyst and becomes hardy, spore-like and hard to kill with chlorine, bleach or other routine sanitation measures. In 1993, 400,000 people in the Milwaukee area developed diarrhea, stomach cramps and fever after a malfunctioning water purification plant allowed Cryptosporidium into the city's water supply. Every year, dozens of smaller outbreaks are reported in the U.S., many associated with swimming pools and water playgrounds.

Diarrhea caused by the parasite can last for weeks. While this is miserable for otherwise healthy people, it can be life-threatening for undernourished children and people with compromised immune systems.

Until now, researchers who wanted to study the parasite had to obtain oocysts from infected calves - Cryptosporidium infection is a serious problem in commercial cattle farming - and grow the parasites in human or mouse cell lines. The parasite inevitably would die after a few days without going through a complete life cycle, so the researchers would have to obtain more oocysts from cattle to do more experiments.

Sibley, along with co-first authors Georgia Wilke, PhD, who is a student in the Medical Scientist Training Program, and postdoctoral researcher Lisa Funkhouser-Jones, PhD, suspected that the problem lay in the cell lines traditionally used to grow the parasite. Derived from cancer cells, these cell lines were very different from the normal, healthy intestine that is the parasites' usual home.

To create a more natural environment, the researchers collaborated with Thaddeus S. Stappenbeck, MD, PhD, the Conan Professor of Laboratory and Genomic Medicine and a co-author on the paper. Stappenbeck and colleagues cultured intestinal stem cells to become "mini-guts" in a dish - complete with all the cell types and structural complexity of a real intestine.

When the researchers added oocysts to the mini-guts, the parasites thrived. They emerged from the oocysts and went through their full life cycle to produce more oocysts. For the first time, every stage of the parasite's complicated life cycle could be studied in the lab. The researchers also showed that they could edit the parasite's genes with CRISPR/Cas9 and perform genetic crosses, making these powerful tools for studying biology more accessible than before.

"We put the parasite in this environment that is much more natural, and it's happy and it grows and develops and goes through the entire life cycle," Sibley said. "This opens up possibilities that were closed for a long time. There is only one FDA-approved drug, and it doesn't work in young children. There are potential drug candidates, but we couldn't screen them before because the parasites would just die anyway. How can you tell if the drug is killing the parasites if they are already dying? Now we can start screening drugs and also start asking questions about what makes this parasite dangerous."

The technique applies only to C. parvum, one of the two most common Cryptosporidium species that cause diarrhea in people. Its cousin C. hominis is even more difficult to grow in the lab. The two are closely related, but while C. parvum can infect young mammals of many species, C. hominis only infects people.

"There are only a few dozen genes that are different between parvum and hominis but somehow that's enough to make hominis very finicky," Sibley said. "It won't grow in mice or in calves or, so far, in mini-guts grown from mouse stem cells. Developing systems to work with hominis is an important goal of my lab."

The technique, while potentially powerful, will not immediately translate into better treatment or prevention for diarrhea, Sibley cautioned.

"These things take time," Sibley said. "There's a lot of basic research that still needs to be done. But this system provides an important path forward. We can now use genetic approaches to study the role of individual genes and thereby identify important targets for improved therapies."

Credit: 
Washington University School of Medicine

Spotting objects amid clutter

image: Robots currently attempt to identify objects in a point cloud by comparing a template object -- a 3-D dot representation of an object, such as a rabbit -- with a point cloud representation of the real world that may contain that object.

Image: 
Christine Daniloff, MIT

A new MIT-developed technique enables robots to quickly identify objects hidden in a three-dimensional cloud of data, reminiscent of how some people can make sense of a densely patterned "Magic Eye" image if they observe it in just the right way.

Robots typically "see" their environment through sensors that collect and translate a visual scene into a matrix of dots. Think of the world of, well, "The Matrix," except that the 1s and 0s seen by the fictional character Neo are replaced by dots -- lots of dots -- whose patterns and densities outline the objects in a particular scene.

Conventional techniques that try to pick out objects from such clouds of dots, or point clouds, can do so with either speed or accuracy, but not both.

With their new technique, the researchers say a robot can accurately pick out an object, such as a small animal, that is otherwise obscured within a dense cloud of dots, within seconds of receiving the visual data. The team says the technique can be used to improve a host of situations in which machine perception must be both speedy and accurate, including driverless cars and robotic assistants in the factory and the home.

"The surprising thing about this work is, if I ask you to find a bunny in this cloud of thousands of points, there's no way you could do that," says Luca Carlone, assistant professor of aeronautics and astronautics and a member of MIT's Laboratory for Information and Decision Systems (LIDS). "But our algorithm is able to see the object through all this clutter. So we're getting to a level of superhuman performance in localizing objects."

Carlone and graduate student Heng Yang will present details of the technique later this month at the Robotics: Science and Systems conference in Germany.

"Failing without knowing"

Robots currently attempt to identify objects in a point cloud by comparing a template object -- a 3-D dot representation of an object, such as a rabbit -- with a point cloud representation of the real world that may contain that object. The template image includes "features," or collections of dots that indicate characteristic curvatures or angles of that object, such as the bunny's ear or tail. Existing algorithms first extract similar features from the real-life point cloud, then attempt to match those features to the template's features, and ultimately rotate and align the features to the template to determine whether the point cloud contains the object in question.
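To make the matching step concrete, here is a minimal Python sketch -- ours, not the researchers' code; the function name and array shapes are assumptions for illustration -- of the naive nearest-neighbor matching that such pipelines typically start from:

```python
import numpy as np

def match_features(template_desc, scene_desc):
    """Naive nearest-neighbor matching of feature descriptors.

    template_desc: (M, D) array, one descriptor per template feature
    scene_desc:    (N, D) array, one descriptor per scene feature
    Returns a list of (template_index, scene_index) association pairs.
    """
    matches = []
    for i, d in enumerate(template_desc):
        # Distance from this template descriptor to every scene descriptor
        dists = np.linalg.norm(scene_desc - d, axis=1)
        matches.append((i, int(np.argmin(dists))))
    return matches
```

Some of these associations will inevitably be wrong; those are the "outliers" discussed next.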

But the point cloud data that streams into a robot's sensor invariably includes errors, in the form of dots that are in the wrong position or incorrectly spaced, which can significantly confuse the process of feature extraction and matching. As a consequence, robots can make a huge number of wrong associations between point clouds -- what researchers call "outliers" -- and ultimately misidentify objects or miss them entirely.

Carlone says state-of-the-art algorithms are able to sift the bad associations from the good once features have been matched, but they do so in "exponential time," meaning that even a cluster of processing-heavy computers, sifting through dense point cloud data with existing algorithms, would not be able to solve the problem in a reasonable time. Such techniques, while accurate, are impractical for analyzing larger, real-life datasets containing dense point clouds.

Other algorithms that can quickly identify features and associations do so hastily, creating a huge number of outliers or misdetections in the process, without being aware of these errors.

"That's terrible if this is running on a self-driving car, or any safety-critical application," Carlone says. "Failing without knowing you're failing is the worst thing an algorithm can do."

A relaxed view

Yang and Carlone instead devised a technique that prunes away outliers in "polynomial time," meaning that it can do so quickly, even for increasingly dense clouds of dots. The technique can thus quickly and accurately identify objects hidden in cluttered scenes.

The researchers first used conventional techniques to extract features of a template object from a point cloud. They then developed a three-step process to match the size, position, and orientation of the object in a point cloud with the template object, while simultaneously identifying good from bad feature associations.

The team developed an "adaptive voting scheme" algorithm to prune outliers and match an object's size and position. For size, the algorithm makes associations between template and point cloud features, then compares the relative distance between features in a template and corresponding features in the point cloud. If, say, the distance between two features in the point cloud is five times that of the corresponding points in the template, the algorithm assigns a "vote" to the hypothesis that the object is five times larger than the template object.

The algorithm does this for every feature association. Then, the algorithm selects those associations that fall under the size hypothesis with the most votes, and identifies those as the correct associations, while pruning away the others. In this way, the technique simultaneously reveals the correct associations and the relative size of the object represented by those associations. The same process is used to determine the object's position.
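As a toy illustration of the voting idea for size -- again our own sketch with hypothetical names, not the authors' implementation -- every pair of associations votes for the scale implied by its distance ratio, and the histogram bin with the most votes wins:

```python
import itertools
import numpy as np

def vote_for_scale(template_pts, scene_pts, matches, bin_width=0.05):
    """Toy scale voting: each pair of associations votes for the scale
    implied by its distance ratio; the bin with the most votes wins.

    template_pts: (M, 3) template feature coordinates
    scene_pts:    (N, 3) scene feature coordinates
    matches:      list of (template_index, scene_index) associations
    """
    votes = {}
    for (i, k), (j, l) in itertools.combinations(matches, 2):
        d_t = np.linalg.norm(template_pts[i] - template_pts[j])
        d_s = np.linalg.norm(scene_pts[k] - scene_pts[l])
        if d_t < 1e-9:
            continue  # degenerate pair, no usable ratio
        scale = d_s / d_t                 # e.g. 5.0 means "5x larger"
        b = round(scale / bin_width)      # quantize into a histogram bin
        votes.setdefault(b, []).append(((i, k), (j, l)))
    best_bin = max(votes, key=lambda b: len(votes[b]))
    inliers = {assoc for pair in votes[best_bin] for assoc in pair}
    return best_bin * bin_width, sorted(inliers)
```

The associations that land in the winning bin double as the cleaned-up inlier set, which is what lets the method estimate the object's size and prune outliers in a single pass.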

The researchers developed a separate algorithm for rotation, which finds the orientation of the template object in three-dimensional space.

Doing this is an incredibly tricky computational task. Imagine holding a mug and trying to tilt it just so, to match a blurry image of something that might be that same mug. There are any number of angles you could tilt that mug, and each of those angles has a certain likelihood of matching the blurry image.

Existing techniques handle this problem by assigning each possible tilt or rotation of the object a "cost" -- the lower the cost, the more likely it is that the rotation creates an accurate match between features. Each rotation and its associated cost is represented in a topographic map of sorts, made up of multiple hills and valleys, with lower elevations corresponding to lower cost.

But Carlone says this can easily confuse an algorithm, especially if there are multiple valleys and no discernible lowest point representing the true, exact match between a particular rotation of an object and the object in a point cloud. Instead, the team developed a "convex relaxation" algorithm that simplifies the topographic map, with one single valley representing the optimal rotation. In this way, the algorithm is able to quickly identify the rotation that defines the orientation of the object in the point cloud.
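The convex relaxation itself rests on machinery (a relaxation of the rotation search into a convex optimization problem) that goes beyond a short sketch, but the end goal can be illustrated with a classic closed-form alternative: once bad associations have been pruned, the least-squares rotation over the surviving correspondences has a well-known SVD solution. The following Kabsch-style sketch is a stand-in of ours, not the paper's algorithm, and it assumes outlier-free, matched point arrays:

```python
import numpy as np

def kabsch_rotation(template_pts, scene_pts):
    """Closed-form least-squares rotation (Kabsch/SVD) aligning matched
    template points to scene points. Assumes outliers were already
    pruned; the paper's convex relaxation instead stays robust while
    searching for the rotation itself.
    """
    A = template_pts - template_pts.mean(axis=0)   # center both clouds
    B = scene_pts - scene_pts.mean(axis=0)
    H = A.T @ B                                    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R                                       # template-to-scene rotation
```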

With their approach, the team was able to quickly and accurately identify three different objects -- a bunny, a dragon, and a Buddha -- hidden in point clouds of increasing density. They were also able to identify objects in real-life scenes, including a living room, in which the algorithm was quickly able to spot a cereal box and a baseball hat.

Carlone says that because the approach is able to work in "polynomial time," it can be easily scaled up to analyze even denser point clouds, resembling the complexity of sensor data for driverless cars, for example.

"Navigation, collaborative manufacturing, domestic robots, search and rescue, and self-driving cars is where we hope to make an impact," Carlone says.

This research was supported in part by the Army Research Laboratory, the Office of Naval Research, and the Google Daydream Research Program.

Credit: 
Massachusetts Institute of Technology