Tech

New clues to why there's so little antimatter in the universe

Imagine a dust particle in a storm cloud, and you can get an idea of a neutron's insignificance compared to the magnitude of the molecule it inhabits.

But just as a dust mote might affect a cloud's track, a neutron can influence the energy of its molecule despite being less than one-millionth its size. And now physicists at MIT and elsewhere have successfully measured a neutron's tiny effect in a radioactive molecule.

The team has developed a new technique to produce and study short-lived radioactive molecules with neutron numbers they can precisely control. They hand-picked several isotopes of the same molecule, each with one more neutron than the next. When they measured each molecule's energy, they were able to detect small, nearly imperceptible changes of the nuclear size, due to the effect of a single neutron.

The fact that they were able to see such small nuclear effects suggests that scientists now have a chance to search such radioactive molecules for even subtler effects, caused by dark matter, for example, or by the effects of new sources of symmetry violations related to some of the current mysteries of the universe.

"If the laws of physics are symmetrical as we think they are, then the Big Bang should have created matter and antimatter in the same amount. The fact that most of what we see is matter, and there is only about one part per billon of antimatter, means there is a violation of the most fundamental symmetries of physics, in a way that we can't explain with all that we know," says Ronald Fernando Garcia Ruiz, assistant professor of physics at MIT.

"Now we have a chance to measure these symmetry violations, using these heavy radioactive molecules, which have extreme sensitivity to nuclear phenomena that we cannot see in other molecules in nature," he says. "That could provide answers to one of the main mysteries of how the universe was created."

Garcia Ruiz and his colleagues have published their results today in Physical Review Letters.

A special asymmetry

Most atoms in nature host a symmetrical, spherical nucleus, with neutrons and protons evenly distributed throughout. But in certain radioactive elements like radium, atomic nuclei are weirdly pear-shaped, with an uneven distribution of neutrons and protons within. Physicists hypothesize that this shape distortion can enhance the violation of symmetries that gave rise to the matter in the universe.

"Radioactive nuclei could allow us to easily see these symmetry-violating effects," says study lead author Silviu-Marian Udrescu, a graduate student in MIT's Department of Physics. "The disadvantage is, they're very unstable and live for a very short amount of time, so we need sensitive methods to produce and detect them, fast."

Rather than attempt to pin down radioactive nuclei on their own, the team placed them in a molecule that further amplifies the sensitivity to symmetry violations. Radioactive molecules consist of at least one radioactive atom, bound to one or more other atoms. Each atom is surrounded by a cloud of electrons that together generate an extremely high electric field in the molecule that physicists believe could amplify subtle nuclear effects, such as effects of symmetry violation.

However, aside from certain astrophysical processes, such as merging neutron stars, and stellar explosions, the radioactive molecules of interest do not exist in nature and therefore must be created artificially. Garcia Ruiz and his colleagues have been refining techniques to create radioactive molecules in the lab and precisely study their properties. Last year, they reported on a method to produce molecules of radium monofluoride, or RaF, a radioactive molecule that contains one unstable radium atom and a fluorine atom.

In their new study, the team used similar techniques to produce RaF isotopes, or versions of the radioactive molecule with varying numbers of neutrons. As they did in their previous experiment, the researchers utilized the Isotope mass Separator On-Line, or ISOLDE, facility at CERN, in Geneva, Switzerland, to produce small quantities of RaF isotopes.

The facility houses a low-energy proton beam, which the team directed toward a target -- a half-dollar-sized disc of uranium-carbide, onto which they also injected a carbon fluoride gas. The ensuing chemical reactions produced a zoo of molecules, including RaF, which the team separated using a precise system of lasers, electromagnetic fields, and ion traps.

The researchers measured each molecule's mass to estimate the number of neutrons in a molecule's radium nucleus. They then sorted the molecules by isotope, according to their neutron numbers.

In the end, they isolated bunches of five different isotopes of RaF, each bearing one more neutron than the next. With a separate system of lasers, the team measured the quantum levels of each molecule.

"Imagine a molecule vibrating like two balls on a spring, with a certain amount of energy," explains Udrescu, who is a graduate student of MIT's Laboratory for Nuclear Science. "If you change the number of neutrons in one of these balls, the amount of energy could change. But one neutron is 10 million times smaller than a molecule, and with our current precision we didn't expect that changing one would create an energy difference, but it did. And we were able to clearly see this effect."

Udrescu compares the sensitivity of the measurements to being able to see how Mount Everest, placed on the surface of the sun, could, however minutely, change the sun's radius. By comparison, seeing certain effects of symmetry violation would be like seeing how the width of a single human hair would alter the sun's radius.

The results demonstrate that radioactive molecules such as RaF are ultrasensitive to nuclear effects and that this sensitivity is likely to reveal more subtle, never-before-seen effects, such as tiny symmetry-violating nuclear properties, that could help to explain the universe's matter-antimatter asymmetry.

"These very heavy radioactive molecules are special and have sensitivity to nuclear phenomena that we cannot see in other molecules in nature," Udrescu says. "This shows that, when we start to search for symmetry-violating effects, we have a high chance of seeing them in these molecules."

Credit: 
Massachusetts Institute of Technology

Change in respiratory care strategies for preterm infants improves health outcomes

image: Dupree Hatch, MD, MPH, senior study author and assistant professor of Pediatrics in the Division of Neonatology at Monroe Carell Jr. Children's Hospital at Vanderbilt

Image: 
Vanderbilt University Medical Center

A decade's worth of data shows that neonatologists are shifting the type of respiratory support they utilize for preterm infants, a move that could lead to improved health outcomes.

Using two large national datasets that included more than 1 million preterm infants, researchers in a new Vanderbilt-led study found that from 2008 to 2018 there was a greater than 10% decrease in the use of mechanical ventilation for this patient population. Concurrently, there was a similar increase in the use of non-invasive respiratory support, such as continuous positive airway pressure (CPAP), for these infants.

The study, "Changes in Use of Respiratory Support for Preterm Infants in the U.S.," published July 6 in JAMA Pediatrics.

"It's a pretty big success story for the field of neonatology. We've been able to keep a lot of babies off mechanical ventilation and potentially spare them of lung injury and injuries to other organ systems as well," said Dupree Hatch, MD, MPH, senior study author and assistant professor of Pediatrics in the Division of Neonatology at Monroe Carell Jr. Children's Hospital at Vanderbilt.

For preterm infants, mechanical ventilation can have adverse pulmonary and neurodevelopmental outcomes. To reduce these risks, neonatologists over the past two decades have been exploring and researching non-invasive respiratory support options like CPAP therapy for these infants. Much of the shift away from mechanical ventilation seen in the study coincided with a large study released in 2010 showing that mechanical ventilation was not superior to non-invasive ventilation, and in fact may carry more risks.

"In multiple studies mechanical ventilation has been associated with adverse outcomes. There are several studies that show for every week you stay on a ventilator as a preterm baby, your odds of having adverse neurodevelopmental outcomes go up," Hatch said.

"Since large studies were published in 2008, 2010 and 2011 showing the effectiveness of non-invasive respiratory support, we thought that respiratory support patterns in preterm infants had likely changed, but no one had really quantified that, or looked if it was widespread across the entire country or if it was just in pockets."

Hatch and colleagues examined two large national datasets, confirming the changes in practices. They looked at data collected over an 11-year period on the type of respiratory support used for infants born between 22 weeks' and 34 weeks' gestation.

In one of the study datasets that included admissions to over 350 NICUs in the U.S., they found that mechanical ventilation utilization in preterm infants decreased from 29.4% in 2008 to 18.5% in 2018. Nationally, the study authors wrote, the changes were associated with about 30,000 fewer infants receiving mechanical ventilation during the study period. As the number of infants on mechanical ventilation went down, the duration of time that ventilated babies spent on mechanical ventilators also went down.

The researchers also found that the total number of days on non-invasive respiratory support increased across all gestational ages, from 13.8 days to 15.4 days. Hatch said more research is needed to understand the implications of spending more time on non-invasive respiratory support therapies.

"We need to figure out if the increase in duration of respiratory support is a good thing, and what does that do to NICU length of stay and overall resource utilization for preterm infants in the U.S. It raises more questions," he said.

Additionally, they saw an increase in the number of extremely preterm infants, 22 to 24 weeks' gestation, being placed on mechanical ventilation as there has been increased intervention and improved survival for this age group. Hatch notes that the respiratory support strategies for this particular population of infants need more examination.

"The field of neonatology has worked really hard to examine our practices and get better. I am proud of how quickly some of the landmark respiratory care studies have penetrated our clinical care," said Hatch. "Care in the NICU is becoming less invasive and gentler because it is the right thing to do for babies' long-term outcomes."

Credit: 
Vanderbilt University Medical Center

New computational technique, software identifies cell types within a tumor and its microenvironment

(Boston)--The discovery of novel groups or categories within diseases, organisms and biological processes and their organization into hierarchical relationships are important and recurrent pursuits in biology and medicine, which may help elucidate group-specific vulnerabilities and ultimately novel therapeutic interventions.

Now a new study introduces a novel computational methodology and an associated software tool called K2Taxonomer, which support the automated discovery and annotation of molecular classifications at multiple levels of resolution from high-throughput bulk and single cell 'omics' data. The study includes a case study detailing the analysis of the transcriptome of breast tumor-infiltrating lymphocytes (white blood cells in the immune system, aka TILs) on a single-cell basis, which significantly expands upon previous findings and showcases the incorporation of the methods into an advanced in-silico (produced by computer modeling) analysis workflow.
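K2Taxonomer itself is described and distributed by the study authors; the short sketch below (in Python, with clustering choices, function names and simulated data that are ours) is not their implementation, only an illustration of the general idea of recursive two-way partitioning that turns an expression matrix into a nested taxonomy of subgroups.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def recursive_two_way_partition(expr, samples, min_size=5, label="root"):
    """Recursively split a set of samples into two subgroups, returning a nested taxonomy.
    expr: samples x features matrix; samples: integer indices into expr."""
    if len(samples) < 2 * min_size:
        return {"label": label, "samples": samples.tolist()}
    Z = linkage(expr[samples], method="ward")        # cluster only this subset of samples
    split = fcluster(Z, t=2, criterion="maxclust")   # force a two-way split
    left, right = samples[split == 1], samples[split == 2]
    if min(len(left), len(right)) < min_size:        # stop when a split gets too unbalanced
        return {"label": label, "samples": samples.tolist()}
    return {"label": label,
            "children": [recursive_two_way_partition(expr, left, min_size, label + ".1"),
                         recursive_two_way_partition(expr, right, min_size, label + ".2")]}

# Toy data: 60 simulated cells drawn from three latent subtypes
rng = np.random.default_rng(0)
expr = np.vstack([rng.normal(loc=m, size=(20, 50)) for m in (0.0, 3.0, 6.0)])
taxonomy = recursive_two_way_partition(expr, np.arange(expr.shape[0]))
```

The published method goes further, annotating the subgroups discovered at each level of the hierarchy, as described above.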

"Our study presents a comprehensive evaluation and extensive benchmarking of the method on simulated and real data, which convincingly shows its high accuracy, its superior performance when compared to other representative methods and its capability to (re)discover known nested molecular classification," explained first author Eric Reed, PhD, a recent graduate of the BU Bioinformatics Program.

The researchers found that the K2Taxonomer-based analysis of single cell data from breast TILs characterized a transcriptional signature common to multiple immune T cell subsets. Importantly, the analysis found that activation of this signature is associated with better survival in breast cancer patients. "Our study points to some of the features of what an effective cancer immune response would look like. Not only could this enable us to better predict how breast cancer patients will fare after diagnosis, but also reveal some specific immune programs that need to be enhanced to generate a (literally) killer immune response," added corresponding author Stefano Monti, PhD, associate professor of medicine at Boston University School of Medicine (BUSM).

According to Monti, the identification and characterization of different cell types within a tumor and its microenvironment, together with a deeper understanding of their cross-talk, are essential to better understand the mechanisms of cancer initiation, progression and sensitivity to intervention. He says the newly developed methodology can be equally applied to the analysis of other components of a tumor, including different types of malignant cells, tumor stroma (supportive tissue) and cancer-associated adipocytes (cells specialized for the storage of fat).

Credit: 
Boston University School of Medicine

Molecular imaging improves staging and treatment of pancreatic ductal adenocarcinomas

image: Primary staging of a patient with PDAC. (A) Axial images of PDAC and liver in arterial (upper image) and venous (lower image) ceCT scan. (B) Mean intensity projection (MIP) images of 18F-FDG and FAPI PET/CT imaging. (C) Axial 18F-FDG and FAPI PET/CT images of same patient on level (blue line in A) of pancreatic tumor mass and another suspicious FAPI accumulation in projection on perihepatic lymph node. Metastatic situation, which had been revealed by FAPI PET/CT, was confirmed by biopsy of pulmonary lesion that was diagnosed as metastasis of known PDAC.

Image: 
Image created by M Röhrich, University Hospital Heidelberg, Germany.

Reston, VA--For patients with pancreatic ductal adenocarcinomas (PDAC), molecular imaging can improve staging and clinical management of the disease, according to research published in the June issue of The Journal of Nuclear Medicine. In a retrospective study of PDAC patients, the addition of PET/CT imaging with 68Ga-FAPI led to restaging of disease in more than half of the patients, most notably in those with local recurrence.

PDAC is a highly lethal cancer, with a five-year survival rate of less than 10 percent. Optimal imaging of PDAC is crucial for accurate initial TNM (tumor, node, metastases) staging and selection of the primary treatment. Follow-up imaging is also important to accurately detect local recurrence or metastatic spread as early and as completely as possible.

"Currently, contrast-enhanced CT is the gold standard when it comes to TNM staging, and PET imaging isn't typically part of the clinical routine" stated Manuel Röhrich, MD, nuclear medicine physician at Heidelberg University Hospital in Heidelberg, Germany. "However, we know that PDAC is composed of certain fibroblasts that express fibroblast activation protein, which can be imaged with the novel PET radiotracer 68Ga-FAPI. Given this characteristic, we sought to explore the utility of 68Ga-FAPI PET/CT to image FDAC patients."

The study included 19 PDAC patients who received contrast-enhanced CT imaging followed by 68Ga-FAPI PET/CT. Results from the 68Ga-FAPI PET/CT scans were then compared with TNM staging based on contrast-enhanced CT. Changes in oncological management were recorded.

68Ga-FAPI PET/CT-based TNM staging differed from contrast-enhanced CT-based staging in 10 of the 19 patients. Of the 12 patients with recurrent disease, eight were upstaged, one was downstaged and three remained the same. Of the seven patients newly diagnosed with PDAC, one was upstaged, while the staging remained the same for the other six.

"This analysis suggests that 68Ga-FAPI PET/CT is a promising new imaging modality in staging of PDAC that may help to detect new or clarify inconclusive results obtained by standard CT imaging," said Röhrich. He added, "Improvement in survival can only be achieved by effective treatment approaches customized to the individual patient's disease status. Thus, hybrid imaging using FAPI tracer may open up new applications in staging and restaging of PDAC."

Credit: 
Society of Nuclear Medicine and Molecular Imaging

New study shows mathematical models helped reduce the spread of COVID-19

Colorado researchers have published new findings in Emerging Infectious Diseases that take a first look at the use of SARS-CoV-2 mathematical modeling to inform early statewide policies enacted to reduce the spread of the virus in Colorado. Among other findings, the authors estimate that 97 percent of potential hospitalizations across the state in the early months of the pandemic were avoided as a result of social distancing and other transmission-reducing activities such as mask wearing and social isolation of symptomatic individuals.

The modeling team was led by faculty and researchers in the Colorado School of Public Health and involved experts from the University of Colorado Anschutz Medical Campus, University of Colorado Denver, University of Colorado Boulder, and Colorado State University.

"One of the defining characteristics of the COVID-19 pandemic was the need for rapid response in the face of imperfect and incomplete information," said the authors. "Mathematical models of infectious disease transmission can be used in real-time to estimate parameters, such as the effective reproductive number (Re) and the efficacy of current and future intervention measures, and to provide time-sensitive data to policymakers."

The new paper describes the development of such a model, in close collaboration with the Colorado Department of Public Health and Environment and the Colorado Governor's office, to gauge the impact of early policies to decrease social contacts and, later, the impact of gradual relaxation of Stay-at-Home orders. The authors note that preparing for hospital intensive care unit (ICU) loads or capacity limits was a critical decision-making issue.

The Colorado COVID-19 Modeling team developed a susceptible-exposed-infected-recovered (SEIR) model calibrated to Colorado COVID-19 case and hospitalization data to estimate changes in the contact rate and the Re after emergence of SARS-CoV-2 and the implementation of statewide COVID-19 control policies in Colorado. The modeling team supplemented model estimates with an analysis of mobility by using mobile device location data. Estimates were generated in near real time, at multiple time-points, with a rapidly evolving understanding of SARS-CoV-2. At each time point, the authors generated projections of the possible course of the outbreak under an array of intervention scenarios. Findings were regularly provided to key Colorado decision-makers.
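To illustrate how such a model links reductions in social contact to the effective reproductive number and to peak demand on hospitals, here is a minimal SEIR sketch in Python. The parameter values (population size, latent and infectious periods, baseline transmission rate) are generic placeholder assumptions, not the team's calibrated Colorado model.

```python
import numpy as np

def seir(days, N=5.7e6, E0=100.0, beta=0.5, sigma=1/4, gamma=1/7,
         contact_reduction=0.0, dt=0.1):
    """Forward-Euler SEIR integration with a simple social-contact reduction factor."""
    S, E, I, R = N - E0, E0, 0.0, 0.0
    beta_eff = beta * (1.0 - contact_reduction)      # e.g. 0.6 means 60% fewer effective contacts
    peak_infectious = 0.0
    for _ in np.arange(0.0, days, dt):
        new_E = beta_eff * S * I / N * dt
        new_I = sigma * E * dt
        new_R = gamma * I * dt
        S, E, I, R = S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R
        peak_infectious = max(peak_infectious, I)
    Re = beta_eff / gamma * S / N                    # effective reproductive number at end of run
    return peak_infectious, Re

for reduction in (0.0, 0.4, 0.6):
    peak, Re = seir(180, contact_reduction=reduction)
    print(f"contact reduction {reduction:.0%}: peak infectious ~{peak:,.0f}, Re at day 180 ~{Re:.2f}")
```

The team's calibrated model was considerably richer, but the basic mechanism of translating a change in the contact rate into Re and peak hospital load is the one sketched here.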

"Real-time estimation of contact reduction enabled us to respond to urgent requests to actively inform rapidly changing public health policy amidst a pandemic. In early stages, the urgent need was to flatten the curve," note the authors. "Once infections began to decrease, there was interest in the degree of increased social contact that could be tolerated as the economy reopened without leading to overwhelmed hospitals."

"Although our analysis is specific to Colorado, our experience highlights the need for locally calibrated transmission models to inform public health preparedness and policymaking, along with ongoing analyses of the impact of policies to slow the spread of SARS-CoV-2," said Andrea Buchwald, PhD, lead author from the Colorado School of Public Health at CU Anschutz. "We present this material not as a final estimate of the impact of social distancing policies, but to illustrate how models can be constructed and adapted in real-time to inform critical policy questions."

Credit: 
University of Colorado Anschutz Medical Campus

Energycane produces more biodiesel than soybean at a lower cost

image: University of Illinois researchers (L to R) Steve Long, Shraddha Maitra, Vijay Singh, and Deepak Kumar conducted a series of studies on biofuel production from energycane.

Image: 
University of Illinois

URBANA, Ill. -- Bioenergy from crops is a sustainable alternative to fossil fuels. New crops such as energycane can produce several times more fuel per acre than soybeans. Yet, challenges remain in processing the crops to extract fuel efficiently.

Four new studies from the University of Illinois explore chemical-free pretreatment methods, development of high-throughput phenotyping methods, and commercial-scale techno-economic feasibility of producing fuel from energycane in various scenarios.

The studies are part of the ROGUE (Renewable Oil Generated with Ultra-productive Energycane) project at U of I. ROGUE focuses on bioengineering accumulation of triacylglycerides (TAGs) in the leaves and stems of energycane, enabling the production of much more industrial vegetable oil per acre than previously possible.

"The productivity of these non-food crops is very high per unit of land. Soybean is the traditional crop used for biodiesel, but we can get higher yield, more oil, and subsequently more biofuel from lipid-producing energycane," says Vijay Singh, Founder professor in the Department of Agricultural and Biological Engineering (ABE) at U of I and co-author on all four papers.

Biofuel production from crops involves breaking down the cellulosic material and extracting the oil in a series of steps, explains study co-author Deepak Kumar, assistant professor in the Chemical Engineering Department at State University of New York College of Environmental Science and Forestry (SUNY-ESF) and adjunct research scientist at the Carl R. Woese Institute for Genomic Biology at U of I.

"The first step is to extract the juice. That leaves bagasse, a lignocellulosic material you can process to produce sugars and subsequently ferment to bioethanol," Kumar says.

"One of the critical things in processing any lignocellulosic biomass is a pretreatment step. You need to break the recalcitrant structure of the material, so enzymes can access the cellulose," he adds. "Because energycane is a relatively new crop, there are very few studies on the pretreatment and breakdown of this bagasse to produce sugars, and to convert those sugars into biofuels."

The pretreatment process also yields some unwanted compounds, which inhibit enzymes that convert the sugar into biofuels. The U of I researchers investigated the best pretreatment methods to maximize the breakdown while minimizing the production of inhibitors. Typically, the pretreatment process uses chemicals such as sulfuric acid to break down the biomass at high temperature and pressure.

"We use a chemical-free method, which makes it more environmentally friendly," Kumar explains. "Furthermore, harsh chemicals may alter the oil structure or quality in the biomass."

The researchers tested their method using nine different combinations of temperature and time intervals. They were able to achieve more than 90% cellulose conversion at the optimal conditions, which is equivalent to results from chemical pretreatment methods.

The second study built on those results to further investigate the relationship between temperature, inhibitor production, and sugar recovery.

"We pretreated the lignocellulosic biomass over a range of different temperatures to optimize the condition for minimal inhibitor generation without affecting the sugar recovery. Then we added cryogenic grinding to the process," says Shraddha Maitra, postdoctoral research associate in ABE and lead author on the study.

"In cryogenic grinding, you treat the bagasse with liquid nitrogen, which makes it very brittle, so upon grinding the biomass fractures easily to release the sugars. This further increased sugar recovery, mainly xylose, by about 10% compared to other refining processes," Maitra explains.

Other industries use similar methods, for example for spices and essential oils, where it is important to preserve the qualities of the product. But applying them to biofuel production is new.

In a third study, Maitra and her co-authors investigated time-domain nuclear magnetic resonance (NMR) technology to determine the stability and recovery of lipids by monitoring changes in total, bound, and free lipids after various physical and chemical feedstock preprocessing procedures.

The research team's fourth study investigated the commercial-scale techno-economic feasibility of engineered energycane-based biorefinery. They used computer modeling to simulate the production process under two different scenarios to determine capital investment, production costs, and output compared with soybean-based biodiesel.

"Although the capital investment is higher compared to soybean biodiesel, production costs are lower (66 to 90 cents per liter) than for soybean (91 cents per liter). For the first scenario, processing energycane had overall slightly lower profitability than soybean biodiesel, but yields five times as much biodiesel per unit of land," says Kumar, the lead author on the study.

"Energycane is attractive in its ability to grow across a much wider geography of the U.S. south east than sugarcane. This is a region with much underutilized land, yet capable of rain-fed agriculture," says ROGUE Director Steve Long, Ikenberry Endowed Chair of Plant Biology and Crop Sciences at the University of Illinois.

"As a perennial, energycane is suitable for land that might be damaged by annual crop cultivation. Our research shows the potential to produce a remarkable 7.5 barrels of diesel per acre of land annually. Together with co-products, this would be considerably more profitable than most current land use, while having the potential to contribute greatly to the national U.S. goal of achieving net zero greenhouse gas emissions by 2050. This proves how valuable it is to build on the successes already achieved in bioengineering energycane to accumulate oils that are easily converted into biodiesel and biojet," Long states.

Credit: 
University of Illinois College of Agricultural, Consumer and Environmental Sciences

Tiny tools: Controlling individual water droplets as biochemical reactors

image: Controlling Individual Water Droplets as Biochemical Reactors - Scientists from Ritsumeikan University, Japan, develop a method to better manipulate tiny droplets in lab-on-a-chip applications for biochemistry, cell culturing, and drug screening

Image: 
Ritsumeikan University, Japan

Miniaturization is rapidly reshaping the field of biochemistry, with emerging technologies such as microfluidics and "lab-on-a-chip" devices taking the world by storm. Chemical reactions that were normally conducted in flasks and tubes can now be carried out within tiny water droplets no larger than a few millionths of a liter. In particular, in droplet-array sandwiching techniques, such tiny droplets are orderly laid out on two parallel flat surfaces facing each other. By bringing the top surface close enough to the bottom one, each top droplet makes contact with the opposite bottom droplet, exchanging chemicals and transferring particles or even cells. In quite a literal way, these droplets can act as small reaction chambers or cell cultures, and they can also fulfill the role of liquid-handling tools such as pipettes, but on a much smaller scale.

The problem with droplet-array sandwiching is that there is no individual control of droplets; once the top surface is lowered, each droplet on the bottom surface necessarily makes contact with one on the top surface. In other words, this technology is limited to batch operations, which limits its versatility and makes it costlier. Could there be a simple way to select which droplets should make contact when the surfaces are brought closer together?

Thanks to Professor Satoshi Konishi and his colleagues at Ritsumeikan University, Japan, the answer is a resounding yes! In a recent study published in Scientific Reports, this team of scientists presented a novel technique that allows one to individually select droplets for contact in droplet-array sandwiching. The idea behind their approach is rather straightforward: if we could control the height of individual droplets on the bottom surface to make some stand taller than others, we could bring both surfaces close together such that only those droplets make contact with their counterparts while sparing the rest. How this was actually achieved, however, was a bit trickier.

The researchers had previously attempted to use electricity to control the "wettability" of the dielectric material in the area below each droplet. This approach, known as "electrowetting-on-dielectric (EWOD)," lets one slightly alter the balance of forces that holds a water droplet together when resting on a surface. By applying an electric voltage under the droplet, it is possible to make it spread out slightly, increasing its area and reducing its height. However, the team found that this process was not easily reversible, as droplets would not spontaneously recover their original height once the voltage was turned off.
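The standard way to quantify this effect is the Lippmann-Young relation, in which the applied voltage lowers the apparent contact angle and therefore the height of a droplet of fixed volume. The Python sketch below uses generic values for the dielectric layer, surface tension and droplet volume, and it ignores contact-angle saturation as well as the reversibility problem described next, so it illustrates the physics rather than modeling the published device.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def contact_angle_deg(V, theta0_deg=110.0, eps_r=3.0, d=1e-6, gamma=0.072):
    """Lippmann-Young relation: cos(theta) = cos(theta0) + eps0*eps_r*V^2 / (2*gamma*d)."""
    cos_t = np.cos(np.radians(theta0_deg)) + EPS0 * eps_r * V**2 / (2 * gamma * d)
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

def droplet_height_m(theta_deg, volume_m3=1e-9):
    """Height of a sessile droplet of fixed volume (here 1 microliter) modeled as a spherical cap."""
    t = np.radians(theta_deg)
    # spherical-cap volume: V = (pi/3) * R^3 * (2 - 3*cos t + cos^3 t); cap height h = R * (1 - cos t)
    R = (3 * volume_m3 / (np.pi * (2 - 3 * np.cos(t) + np.cos(t) ** 3))) ** (1 / 3)
    return R * (1 - np.cos(t))

for V in (0, 30, 60):  # volts applied to the electrode under the droplet
    theta = contact_angle_deg(V)
    print(f"{V:3d} V -> contact angle {theta:5.1f} deg, droplet height {droplet_height_m(theta) * 1e3:.2f} mm")
```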

To tackle this problem, they developed an EWOD electrode with a hydrophilic-hydrophobic pattern. When the electrode is turned on, the previously described process makes the droplet on top of it spread out and become shorter. Conversely, when the electrode is turned off, the outer hydrophobic part of the electrode repels the droplet while the inner hydrophilic part attracts it. This restores the original shape, and height, of the droplet!

The researchers showcased their method by laying out multiple EWOD electrodes on the bottom surface of a droplet-array sandwiching platform. By simply applying voltage to selected electrodes, they could easily choose which pairs of droplets came into contact when the top platform was lowered. In their demonstration, they transferred red dye from the top droplets to only some of the bottom droplets. "Our approach can be used to electrically set up individual contacts between droplets, allowing us to effortlessly control the concentration of chemicals in these droplets or even transfer living cells from one to another," explains Prof. Konishi.

This study paves the way for the potentially fruitful combination of droplet-handling techniques and automation. "We envision that lab-on-chip technology using droplets will replace conventional manual operations using tools such as pipettes, thereby improving the efficiency of drug screening. In turn, this will accelerate the process of drug discovery," highlights Prof. Konishi. He adds that culturing cells in hanging droplets, which has been used in the field of cell biology, will also make cell-based evaluation of drugs and chemicals cheaper and faster, representing a valuable tool for biochemistry and cell biology.

Let us hope the fruits of this technology "drop" just around the corner!

Credit: 
Ritsumeikan University

Study: Impulsiveness tied to faster eating in children, can lead to obesity

BUFFALO, N.Y. -- Children who eat slower are less likely to be extroverted and impulsive, according to a new study co-led by the University at Buffalo and Children's Hospital of Philadelphia.

The research, which sought to uncover the relationship between temperament and eating behaviors in early childhood, also found that kids who were highly responsive to external food cues (the urge to eat when food is seen, smelled or tasted) were more likely to experience frustration and discomfort and have difficulties self-soothing.

These findings are critical because faster eating and greater responsiveness to food cues have been linked to obesity risk in children, says Myles Faith, PhD, co-author and professor of counseling, school and educational psychology in the UB Graduate School of Education.

The research, published in June in Pediatric Obesity, supports the integration of temperament into studies of and treatment for childhood obesity, a connection Faith deemed in need of further exploration in a previous study he co-led.

"Temperament is linked to many child developmental and behavioral outcomes, yet despite emerging evidence, few studies have examined its relationship with pediatric obesity," said co-lead investigator Robert Berkowitz, MD, emeritus professor at the University of Pennsylvania and director of the Weight and Eating Disorders Research Program at Children's Hospital of Philadelphia.

Co-lead investigator Alyssa Button, doctoral candidate in the UB Graduate School of Education, is the first author.

The researchers surveyed 28 participants beginning a family intervention program to reduce eating speed among 4- to 8-year-old children with or at risk for obesity.

The study examined the associations between three eating behaviors and three facets of temperament. The eating behaviors included responsiveness to feeling full (internal food cues); responsiveness to seeing, smelling and tasting food (external food cues); and eating speed. Temperament consisted of extroversion and impulsivity (also known as surgency); self-control; and the inability to self-soothe negative emotions such as anger, fear and sadness.

Among the findings is that children who respond well to feeling full exhibit more self-control. More research is needed to understand the role parents play in their children's temperament and eating behavior, says Button.

"Parents may use food to soothe temperamental children and ease negative emotions," says Button, also a senior research support specialist in the Department of Pediatrics in the Jacobs School of Medicine and Biomedical Sciences at UB. "Future research should examine the different ways parents feed their children in response to their temperament, as well as explore whether the relationship between temperament and eating behaviors is a two-way street. Could the habit of eating slower, over time, lead to lower impulsiveness?"

"This study established relationships between temperament and eating patterns in children; however, there is still the question of chicken-and-egg and which comes first?" says Faith. "Research that follows families over time is needed to untangle these developmental pathways."

Credit: 
University at Buffalo

Quantum particles: Pulled and compressed

image: The quantum motion of a nanoparticle can be extended beyond the size of the particle using the new technique developed by physicists in Austria.

Image: 
Marc Montagut

Very recently, researchers led by Markus Aspelmeyer at the University of Vienna and Lukas Novotny at ETH Zurich cooled a glass nanoparticle into the quantum regime for the first time. To do this, the particle is deprived of its kinetic energy with the help of lasers. What remains are movements, so-called quantum fluctuations, which no longer follow the laws of classical physics but those of quantum physics. The glass sphere with which this has been achieved is significantly smaller than a grain of sand, but still consists of several hundred million atoms. In contrast to the microscopic world of photons and atoms, nanoparticles provide an insight into the quantum nature of macroscopic objects. In collaboration with experimental physicist Markus Aspelmeyer, a team of theoretical physicists led by Oriol Romero-Isart of the University of Innsbruck and the Institute of Quantum Optics and Quantum Information of the Austrian Academy of Sciences is now proposing a way to harness the quantum properties of nanoparticles for various applications.

Briefly delocalized

"While atoms in the motional ground state bounce around over distances larger than the size of the atom, the motion of macroscopic objects in the ground state is very, very small," explain Talitha Weiss and Marc Roda-Llordes from the Innsbruck team. "The quantum fluctuations of nanoparticles are smaller than the diameter of an atom." To take advantage of the quantum nature of nanoparticles, the wave function of the particles must be greatly expanded. In the Innsbruck quantum physicists' scheme, nanoparticles are trapped in optical fields and cooled to the ground state. By rhythmically changing these fields, the particles now succeed in briefly delocalizing over exponentially larger distances. "Even the smallest perturbations may destroy the coherence of the particles, which is why by changing the optical potentials, we only briefly pull apart the wave function of the particles and then immediately compress it again," explains Oriol Romero-Isart. By repeatedly changing the potential, the quantum properties of the nanoparticle can thus be harnessed.

Many applications

With the new technique, the macroscopic quantum properties can be studied in more detail. It also turns out that this state is very sensitive to static forces. Thus, the method could enable highly sensitive instruments that can be used to determine forces such as gravity very precisely. Using two particles expanded and compressed simultaneously by this method, it would also be possible to entangle them via a weak interaction and explore entirely new areas of the macroscopic quantum world.

Together with other proposals, the new concept forms the basis for the ERC Synergy Grant project Q-Xtreme, which was granted last year. In this project, the research groups of Markus Aspelmeyer and Oriol Romero-Isart, together with Lukas Novotny and Romain Quidant of ETH Zurich, are pushing one of the most fundamental principles of quantum physics to the extreme limit by positioning a solid body of billions of atoms in two places at the same time.

Credit: 
University of Innsbruck

Reducing the melting of the Greenland ice cap using solar geoengineering?

Injecting sulphur into the stratosphere to reduce solar radiation and stop the Greenland ice cap from melting. An interesting scenario, but not without risks. Climatologists from the University of Liège have looked into the matter and have tested one of the scenarios put forward using the MAR climate model developed at the University of Liège. The results are mixed and have been published in the journal The Cryosphere.

The Greenland ice sheet will lose mass at an accelerated rate throughout the 21st century, with a direct link between anthropogenic greenhouse gas emissions and the extent of Greenland's mass loss. To combat this phenomenon, and therefore global warming, it is essential to reduce our greenhouse gas emissions. Every day new ideas emerge to slow down global warming, such as the use of solar geoengineering, a climate intervention that consists of artificially reducing solar radiation above the ice caps and thus limiting the melting of the ice. How can this be done? "The idea is to inject sulphur into the stratosphere, a stable meteorological zone located between 8 and 15 km above sea level," explains Xavier Fettweis, climatologist and director of the Climatology Laboratory at ULiège. "The sulphur then acts as a sort of mirror that reflects part of the solar radiation back into space." Such an intervention would reduce the amount of sunshine reaching the Earth, similar to what happens during volcanic eruptions. In 1991, the eruption of Pinatubo (Philippines) injected millions of tonnes of sulphur dioxide into the stratosphere, causing a drop in global temperatures of around 0.5°C. This observation led to the development of solar geoengineering scenarios. Are these scenarios really reliable and risk-free? This is what the ULiège climatologists wanted to test.

We used a plausible scenario of solar geoengineering (G6solar) that would reduce global warming by a factor of 2 on a global scale compared with the most pessimistic scenario in which nothing would be done about the climate," continues Xavier Fettweis. By forcing the MAR (Regional Atmospheric Model) developed at ULiège to use this scenario, we show that the reduction in solar radiation associated with this scenario would make it possible to locally reduce the melting at the surface of the Greenland ice sheet by 6% in addition to the global reduction in global warming. While these results seem encouraging, the researchers insist that this type of scenario would not be sufficient to maintain the ice cap in a stable state by the end of this century. Moreover, this type of intervention is not without risk since it could have a significant impact on the ozone layer and on water cycles and precipitation, accentuating the disparities between wet and dry regions. Only solar geoengineering scenarios, which are much more ambitious but becoming unrealistic and dangerous, would make it possible to save the cap," concludes Xavier Fettweis. We are talking here about human and intentional intervention in the climate. A plan B that is not! It is therefore urgent to drastically reduce our greenhouse gas emissions by means that we know but are struggling to implement.

Credit: 
University of Liège

Machine learning tool sorts the nuances of quantum data

ITHACA, N.Y. - An interdisciplinary team of Cornell and Harvard University researchers developed a machine learning tool to parse quantum matter and make crucial distinctions in the data, an approach that will help scientists unravel the most confounding phenomena in the subatomic realm.

The Cornell-led project's paper, "Correlator Convolutional Neural Networks as an Interpretable Architecture for Image-like Quantum Matter Data," published June 23 in Nature Communications. The lead author is doctoral student Cole Miles.

The Cornell team was led by Eun-Ah Kim, professor of physics in the College of Arts and Sciences, who partnered with Kilian Weinberger, associate professor of computing and information science in the Cornell Ann S. Bowers College of Computing and Information Science and director of the TRIPODS Center for Data Science for Improved Decision Making.

The collaboration with the Harvard team, led by physics professor Markus Greiner, is part of the National Science Foundation's 10 Big Ideas initiative, "Harnessing the Data Revolution." Their project, "Collaborative Research: Understanding Subatomic-Scale Quantum Matter Data Using Machine Learning Tools," seeks to address fundamental questions at the frontiers of science and engineering by pairing data scientists with researchers who specialize in traditional areas of physics, chemistry and engineering.

The project's central aim is to find ways to extract new information about quantum systems from snapshots of image-like data. To that end, they are developing machine learning tools that can identify relationships among microscopic properties in the data that otherwise would be impossible to determine at that scale.

Convolutional neural networks, a kind of machine learning often used to analyze visual imagery, scan an image with a filter to find characteristic features in the data irrespective of where they occur - a step called "convolution." The convolution is then sent through nonlinear functions that make the convolutional neural networks learn all sorts of correlations among the features.

Now, the Cornell group has improved upon that approach by creating an "interpretable architecture," called Correlator Convolutional Neural Networks (CCNN), that allows the researchers to track which particular correlations matter the most.
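As a rough illustration of what "tracking which correlations matter" means, the toy Python snippet below convolves a synthetic snapshot with a single filter and separates the first-order (single-site) and second-order (two-site product) contributions, so each output has a definite correlation order instead of passing through an opaque nonlinearity. It is a schematic of the idea, not the published CCNN architecture, and the filter and data are random placeholders.

```python
import numpy as np

def low_order_correlators(snapshot, filt):
    """Average first- and second-order correlator features of one snapshot for one filter."""
    k = filt.shape[0]
    patches = np.lib.stride_tricks.sliding_window_view(snapshot, (k, k))  # (H', W', k, k) windows
    weighted = (patches * filt).reshape(*patches.shape[:2], -1)           # filter-weighted sites
    first = weighted.sum(axis=-1).mean()
    # sum over products of distinct site pairs equals ((sum x)^2 - sum x^2) / 2
    second = ((weighted.sum(-1) ** 2 - (weighted ** 2).sum(-1)) / 2).mean()
    return first, second

rng = np.random.default_rng(0)
snapshot = (rng.random((16, 16)) < 0.5).astype(float)   # fake occupation/spin snapshot
filt = rng.normal(size=(3, 3))                          # stand-in for a learned filter
print(low_order_correlators(snapshot, filt))
```

In the published architecture, learned filters play this role across several correlation orders at once, which is what lets the team read off which order of correlations drives a classification.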

"Convolutional neural networks are versatile," Kim said. "However, the versatility that comes from the nonlinearity makes it difficult to figure out how the neural network used a particular filter to make its decision, because nonlinear functions are hard to track. That's why weather prediction is difficult. It's a very nonlinear system."

To test CCNN, the Harvard team employed quantum gas microscopy to simulate a fermionic Hubbard model - often used to demonstrate how quantum particles interact in a lattice, and also the many unresolved questions that are raised as a result.

"Quantum mechanics is probabilistic, but you cannot learn probability from one measurement, you have to repeat many measurements," Kim said. "From the Schrödinger's cat perspective, we have a whole collection of atoms, a collection of live or dead cats. And each time we make a projective measurement, we have some dead cats and some live cats. And from that we're trying to understand what state the system is in, and the system is trying to simulate fundamental models that hold keys to understanding mysterious phenomena, such as high-temperature superconductivity."

The Harvard team generated synthetic data for two states that are difficult to tell apart: geometric string theory and pi-flux theory. In geometric string theory, the system verges on an antiferromagnetic order, in which the electron spins form a kind of anti-alignment - i.e., up, down, up, down, up, down - that is disrupted when an electron hole starts to move at a different timescale. In pi-flux theory, the spins form pairs, called singlets, that begin to flip and flop around when a hole is introduced, resulting in a scrambled state.

CCNN was able to distinguish between the two simulations by identifying correlations in the data to the fourth order.

By repeating this exercise, the CCNN learns which features in the image are essential for the neural network to make a decision - a process that Kim compares to the choices made by people boarding a lifeboat.

"You know when a big ship is about to sink, and people are told, OK, you can only bring one personal item," Kim said. "That will show what's in their hearts. It could be a wedding ring, it could be a trash can. You never know. We're forcing the neural network to choose one or two features that help it the most in coming up with the right assessment. And by doing so we can figure out what are the critical aspects, the core essence, of what defines a state or phase."

The approach can be applied to other scanning probe microscopies that generate image-type data on quantum materials, as well as programmable quantum simulators. The next step, according to Kim, is to incorporate a form of unsupervised machine learning that can offer a more objective perspective, one that is less influenced by the decisions of researchers handpicking which samples to compare.

Kim sees researchers like her student and lead author Cole Miles as representing the next generation that will meld these cutting-edge and traditional approaches even further to drive new scientific discovery.

"More conservative people are skeptical of new and shiny things," Kim said. "But I think that balance and synergy between classic and the new and shiny can lead to nontrivial and exciting progress. And I think of our paper as an example of that."

Credit: 
Cornell University

Scientists use artificial intelligence to detect gravitational waves

image: Scientific visualization of a numerical relativity simulation that describes the collision of two black holes consistent with the binary black hole merger GW170814. The simulation was done on the Theta supercomputer using the open source, numerical relativity, community software Einstein Toolkit (https://einsteintoolkit.org/).

Image: 
(Image by Argonne Leadership Computing Facility, Visualization and Data Analytics Group [Janet Knowles, Joseph Insley, Victor Mateevitsi, Silvio Rizzi].)

When gravitational waves were first detected in 2015 by the advanced Laser Interferometer Gravitational-Wave Observatory (LIGO), they sent a ripple through the scientific community, as they confirmed another of Einstein’s theories and marked the birth of gravitational wave astronomy. Five years later, numerous gravitational wave sources have been detected, including the first observation of two colliding neutron stars in gravitational and electromagnetic waves.

As LIGO and its international partners continue to upgrade their detectors’ sensitivity to gravitational waves, they will be able to probe a larger volume of the universe, thereby making the detection of gravitational wave sources a daily occurrence. This discovery deluge will launch the era of precision astronomy that takes into consideration extrasolar messenger phenomena, including electromagnetic radiation, gravitational waves, neutrinos and cosmic rays. Realizing this goal, however, will require a radical re-thinking of existing methods used to search for and find gravitational waves.

Recently, Eliu Huerta, computational scientist and lead for translational artificial intelligence (AI) at the U.S. Department of Energy's (DOE) Argonne National Laboratory, working with collaborators from Argonne, the University of Chicago, the University of Illinois at Urbana-Champaign, NVIDIA and IBM, has developed a new production-scale AI framework that allows for accelerated, scalable and reproducible detection of gravitational waves.

This new framework indicates that AI models could be as sensitive as traditional template matching algorithms, but orders of magnitude faster. Furthermore, these AI algorithms would only require an inexpensive graphics processing unit (GPU), like those found in video gaming systems, to process advanced LIGO data faster than real time.

The AI ensemble used for this study processed an entire month — August 2017 — of advanced LIGO data in less than seven minutes, distributing the dataset over 64 NVIDIA V100 GPUs. The ensemble identified all four binary black hole mergers previously reported in that dataset and flagged no misclassifications.
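Taken at face value, those numbers imply roughly the following throughput (treating "less than seven minutes" as exactly seven for the estimate):

```python
month_seconds = 31 * 24 * 3600      # August 2017 of detector data, in seconds
wall_seconds = 7 * 60               # quoted processing time
gpus = 64

overall_speedup = month_seconds / wall_seconds   # ~6,400x faster than real time for the ensemble
per_gpu_speedup = overall_speedup / gpus         # ~100x faster than real time per V100 GPU
print(f"~{overall_speedup:,.0f}x real time overall, ~{per_gpu_speedup:.0f}x per GPU")
```

That back-of-the-envelope figure is consistent with the claim that even a single inexpensive GPU can keep up with the advanced LIGO data stream faster than real time.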

“As a computer scientist, what’s exciting to me about this project,” said Ian Foster, director of Argonne’s Data Science and Learning (DSL) division, “is that it shows how, with the right tools, AI methods can be integrated naturally into the workflows of scientists — allowing them to do their work faster and better — augmenting, not replacing, human intelligence.”

Bringing disparate resources to bear, this interdisciplinary and multi-institutional team of collaborators has published a paper in Nature Astronomy showcasing a data-driven approach that combines the team’s collective supercomputing resources to enable reproducible, accelerated, AI-driven gravitational wave detection.

“In this study, we’ve used the combined power of AI and supercomputing to help solve timely and relevant big-data experiments. We are now making AI studies fully reproducible, not merely ascertaining whether AI may provide a novel solution to grand challenges,” Huerta said.

Building upon the interdisciplinary nature of this project, the team looks forward to new applications of this data-driven framework beyond big-data challenges in physics.

“This work highlights the significant value of data infrastructure to the scientific community,” said Ben Blaiszik, a research scientist at Argonne and the University of Chicago. “The long-term investments that have been made by DOE, the National Science Foundation (NSF), the National Institutes of Standards and Technology and others have created a set of building blocks. It is possible for us to bring these building blocks together in new and exciting ways to scale this analysis and to help deliver these capabilities to others in the future.”

Huerta and his research team developed their new framework through the support of the NSF, Argonne’s Laboratory Directed Research and Development (LDRD) program and DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program.

“These NSF investments contain original, innovative ideas that hold significant promise of transforming the way scientific data arriving in fast streams are processed. The planned activities are bringing accelerated and heterogeneous computing technology to many scientific communities of practice,” said Manish Parashar, director of the Office of Advanced Cyberinfrastructure at NSF.

Credit: 
DOE/Argonne National Laboratory

University of Maryland researchers record brainwaves to measure 'cybersickness'

image: A test subject experiences a potentially stomach-churning virtual reality fly-through of a space station while her brain activity is monitored.

Image: 
Maryland Blended Reality Center

If a virtual world has ever left you feeling nauseous or disorientated, you're familiar with cybersickness, and you're hardly alone. The intensity of virtual reality (VR)--whether that's standing on the edge of a waterfall in Yosemite or engaging in tank combat with your friends--creates a stomach-churning challenge for 30-80% of users.

In a first-of-its kind study, researchers at the University of Maryland recorded VR users' brain activity using electroencephalography (EEG) to better understand and work toward solutions to prevent cybersickness. The research was conducted by Eric Krokos, who received his Ph.D. in computer science in 2018, and Amitabh Varshney, a professor of computer science and dean of UMD's College of Computer, Mathematical, and Natural Sciences.

Their study, "Quantifying VR cybersickness using EEG," was recently published in the journal Virtual Reality.

The term cybersickness derives from motion sickness, but instead of physical movement, it's the perception of movement in a virtual environment that triggers physical symptoms such as nausea and disorientation. While there are several theories about why it occurs, the lack of a systematic, quantified way of studying cybersickness has hampered progress that could help make VR accessible to a broader population.

Krokos and Varshney are among the first to use EEG--which records brain activity through sensors on the scalp--to measure and quantify cybersickness for VR users. They were able to establish a correlation between the recorded brain activity and self-reported symptoms of their participants. The work provides a new benchmark--helping cognitive psychologists, game developers and physicians as they seek to learn more about cybersickness and how to alleviate it.

"Establishing a strong correlation between cybersickness and EEG-measured brain activity is the first step toward interactively characterizing and mitigating cybersickness, and improving the VR experience for all," Varshney said.

EEG headsets have been widely used to measure motion sickness, yet prior research on cybersickness has relied on users to accurately recall their symptoms through questionnaires filled out after users have removed their headsets and left the immersive environment.

The UMD researchers said that such methods provide only qualitative data, making it difficult to assess in real time which movements or attributes of the virtual environment are affecting users.

Another complication is that not all people suffer from the same physical symptoms when experiencing cybersickness, and cybersickness may not be the only cause of these symptoms.

Without the existence of a reliable tool to measure and interactively quantify cybersickness, understanding and mitigating it remains a challenge, said Varshney, a leading researcher in immersive technologies and co-director of the Maryland Blended Reality Center.

For the UMD study, participants were fitted with both a VR headset and an EEG recording device, then experienced a minute-long virtual fly-through of a futuristic spaceport. The simulation included quick drops and gyrating turns designed to evoke a moderate degree of cybersickness.

Participants also self-reported their level of discomfort in real time with a joystick. This helped the researchers identify which segments of the fly-through intensified users' symptoms.
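A minimal sketch of the kind of analysis this setup enables is shown below: correlating an EEG-derived time series with the joystick ratings collected during the fly-through. The signals are synthetic stand-ins (the frequency band, sampling rate and waveform are all assumptions), so the code only illustrates the bookkeeping, not the study's actual analysis.

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(0.0, 60.0, 0.5)                                         # one sample every 0.5 s of the fly-through
sickening_segments = np.clip(np.sin(2 * np.pi * t / 20.0), 0, None)   # stand-in for the drops and turns

joystick_discomfort = sickening_segments + 0.2 * rng.normal(size=t.size)   # self-reported discomfort
eeg_band_power = 0.8 * sickening_segments + 0.3 * rng.normal(size=t.size)  # e.g. a band-power trace

r = np.corrcoef(joystick_discomfort, eeg_band_power)[0, 1]
print(f"Pearson correlation between EEG feature and self-reported discomfort: r = {r:.2f}")
```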

Credit: 
University of Maryland

Wastewater disposal did not significantly alter seismic stress direction in southern Kansas

Although wastewater disposal has been the primary driving force behind increased earthquake activity in southern Kansas since 2013, a new study concludes that the disposal has not significantly changed the orientation of stress in the Earth's crust in the region.

Activities like wastewater disposal can alter the pressure, shape and size of pores within rock layers, in ways that cause nearby faults to fail in an earthquake. These effects are thought to be behind most recent induced earthquakes in the central and eastern United States.

It is possible, however, that human activity could also lead to earthquakes by altering the orientation of stresses that act on faults in the region, said U.S. Geological Survey seismologist Robert Skoumal, who co-authored the study in Seismological Research Letters with USGS seismologist Elizabeth Cochran.

"Since we do not see evidence for a significant stress rotation [in the region], we think most of the earthquakes in southern Kansas are due to changes in pore pressures or porelastic effects rather than due to stress rotations," Skoumal said.

One way researchers can learn about the orientation of the stress field in fractured, fluid-filled rock layers is through a seismic effect called shear wave splitting. Shear waves whose ground motion is aligned parallel to open fractures travel faster, while waves with motion perpendicular to the fractures travel more slowly. Estimating the direction of the fast waves helps determine the orientation of the stress field.
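As a rough illustration of how a fast direction can be estimated from two horizontal seismogram components, the sketch below runs a rotation-and-cross-correlation grid search over trial fast angles and delay times. It is a generic, minimal shear-wave-splitting sketch on synthetic data, not the processing used by Skoumal and Cochran.

```python
import numpy as np

def estimate_fast_direction(east, north, max_lag=40):
    """Grid search for the fast polarization angle (degrees from north) and the
    fast-slow delay (samples) that maximize the cross-correlation between the
    rotated components."""
    best = (-np.inf, None, None)
    for angle_deg in range(180):
        phi = np.radians(angle_deg)
        # Rotate north/east into trial fast/slow components.
        fast = np.cos(phi) * north + np.sin(phi) * east
        slow = -np.sin(phi) * north + np.cos(phi) * east
        for lag in range(1, max_lag):
            cc = np.corrcoef(fast[:-lag], slow[lag:])[0, 1]
            if cc > best[0]:
                best = (cc, angle_deg, lag)
    return best  # (correlation, fast direction in degrees, delay in samples)

# Synthetic split shear wave: fast axis at 60 degrees, slow wave delayed 12 samples.
t = np.arange(512)
pulse = np.exp(-((t - 200) / 20.0) ** 2) * np.sin(t / 8.0)
phi = np.radians(60)
fast, slow = pulse, np.roll(pulse, 12)
north = np.cos(phi) * fast - np.sin(phi) * slow
east = np.sin(phi) * fast + np.cos(phi) * slow
print(estimate_fast_direction(east, north))   # recovers roughly (1.0, 60, 12)
```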

A previous shear wave splitting study in southern Kansas estimated a 90-degree rotation in the fast direction beginning in 2015, which the study authors attributed to elevated pore fluid pressures from wastewater disposal. However, the rotation coincided with a change in the stations used to observe the shear waves.

Skoumal and Cochran decided to take another look at stress changes in southern Kansas as part of a larger effort to characterize stress in rock reservoirs without drilling expensive boreholes. When they analyzed shear wave splitting using high-quality data collected from a stable local seismic network, they found that the regional stress orientation remained relatively constant between 2014 and 2017.

The geological conditions of a wastewater reservoir might affect whether injection can alter stress orientation, Skoumal noted. Most of the wastewater injected in southern Kansas went into rock layers called the Arbuckle Group, "which is underpressurized--fluid can be 'poured in' without the need of a pump," Skoumal said, noting that pore pressures can diffuse rapidly in the highly permeable rock.

There are no reports of significant stress rotations due to wastewater disposal, the authors note, suggesting that such rotations may not be common, may be smaller than current methods can detect, or simply have not been studied enough. Until recently, seismic instrumentation has been sparse in many places that have experienced a large increase in wastewater disposal over the past decade.

"Documenting stress orientations is already challenging in these regions, and characterizing changes in those stresses over time is an even greater challenge," Skoumal said. "In the areas where we have looked though, we haven't seen compelling evidence for significant stress rotations due to wastewater disposal."

Credit: Seismological Society of America

A biological fireworks show 300 million years in the making

Image caption: Frog eggs like those pictured here release zinc when fertilized, much like mammalian eggs do. (Image by Tero Laakso, licensed under CC BY-SA 2.0.)

Five years ago, researchers at Northwestern University made international headlines when they discovered that human eggs, when fertilized by sperm, release billions of zinc ions, dubbed “zinc sparks.”

Now, Northwestern has teamed up with the U.S. Department of Energy’s (DOE) Argonne National Laboratory and Michigan State University (MSU) to reveal that these same sparks fly from highly specialized metal-loaded compartments at the egg surface when frog eggs are fertilized. This means that the early chemistry of conception has evolutionary roots going back at least 300 million years, to the last common ancestor between frogs and people.

And the research has implications beyond this shared biology and deep-rooted history. It could also help shape future findings about how metals impact the earliest moments in human development.

“This work may help inform our understanding of the interplay of dietary zinc status and human fertility,” said Thomas O’Halloran, the senior author of the research paper published June 21 in the journal Nature Chemistry.

O’Halloran was part of the original zinc spark discovery at Northwestern and, earlier this year, he joined Michigan State as a foundation professor of microbiology and molecular genetics and chemistry. O’Halloran was the founder of Northwestern’s Chemistry of Life Processes Institute, or CLP, and remains a member.

The team also discovered that fertilized frog eggs eject another metal, manganese, in addition to zinc. It appears these ejected manganese ions collide with sperm surrounding the fertilized egg and prevent them from entering.

“These breakthroughs support an emerging picture that transition metals are used by cells to regulate some of the earliest decisions in the life of an organism,” O’Halloran said.

To make these discoveries, the team needed access to some of the most powerful microscopes in the world as well as expertise that spanned chemistry, biology and X-ray physics. That unique combination included collaborators at the Center for Quantitative Element Mapping for the Life Sciences, or QE-Map, an interdisciplinary National Institutes of Health-funded research hub at MSU and Northwestern’s CLP. The research relied heavily on the tools and expertise available at Argonne.

The research team brought sections of frog eggs and embryos to Argonne for analysis. Using both X-ray and electron microscopy, the researchers determined the identity, concentrations and intracellular distributions of metals both before and after fertilization.

X-ray fluorescence microscopy was conducted at beamline 2-ID-D of the Advanced Photon Source (APS), a DOE Office of Science User Facility at Argonne. Barry Lai, group leader at Argonne and an author on the paper, said that the X-ray analysis quantified the amount of zinc, manganese and other metals concentrated in small pockets around the outer layer of the eggs. These pockets contained more than 30 times as much manganese as the rest of the egg, and 10 times as much zinc.
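A back-of-the-envelope way to express that comparison is an enrichment ratio: the mean concentration inside the pocket pixels of an elemental map divided by the mean over the rest of the section. The short sketch below illustrates the idea on synthetic data; the array and function names are hypothetical and do not reflect the published analysis.

```python
import numpy as np

def enrichment_ratio(concentration_map, pocket_mask):
    """Mean elemental concentration inside the masked pockets divided by
    the mean concentration over the rest of the mapped area."""
    return concentration_map[pocket_mask].mean() / concentration_map[~pocket_mask].mean()

# Toy example: a 100x100 map whose pocket pixels carry about 30x the background Mn.
rng = np.random.default_rng(1)
mn_map = rng.normal(1.0, 0.05, (100, 100))
pocket = np.zeros((100, 100), dtype=bool)
pocket[10:12, 40:60] = True        # a thin band standing in for a surface pocket
mn_map[pocket] *= 30
print(round(enrichment_ratio(mn_map, pocket), 1))   # roughly 30
```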

“We are able to do this analysis because of the elemental sensitivity of the beamline,” Lai said. “In fact, it is so sensitive that substantially lower concentrations can be measured.”

Complementary scans were conducted using transmission electron microscopy at the Center for Nanoscale Materials (CNM), a DOE Office of Science User Facility at Argonne. Further analysis was performed on a separate prototype scanning transmission electron microscope that includes technology developed by Argonne Senior Scientist Nestor Zaluzec, an author on the paper. These scans were performed at smaller scales — down to a few nanometers, about 100,000 times smaller than the width of a human hair — but found the same results: high concentrations of metals in pockets around the outer layer.

Both X-ray and electron microscopy showed that the metals in these pockets were almost completely released after fertilization.

“Argonne has the tools necessary to examine these biological samples at these scales without destroying them with X-rays or electrons,” Zaluzec said. “It’s a combination of the right resources and the right expertise.”

The APS is in the process of undergoing a massive upgrade, one that will increase the brightness of its X-ray beams by up to 500 times. Lai said that an upgraded APS could complete these scans much more quickly or with higher spatial resolution. What took more than an hour for this research could be done in less than one minute after the upgrade, Lai said.

“We often think of genes as key regulating factors, but our work has shown that atoms like zinc and manganese are critical to the first steps in development after fertilization,” said MSU Provost Teresa K. Woodruff, Ph.D., another senior author on the paper.

Woodruff, an MSU foundation professor and former member of CLP, was also a leader of the Northwestern team that discovered zinc sparks five years ago. With the discovery of manganese sparks in African clawed frogs, or Xenopus laevis, the team is excited to explore whether the element is released by human eggs when fertilized.

“These discoveries could only be made by interdisciplinary groups, fearlessly looking into fundamental steps,” she said. “Working across disciplines at the literal edge of technology is one of the most profound ways new discoveries take place.”

“Xenopus is a perfect system for such studies because their eggs are an order of magnitude larger than human or mouse eggs, and are accessible in large numbers,” said Carole LaBonne, another senior author on the study, a CLP member and chair of the Department of Molecular Biosciences at Northwestern. “The discovery of zinc and manganese sparks is exciting, and suggests there may be other fundamental signaling roles for these transition metals.”

Credit: DOE/Argonne National Laboratory