Scientists find ways to improve cassava, a 'crop of inequality' featured at Goalkeepers

video: A recent modeling study suggests that the tropical root crop cassava could be more productive with less water if the microscopic pores on its leaves could open three times faster during the transition from shade to light, allowing more carbon dioxide to enter the plant to be fixed into sugars during photosynthesis.

Image: 
RIPE Project

Today, as world leaders gather for the UN General Assembly, hundreds of emerging leaders focused on fighting global inequality came together at the Bill & Melinda Gates Foundation's third annual Goalkeepers event in New York City. Among them was University of Illinois scientist Amanda De Souza, who highlighted cassava, a 'crop of inequality' whose starchy, tuberous roots sustain more than 500 million people in sub-Saharan Africa, yet which has been largely neglected by research and development compared to the staple crops of wealthier regions. Recently, De Souza and a team from Realizing Increased Photosynthetic Efficiency (RIPE) published a study in New Phytologist that identified opportunities to improve cassava yields--which have not increased for more than fifty years in Africa.

"For smallholder farmers who depend on tiny plots of land to feed and support their families, cassava is a 'backup' crop when other crops fail," De Souza said at Goalkeepers, where she described her work to improve cassava through the RIPE project. "Especially for women, who represent a majority of smallholder farmers, cassava is a savings account. It is a resource they can harvest all year to pay for things like medical treatments and their children's school fees."

The RIPE project is an international effort to develop more productive crops by improving photosynthesis--the natural, sunlight-powered process that all plants use to fix carbon dioxide into carbohydrates that fuel growth, development, and ultimately yields. RIPE is supported by the Gates Foundation, the U.S. Foundation for Food and Agriculture Research (FFAR), and the U.K. Government's Department for International Development (DFID).

Led by RIPE researchers at Illinois and Lancaster University, this study examined factors that limit photosynthesis in 11 popular, or farmer-preferred, African varieties of cassava, with the goal of eventually helping cassava overcome photosynthetic limitations to boost yields.

First, the team examined the photosynthetic limitations of cassava exposed to constant high light, as a plant would experience at midday under cloudless skies. In these conditions, and like many crops, cassava's photosynthesis is limited (by as much as 80 percent) by two factors: half of the limitation is due to the slow speed at which carbon dioxide molecules travel through the leaf to reach Rubisco, the enzyme that drives photosynthesis. The other half arises because Rubisco sometimes fixes oxygen molecules by mistake, wasting large amounts of the plant's energy.

Next, the team evaluated the limitations of photosynthesis under fluctuating light conditions. Surprisingly, and unlike most crops, Rubisco was not the primary limiting factor when leaves transitioned from shade to sunlight, like when the sun comes out from behind a cloud. Instead, cassava is limited by stomata, which are microscopic pores on the surface of leaves that open to allow carbon dioxide to enter the plant but at the cost of water that escapes through these same pores. Stomata are partially closed in the shade and open in response to light when Rubisco is active.

"Rubisco is the major limiting factor during this transition from shade to light for most plants, including rice, wheat, and soybean," De Souza said. "Cassava is the first crop that we have found where stomata limit photosynthesis during these light transitions more than Rubisco."

Illinois postdoctoral researcher Yu Wang created a computer model to quantify how much cassava would gain by overcoming this limitation. According to the leaf-level model, if stomata could open three times faster, cassava could fix 6 percent more carbon dioxide each day. In addition, cassava's water use efficiency--the ratio of biomass produced to water lost by the plant--could be improved by 16 percent.
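Taken together, the two modeled percentages imply a third number the article doesn't state: how much less water the plant would lose. A back-of-envelope check, assuming the water use efficiency definition given above (biomass produced per unit of water lost) and treating daily carbon fixation as a proxy for biomass:

```python
# Back-of-envelope check on the leaf-level model result (illustrative only).
# WUE = biomass / water lost, so if carbon fixation rises 6% while WUE
# rises 16%, the implied water loss must fall by roughly 1 - 1.06/1.16.

carbon_gain = 1.06   # +6% carbon dioxide fixed per day (from the model)
wue_gain = 1.16      # +16% water use efficiency (from the model)

water_ratio = carbon_gain / wue_gain          # new water loss / old water loss
water_saving_pct = (1 - water_ratio) * 100

print(f"Implied water saving: {water_saving_pct:.1f}%")  # about 8.6%
```

This is only a consistency check on the quoted figures, not a result from the study itself.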

In addition, the team found that it takes as long as 20 minutes for cassava to transition from shade to full light and reach the maximum rate of photosynthesis, which is quite slow compared to other crops such as rice that can transition in just a few minutes. However, the fastest variety of cassava could transition almost three times faster and fix 65 percent more carbon dioxide into carbohydrates than the slowest variety. Closing this gap is another opportunity to improve cassava's productivity.

"Plants are constantly moving from shade to light as leaves shift and clouds pass overhead," said RIPE Director Stephen Long, Ikenberry Endowed University Chair of Crop Sciences and Plant Biology at Illinois' Carl R. Woese Institute for Genomic Biology, who contributed to this study. "We hope that the variation that we discovered during these light transitions among cassava varieties can be used to identify new traits, and therefore opportunities for us to improve cassava's photosynthetic efficiency and yield potential."

Credit: 
Carl R. Woese Institute for Genomic Biology, University of Illinois at Urbana-Champaign

Ocean's key role in achieving climate goals

Earth's oceans are not simply a passive victim of climate change; they offer a previously unappreciated opportunity to reduce global greenhouse gas emissions, argue Ove Hoegh-Guldberg and colleagues in a Policy Forum. Acting on ocean-based emissions activities, they say, could deliver roughly a fifth of the reductions required to limit warming to 1.5 degrees Celsius by 2050. Two Intergovernmental Panel on Climate Change (IPCC) reports - including the upcoming special report on the ocean and cryosphere planned for 25 September - illustrate the immense impact global climate change is inflicting on the planet's oceans, as well as on the livelihoods of the billions of people who rely on ocean ecosystems for their wellbeing. The reports sound the alarm about the urgent need to rapidly decarbonize all sectors of the global economy. A third report that Hoegh-Guldberg and colleagues highlight, from the High-Level Panel for a Sustainable Ocean Economy (HLP), discusses the feasibility of reducing the emissions of ocean-based activities and the significant impact this could have. Based on the HLP report, the authors discuss several ways in which actionable changes could be implemented in the short term, and outline the research, technology, and policy development that would be required. The ocean areas ripest for mitigation include ocean-based renewable energy, marine shipping and transport, and coastal ecosystem restoration. Hoegh-Guldberg et al. suggest that addressing these activities could reduce global emissions by nearly 11 billion tons in 2050, an amount accounting for nearly 21% of the reductions required to limit warming to 1.5 degrees Celsius by that date.
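The closing figures also imply the total emissions reduction the authors treat as necessary for the 1.5-degree target. A quick arithmetic check using only the numbers quoted above (the rounding is mine):

```python
# Sanity-check the ocean-mitigation arithmetic from the Policy Forum summary.
ocean_reduction_gt = 11.0   # ~11 billion tons of reductions in 2050
share_of_required = 0.21    # ~21% of the total cuts needed for 1.5 degrees C

# Implied total reduction required by 2050 under these two figures:
total_required_gt = ocean_reduction_gt / share_of_required
print(f"Implied total required reduction: {total_required_gt:.0f} Gt")  # ~52 Gt
```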

Credit: 
American Association for the Advancement of Science (AAAS)

Tale of 2 climate crises gives clues to the present

image: Illustration of the KPB mass extinction, the PETM, and Anthropocene climate warming. (A) During the latest Maastrichtian, environmental devastation is mainly due to volcanism (ash, aerosols, and greenhouse gases), resulting in rapid climate changes, acid rain, and ocean acidification that is exacerbated by the Chicxulub impact, thus impeding calcification by marine plankton at the base of the food chain. (B) During the latest Paleocene to early Eocene, gradual climate warming preceding the PEB is attributed to North Atlantic Igneous Province (NAIP) volcanism, but the rapid warming of 5 °C (PETM) is linked to methane hydrates released from continental shelves, resulting in acid rain on land and ocean acidification (~170,000 years). (C) During the Anthropocene, large inputs of greenhouse gases (CO2, SO2, N2O) linked to human activities and fossil fuel burning lead to rapid warming and ocean acidification at rates exceeding those of the PETM and KPB by orders of magnitude. Global carbon budget data for the Anthropocene from Le Quéré et al. (2013). Illustration modified from Glikson (2014).

Image: 
Courtesy Paula Mateo

Phoenix, Arizona, USA: Figuring out what lies ahead for our species and our planet is one of the most pressing and challenging tasks for climate scientists. While models are very useful, there is nothing quite like Earth's history to reveal details about how oceans, animals, and plants respond to and recover from a warming world.

The two most recent major global warming events are especially instructive -- and worrisome, say scientists presenting new research Wednesday at the Annual Meeting of the Geological Society of America.

Ancient analogs

The two past climate crises that are comparable to today's happened 66 and 56 million years ago. The earlier one, the Cretaceous-Paleogene boundary (KPB) mass extinction, is infamous for ending the reign of the dinosaurs. The later event, the Paleocene-Eocene Thermal Maximum (PETM), was less severe and provides clues to how the world can recover from such difficult times.

"We chose these two because they are the most recent examples of rapid climate warming and have been widely studied so we have more information about them," said Paula Mateo, a geologist at Caltech, who will be presenting the study on Wednesday.

Both ancient global warming events were, like today's, caused by the release of greenhouse gases -- a.k.a. carbon emissions -- into the atmosphere. In the past, however, the source was not fossil fuel burning but very large, long-lasting volcanic eruptions -- unlike any that have occurred during the time humans have existed.

The geologic evidence suggests that the carbon emissions that preceded the dinosaurs' demise were at an average rate of about 0.2 to 3 gigatons per year. The PETM recorded carbon emissions of less than 1.1 gigatons per year, Mateo said. Those numbers are dwarfed by humanity's emission rate of 10 gigatons per year, she added.
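Putting the quoted rates side by side shows why the present-day pulse stands out; a small comparison using only the paragraph's own figures:

```python
# Compare the carbon-emission rates quoted in the article (gigatons per year).
deccan_rate_max = 3.0   # upper estimate for the eruptions before the KPB
petm_rate_max = 1.1     # upper bound quoted for the PETM
human_rate = 10.0       # present-day anthropogenic rate

print(f"vs fastest Deccan estimate: {human_rate / deccan_rate_max:.1f}x")  # ~3.3x
print(f"vs PETM upper bound: {human_rate / petm_rate_max:.1f}x")           # ~9.1x
```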

Dino killer?

The KPB mass extinction is often attributed solely to the Chicxulub meteor impact in Mexico, but a growing body of evidence suggests that the massive eruption of the Deccan Traps in India also played a role. That mega-eruption flowed across India in pulse after pulse, lasting about 750,000 years. By 280,000 years before the extinction event, the eruptions had warmed the oceans by 3 to 4 degrees Celsius, while on land the warming was 6 to 8 degrees C. Volcanic activity accelerated during the last 25,000 years before the mass extinction, Mateo said, steadily releasing more carbon dioxide into the atmosphere. Those pulses added another 2.5 degrees C to the global temperature.

"This series of mega-pulses didn't let the ecosystems adapt or even survive," Mateo said. Fossil evidence suggests that the warming and ocean acidification stressed life on land and in the oceans, eventually contributing to one of the five mass extinction events in the history of the planet. Microfossils of the oceans' foraminifers, which form part of the base of the marine food chain, show signs of struggle leading up to the end of the Cretaceous period: 66% of species went extinct at the KPB, 33% survived but rapidly disappeared during the first 100,000 years after the KPB, and only one species survived in the long term. On land, warming during the last 280,000 years of the Cretaceous appears to have started a decline in dinosaurs, as well as in early mammals, insects, and amphibians, well before the final mega-eruptions that ended with the KPB mass extinction.

Ocean-building event

The more recent PETM, for its part, was caused by the expansion of the North Atlantic Ocean basin. That involved a lot of magma rising up from below to become the new ocean crust. All that magma released a lot of carbon dioxide, which appears to have caused moderate warming that, in turn, triggered the melting of clathrates -- frozen methane hydrate deposits in the ocean floor. The methane emissions supercharged the greenhouse situation and led to a 5 degree C spike of warming.

That warming was hard on living things on land and sea, but it wasn't a series of blows, like what led to the KPB. Many animals were able to adapt or migrate and avoid the harshest conditions. It was a single blow with environmental consequences that lasted about 200,000 years but there wasn't a mass extinction event.

The best analog

Listed side-by-side, it's sobering to see how many of the same ecosystem effects of the KPB and PETM are now being played out in the oceans and on land in real time as a result of anthropogenic warming.

"The difference with today is that even though it's a very short pulse, the rate of change is very, very rapid," said Mateo. "It's happening so fast that the ecosystems are unable to catch up. There is no time for adaptation."

So while today's greenhouse warming is a single pulse, as in the PETM, it is happening orders of magnitude faster, which could be creating effects more like those of the KPB.

Neither of the past events is a perfect analog, but they are instructive. The PETM could be an analogy for our best case scenario, Mateo explained. It's something humanity could potentially survive. The KPB, on the other hand, would be our worst case scenario analogy. If we take that path it would qualify as the sixth mass extinction in the planet's history.

Credit: 
Geological Society of America

For baboons, a mother's history of hardship can have lasting effects on her kids too

image: Whether this infant baboon survives to adulthood depends, in part, on events that happened to his mother long before his birth. A study of wild baboons in Kenya finds that a mom's childhood trauma can be passed on to the next generation.

Image: 
Chelsea Weibel, University of Notre Dame

DURHAM, N.C. -- Numerous studies show that children who had a rough start in life are more likely to have health problems later on.

The enduring effects of early adversity aren't unique to humans. But for baboons, the impacts aren't just borne by one generation -- the next generation bears the brunt as well, said Susan Alberts, chair of evolutionary anthropology at Duke University.

The findings come from a study of 169 baboon mothers and nearly 700 of their offspring that were monitored almost daily between 1976 and 2017 in Amboseli National Park in Kenya.

In a paper published Sept. 24 in the journal eLife, Alberts, first author Matthew Zipple and colleagues report that a baboon mother's early trauma is linked to shorter lifespans for her kids, even if they grew up more carefree than she did.

To be sure, baboons don't risk growing up poor, or with an alcoholic parent, or in high-crime neighborhoods. But some have it harder than others.

The team looked at a variety of bad breaks a baboon might face in the first four years of life, before they started reproducing. They might be orphaned, or born in lean times when food and water are scarce. Some are raised by a mom with low social ranking. Others may have to compete for their mother's milk and attention with a younger sibling close in age or the larger group.

In a previous study published in 2016, the team found that baboons that experienced multiple such misfortunes during childhood die up to 10 years earlier than their more fortunate peers.

In the new study, the researchers were able to see the effects of early adversity years later in the next generation too, and even when those offspring had it easier than their mothers did.

Specifically, female baboons whose next-born sibling arrived before they were fully weaned, or who were orphaned before age four, went on to have offspring that were 39% to 48% less likely to make it to adulthood themselves -- often seven or more years after their moms' early hardships.

"It's a big difference," Alberts said.

Take Waka. By the time Waka was just 16 months old, her mother Willy was already juggling another baby. Then, just before Waka turned three, her mother died. Waka eventually had four children of her own, but none of them survived past their fifth birthday.

The study did not pinpoint why offspring of 'survivor' females had a higher risk for early death. It could be that when a baboon's relationship with her mother is cut short, she is less able to provide basic care when she becomes a mother herself, such as making quality milk, protecting her kids, or teaching them how to forage or make friends.

"Up until the age of four months, baboon kids rarely venture more than a meter from their mother," said Alberts, who has been studying the Amboseli baboons since 1984.

"In the first year of life, a baboon's mom is everything," said Zipple, a Ph.D. student in biology at Duke and the lead author on the paper.

The team's next step, Zipple says, is to look at how a mother's history of hardship affects her parenting. To find out, they've been observing mother-infant pairs for 45 minutes at a time, noting all the ways the mothers interact with their infants, from cuddling or suckling them to attending to their infants' cries.

For the Amboseli research team, using baboons to understand the origins of disease makes it possible to disentangle the intergenerational effects of early adversity from other factors that are often confounded in studies of human health, such as education, drug use, and access to health care.

The team says their baboon research is important because it helps them test ideas about how childhood wounds faced by one generation can take a toll on the next, and how close relationships with parents or other sources of support can help break the cycle.

By tracing adult health problems back to traumas faced in a parent's or grandparent's childhood, scientists say, we may better understand how to prevent some family disease trends from taking root in the first place.

Credit: 
Duke University

New mechanisms that regulate pluripotency in embryonic stem cells are discovered

image: The discovery paves the way to the development of drugs capable of making ESCs regress to the earliest stage of development.

Image: 
Luis Henrique Rimel - Hemocentro RP

Embryonic stem cells (ESCs) can give rise to many different types of tissues and organs. At the turn of the present century, these cells were believed to offer hope of treatment for several health problems, but as research advanced, scientists realized that understanding and controlling the behavior of ESCs would be a more daunting challenge than initially imagined.

Studies showed that any given population of ESCs could be very heterogeneous and that their potential pluripotency, or capacity to differentiate into other cell types, could vary both among cells from the same embryo and from one lineage to another. Later, it was discovered that the levels of certain microRNAs inside cells changed as differentiation progressed. MicroRNAs, or miRNAs, are small RNA molecules that do not encode proteins but perform a regulatory role in several intracellular processes.

Researchers at the Center for Cell-Based Therapy (CTC) in Ribeirão Preto, São Paulo State, have now investigated the functioning of 31 miRNAs observed in human ESCs and identified signaling pathways involved in both pluripotency and differentiation. The discovery opens up new perspectives for research in the area.

The study was supported by São Paulo Research Foundation - FAPESP, and the results are published in Stem Cell Research & Therapy. CTC is a Research, Innovation and Dissemination Center (RIDC) funded by FAPESP and hosted by the University of São Paulo (USP).

"Based on this information, we can think about developing drugs to facilitate the cultivation of ESCs in the laboratory and even make them regress to the earliest development stage, called naive. Their capacity to originate any type of tissue is greater at that stage," said Rodrigo Alexandre Panepucci (https://bv.fapesp.br/en/pesquisador/57383/rodrigo-alexandre-panepucci/), a researcher at the Ribeirão Preto Blood Center and principal investigator of the study.

According to Panepucci, the human ESCs used in scientific research are usually in the intermediate development stage of primed pluripotency, when they have not yet differentiated but are primed, i.e., a step closer to assuming a distinct cell identity. At this stage, they are less versatile than mouse ESCs, which are typically isolated when the cells are naive and hence widely used as a research model.

"Interest in working with the naive phenotype is strong because naive cells are able to originate even gametes [egg cells and sperm]. Primed cells aren't," Panepucci said.

Large-scale analysis

MiRNAs have complementary nucleotide sequences, so they can bind to messenger RNAs and break them down or prevent their translation into proteins. An increase in a cell's miRNA expression, therefore, means that a process is being inhibited. Identifying the process is no trivial task, however.

"A single miRNA may be able to bind to hundreds or thousands of messenger RNAs. This may affect several targets of a signaling pathway and have a wide-ranging biological effect," Panepucci explained.

Studying these molecules from a functional standpoint requires bioinformatics tools capable of processing massive quantities of data. The CTC group chose high-content screening (HCS), an automated fluorescence microscopy technique, and analyzed thousands of images to find out how the miRNAs regulated the ESC phenotypes.

The ESCs were cultured in 96-well plates, with a different synthetic miRNA in each well. After three to four days of culture, automated image acquisition and analysis were performed to evaluate the effects on pluripotency maintenance and differentiation.

"We used fluorescence microscopy to obtain thousands of images of the cells. Based on our analysis of this material, we established multiparameter phenotype profiles to determine the pluripotency stage of the cells and the effects of the miRNAs. From hundreds of morphological parameters observed in the images, we selected approximately ten that enabled us to classify the differentiation stage of the ESCs," Panepucci said.

The researchers also measured, in each well, the level of two proteins found to be pluripotency markers, OCT4 and cyclin B1. The more these molecules are expressed, the greater the cell's pluripotency.

The miRNAs that produced similar effects in ESCs were then grouped and hierarchized using a clustering technique that enabled the researchers to organize the large volume of data obtained from the analysis and identify the signaling pathway with which each group of miRNAs was involved.
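The grouping-and-hierarchizing step described above can be sketched with a toy agglomerative clustering routine. Everything below is illustrative: the effect profiles, the names other than miR-363-3p, the Euclidean metric, and the merge threshold are my assumptions, not the study's actual pipeline.

```python
# Minimal sketch of grouping regulators by similar effect profiles.
# Each miRNA is summarized by a hypothetical vector of phenotype readouts,
# e.g. (change in OCT4, change in cyclin B1, change in cell count).
import math

def dist(a, b):
    """Euclidean distance between two effect profiles."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def single_linkage(profiles, threshold):
    """Agglomerative single-linkage clustering: repeatedly merge the two
    closest clusters until the smallest gap exceeds the threshold."""
    clusters = [[name] for name in profiles]

    def cluster_dist(c1, c2):
        return min(dist(profiles[a], profiles[b]) for a in c1 for b in c2)

    while len(clusters) > 1:
        (i, j), d = min(
            (((i, j), cluster_dist(clusters[i], clusters[j]))
             for i in range(len(clusters))
             for j in range(i + 1, len(clusters))),
            key=lambda t: t[1])
        if d > threshold:
            break
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Hypothetical effect profiles (values are illustrative only).
profiles = {
    "miR-363-3p": (0.8, 0.7, 0.1),    # raises pluripotency markers
    "miR-A":      (0.7, 0.8, 0.0),    # similar profile -> same group
    "miR-B":      (-0.9, -0.8, 0.2),  # lowers markers -> pro-differentiation
}

groups = single_linkage(profiles, threshold=0.5)
print(groups)  # miR-363-3p and miR-A cluster together; miR-B stands alone
```

The real study clustered 31 miRNAs on many more image-derived parameters, but the principle is the same: miRNAs with similar phenotype profiles end up in the same group, which can then be linked to a shared signaling pathway.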

"We selected miR-363-3p for more detailed study because it clearly contributed to pluripotency maintenance," Panepucci said. "We showed that it inhibits differentiation by degrading the messenger RNA that encodes the protein NOTCH1."

The Notch signaling pathway is important for intercellular communication, involving gene regulation mechanisms that control multiple differentiation processes during embryonic and adult life. Mutations in genes encoding Notch pathway components underlie some diseases.

According to Panepucci, compounds that inhibit the NOTCH1-mediated signaling pathway could become tools to modulate ESC pluripotency and even make these cells regress to the naive stage in which high levels of OCT4 and other pluripotency factors are present.

"An understanding of these pluripotency regulation mechanisms can take research on ESCs to a new level, as well as research on iPSCs [induced pluripotent stem cells obtained from adult patient cells modified in the laboratory], on which the future of cellular therapy depends," Panepucci said.

He added that iPSCs offer the advantage of containing the same DNA as the patient to be treated. Moreover, because they are derived from adult cells rather than embryos, their use in medicine is not affected by ethical constraints. "All the same, ESCs are the best model for studying pluripotency," he said.

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo

Missing electrons reveal the true face of a new copper-based catalyst

ITHACA, N.Y. - A collaboration between researchers from Cornell, Harvard, Stanford and the SLAC National Accelerator Laboratory has resulted in a reactive copper-nitrene catalyst that pries apart carbon-hydrogen (C-H) bonds and transforms them into carbon-nitrogen (C-N) bonds, which are a crucial building block for chemical synthesis, especially in pharmaceutical manufacturing.

The team's paper, "Synthesis of a Copper-Supported Triplet Nitrene Complex Pertinent to Copper-Catalyzed Amination," was published Sept. 13 in Science.

"[Co-senior author] Ted Betley at Harvard and I are both interested in cases where you have light atoms like nitrogen or oxygen that typically are well defined in the oxidation state," said co-senior author Kyle Lancaster, associate professor of chemistry and chemical biology in the College of Arts and Sciences, who worked on the project with his Ph.D. student Ida DiMucci.

"We found that when you oxidize this one copper compound that has this nitrogen functionality to it, you pull the electrons out of the nitrogen. When you have light atoms like nitrogen or oxygen that are electron deficient, that tends to make them very reactive," Lancaster said. "And in this case, the nitrogen is so reactive it is able to break C-H bonds and start making C-N bonds."

While researchers have long suspected that this type of nitrogen that lacks two specific electrons - a nitrene - was tied to reactivity, the elusive species had yet to be directly spotted. Among the specialties of Lancaster's lab is employing X-ray spectroscopy to determine where electrons are missing in transition metal-containing compounds. Lancaster and DiMucci went to work inventorying the electrons on the copper and the nitrogen atoms and were able to clear away some of the atomic clutter and zero in on the nitrogen to locate two holes that indicated where the two electrons were being lost.

Lancaster and DiMucci performed some high-level computational analysis to confirm their findings.

"We were really stunned how strong the agreement was between the theory we used and our data," DiMucci said. "Our calculations told us that if we really had what we thought we did, then we'd be looking for two peaks separated by 0.6 electron volts, and that's exactly what we got."

The Harvard team constructed the ligand structure to secure the reactive nitrene and keep the catalyst together. Once a C-H bond is introduced, the nitrene is able to unshackle it. This streamlined conversion process could lead to cheaper, more efficient production of pharmaceuticals, detergents and dyes.

"At the end of the day, when we find a big, hairy organic molecule, we'd like to be able to make it en masse if it's a potential drug," Lancaster said. "And that means coming up with clever ways to install particular functionalities, like C-O bonds and C-N bonds. The more tools we have in our arsenal to make these changes - because it's very difficult to turn the C-H bond into a C-N bond - the better off we are."

Lancaster praised the work of his Harvard, Stanford and SLAC colleagues, and the potential applications of their collaborative work.

"We all have our specializations," he said, "and teaming up spectroscopy and electronic structure theory with synthesis is a powerful combo to answer these chemical questions - especially when you have physicists coming up with exquisite new instruments to make our job a lot easier."

Credit: 
Cornell University

Research could help flexible technology last longer, avoid critical failures

image: Guy German is an associate professor of biomedical engineering at Binghamton University, State University of New York.

Image: 
Binghamton University, State University of New York

BINGHAMTON, NY - Whether from regular use, overuse or abuse, every device is bound to develop cracks at some point. That's just the nature of things.

Cracks can be especially dangerous, though, when working with biomedical devices that can mean life or death to a patient.

A new study from a Binghamton University research team uses the topography of human skin as a model not for preventing cracks but for directing them in the best way possible to avoid critical components and make repairs easy.

The study, published Sept. 17 in the journal Scientific Reports, is led by Binghamton University Associate Professor of Biomedical Engineering Guy German and Ph.D. student Christopher Maiorana. For the study, Maiorana engineered a series of single-layer and dual-layer membranes from silicone-based polydimethylsiloxane (PDMS), an inert and nontoxic material used in biomedical research. Embedded into the layers are tiny channels meant to guide any cracks that form, which, when part of a biomedical device, would give more control over how the cracks propagate. Potential damage could be routed around critical areas of flexible electronics, for instance, increasing their functional lifespan.

"In this relatively new field of hyperelastic materials - materials that can really stretch - there's been a lot of work, but not in the area of fracture control," German said. "Fracture control has only been explored in more brittle materials."

What's particularly important, Maiorana and German said, is having PDMS as the basis for the flexible membrane, since it is known for its wide variety of uses. The study also integrates other common materials.

"We do it without using any exotic material," Maiorana said. "We're not inventing some new metal or ceramic. We're using rubber or modifying normal glass to do these things. We've taken this really basic idea and made it functional."

German's ongoing research on human skin made him realize that the outermost layer - known as the stratum corneum - exhibits a network of v-shaped topographical microchannels that appear capable of guiding fractures in the skin.

This study began with the idea of recreating this effect in nonbiological materials. Previous attempts to direct microcracks have utilized more solid means, such as copper films around the most sensitive parts of flexible electronics components.

"Even though this membrane looks and feels exactly like a normal, boring membrane," he said, "you stretch it and you can get cracks to deviate at 45-degree angles away from where it ordinarily would have cracked. I think it's pretty cool."

Because of the long fabrication period for the membranes, Maiorana often would spend a week to produce one and then tear it apart in a matter of seconds - only to start all over again with the next one. He credited the increasing precision of additive manufacturing and its ability to print ever-smaller features for making the production of the membranes possible.

"Chris was designing his own fabrication systems to make these substrates," German said, "because he had to 3-D print a mold and then use this clever system to control the depth of these canyons in the substrate. It's really technically challenging."

Maiorana added: "There is a certain level of art to it. You think there's an entire scientific process, and there is, but part of it is that you've done this process before and you know what it's supposed to look like."

This study, German said, furthers the quest of biomedical engineers to learn from what nature has already perfected.

"It doesn't matter how good an engineer you are - evolution thought of it first," he said. "Evolution always wins."

Credit: 
Binghamton University

Mice, like humans, fidget when deep in thought

image: Brain activity of mice during decision making tasks.

Image: 
Anne Churchland

Almost everyone fidgets, said Cold Spring Harbor Laboratory Associate Professor Anne Churchland. She referred to a collage of videos she compiled of different people rocking back and forth in their chairs, clicking a pen, shaking their legs.

"This is a collection of uninstructed movements," she explained. "These are all people who were thinking and talking, and these are probably familiar fidgets that you've seen people do."

It turns out that humans aren't the only animals that fidget. In a new study published in the journal Nature Neuroscience, Churchland, co-first authors Simon Musall and Matt Kaufman, and colleagues found that the neural activity of mice performing trained tasks indicated that they, too, fidget while making decisions.

The Churchland lab focuses on the neural circuits underlying decision making. The starting point for this paper was recording neural activity across the top part of a mouse's brain while the animal was engaged in decision-making tasks.

They measured the activity with wide-field imaging, which Churchland likens to an fMRI (used for mapping brain activity) for a mouse.

"You can see activity in a big part of the brain all at the same time," she said. "We usually just measure neurons in one small part of the brain at a time, and don't usually get a whole big-picture view like this."

For the science community researching cognition, the new results present a major cautionary tale about disentangling signals uniquely related to cognitive processes from signals related to background movements, Churchland said. Researchers will have to work to separate the two kinds of signal, and this study provides guidelines and computational code for correcting for movement-related activity.
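One common way to separate such signals, broadly in the spirit of what the study describes (though not its actual code or data), is a linear encoding model: fit the neural signal with both task and movement regressors, then measure each group's unique contribution as the drop in explained variance when it is left out. The sketch below uses simulated data with hypothetical regressor names.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500  # imaging frames

# Hypothetical regressors: one task variable, two "uninstructed movement" variables
task = rng.standard_normal(T)
movement = rng.standard_normal((T, 2))

# Simulated neural signal: mostly movement-driven, echoing what the study reports
neural = 0.3 * task + movement @ np.array([1.0, 0.8]) + 0.1 * rng.standard_normal(T)

def r_squared(X, y):
    """Fraction of variance in y explained by a least-squares fit on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

full = np.column_stack([task, movement])
r2_full = r_squared(full, neural)
r2_no_task = r_squared(movement, neural)        # model without the task regressor
r2_no_move = r_squared(task[:, None], neural)   # model without movement regressors

# Unique contribution of each regressor group = drop in R^2 when it is removed
print("unique task variance:    ", round(r2_full - r2_no_task, 3))
print("unique movement variance:", round(r2_full - r2_no_move, 3))
```

In this toy setup, removing the movement regressors costs far more explained variance than removing the task regressor, which is the kind of diagnostic that flags movement-dominated activity.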

The mice in Churchland's lab were trained to make specific movements, like grabbing little handles to initiate a trial, and lick one way or the other to report their decisions. The scientists expected to see neural activity related to movements that had to do with handle grabbing or with licking. What they saw was that one seemingly simple task set off a symphony of electrical activity across the mouse brain. It made the team realize that they weren't thinking about decision making in a very complete way.

"This really didn't look like what we expected it to. So, we dug a little deeper and tried to figure out what was driving this activity," Churchland said.

They found that most of the activity was driven by uninstructed movements that the animal was making like hind limb movements, pupil dilations, facial movements, nose movements, and whisker movements.

"We think of all these together as making up a movement landscape. We were always aware of a few of these movements, but the landscape is turning out to be much richer than we had realized," she said. "We thought the animals were 100 percent focused on our task, and the licking and the grabbing and the deciding. It turned out that they had their own set of priorities that involved a lot of movements of all different kinds."

Churchland and her lab are still working to understand what these priorities are and how they accompany decisions. To Churchland, an interesting, albeit speculative, idea to come out of the study is that movements are more tightly connected to cognition than people recognized before.

"One hypothesis that we find intriguing is that in some way, maybe the movements are part of the process of thinking and deciding," she said. "There might be this other aspect of fidgeting that people haven't really considered before, which is that it's part of how we call up that cognitive machinery. There are a lot of people who want to move when they're thinking. And for a lot of people, it seems that part of what it means for them to think is to be moving around."

Churchland thinks that to understand this link between cognition and movements better, it's worthwhile to look at animals that are genetic models of human irregularities where the amount of movement and the kind of movement are aberrant. An example is the genetic model of Attention Deficit Hyperactivity Disorder (ADHD). Under many circumstances, people diagnosed with ADHD move a lot more.

"Maybe they need to move more because activating their cognitive machinery requires more movements compared to the average person. Or, maybe ADHD is an example of a system that is a little bit miscalibrated," she said. "Many of us can rock our chair a little bit and finally concentrate deeply. Whereas people with other kinds of brains, if they want to concentrate deeply, simply rocking the chair doesn't work that well."

Credit: 
Cold Spring Harbor Laboratory

How and when was carbon distributed in the Earth?

image: A large metallic iron ball was formed during heating and was surrounded by quenched silicate melts

Image: 
Ehime University

It is generally accepted that planetary surfaces were covered with molten silicate, a "magma ocean", during the formation of terrestrial planets. In a deep magma ocean, iron would separate from silicate, sink, and eventually form a metallic core. In this stage, elemental partitioning between a metallic core and a magma ocean would have occurred and siderophile elements would be removed from the magma ocean. Such a chemically differentiated magma ocean formed the present-day Earth's mantle. Previous studies have experimentally investigated carbon partitioning between iron liquid and silicate melt under high-pressure conditions and found that a terrestrial magma ocean should be more depleted in carbon than the present-day mantle. Thus, how and when the carbon abundance in the Earth's mantle was established is still poorly understood.

All previous studies have used a graphite capsule, and therefore, the sample was saturated with carbon. However, the bulk Earth is unlikely to be saturated with carbon given the carbon abundance in chondrites which are believed to be the building blocks of the Earth. Moreover, it is known that the partition coefficient varies with the bulk concentration of the element of interest even if experimental conditions are identical. In order to investigate the effect of bulk carbon concentration on its liquid metal-silicate partitioning behavior, researchers at Ehime University, Kyoto University, and JAMSTEC have conducted new carbon partitioning experiments at carbon-undersaturated conditions using a boron nitride capsule.

The new experimental result shows that the partition coefficient of carbon between iron liquid and silicate melt at carbon-undersaturated conditions is several times lower than in previous studies that used a graphite capsule. This suggests that carbon in a magma ocean may not have been as depleted as previously thought, requiring re-investigation of the core-mantle partitioning of carbon.
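Why a lower partition coefficient matters can be illustrated with a single-stage mass balance. This is a back-of-the-envelope sketch, not the study's model: the bulk carbon content, core mass fraction, and the two D values below are placeholder numbers chosen only to show the direction of the effect.

```python
# Illustrative mass balance for carbon between a metallic core and a magma ocean.
# D is the metal/silicate partition coefficient: D = C_metal / C_silicate.
# With core mass fraction f, conservation of carbon gives
#   c_bulk = f * D * c_silicate + (1 - f) * c_silicate

def mantle_carbon(c_bulk_ppm, core_fraction, d_metal_silicate):
    """Carbon left in the silicate (magma ocean) after single-stage
    core-mantle equilibrium."""
    f = core_fraction
    return c_bulk_ppm / (f * d_metal_silicate + (1 - f))

c_bulk = 500.0   # ppm, hypothetical bulk-Earth carbon
f_core = 0.32    # Earth's core mass fraction

# Compare a high (carbon-saturated style) and a lower (undersaturated style) D;
# both values are illustrative, not measurements from the study.
for d in (5000.0, 1000.0):
    print(f"D = {d:6.0f} -> mantle carbon = {mantle_carbon(c_bulk, f_core, d):.3f} ppm")
```

A smaller D sends less carbon into the metal, so more remains in the silicate: exactly the direction of the revision the experiments suggest.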

Credit: 
Ehime University

Do children's brains really get thinner?

image: Higher myelination (darker stain) is found in the face-selective area of higher visual cortex, as compared to the place-selective area.

Image: 
MPI CBS

Using state-of-the-art brain imaging techniques, Vaidehi Natu at Stanford University in California, along with her colleagues from the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, Germany, provide striking results that suggest the brains of children likely do not thin as much as expected. Rather, it seems there is an increase of myelin, which is the fatty sheath insulating nerve fibers.

Stepping back a bit, studies have repeatedly shown that certain regions of the cerebral cortex (the outermost layer of the brain) get thinner as children develop. And studies have reported that children can apparently lose close to 1 mm of gray matter by adulthood, from a cortex that averages a mere 3 mm thick. Various hypotheses have been put forth to explain these huge losses. For example, it is established that gray matter cells and their connections can be naturally 'pruned', presumably to promote a more efficient brain. So perhaps extensive pruning in young brains could explain the thinning. Alternatively, we know that our brains expand during development. Perhaps the cortex gets stretched in the process? The new research certainly does not rule out these processes, and in fact finds evidence for the latter. However, the new work does suggest that a prominent change has gone undetected, due to limitations of prior measurements.

More specifically, the present study shows that when measured with quantitative MRI (or qMRI), it appears that young brains are actually becoming more myelinated. That's a good thing, but it may be skewing estimates of cortical thickness (the gray matter). Myelin is the 'white' in white matter: a fatty sheath that insulates many nerve fibers and allows faster neurotransmission. The problem is that measurements of gray matter thickness critically depend on detecting the border between white and gray matter. As Natu and Kirilina have found, this border can be obscured, and cortical thickness underestimated, if myelination increases during development. The researchers obtained these results by examining adults and children using state-of-the-art quantitative MRI techniques.

Evgeniya Kirilina, who is working in the department of Neurophysics at MPI CBS, offers a word of caution. 'The fact that the cortex thins during development is well established, even with histological methods. We are not claiming that thinning does not occur. But estimates may be off in some cases due to concurrent myelination.'

The team was actually looking at three specialized patches of brain in higher visual cortex. Despite their close proximity, each showed a unique developmental pattern, underlining the need for cautious interpretation. The face and word recognition areas showed the myelination effect described above, whereas the place recognition area showed apparent thinning but no indication of myelination. Instead, it seemed to structurally change, stretching over time. Highlighting the important link between structure and function, the differences in myelination of these functionally specialized regions were confirmed in post mortem brains of adults, using both ultra-high field MRI and histology.

The implications of these new findings are quite broad. Decades of work will need to be revisited and assessed for accuracy. For example, there is a rich literature suggesting that the thickness of the cortex changes when learning new skills. It will now need to be determined whether myelination also plays a role. Further, degradation of myelin can lead to debilitating diseases. This is exactly what occurs in Multiple Sclerosis. More accurate measurement techniques like qMRI promise to improve our detection, monitoring, and treatment of such conditions.

Credit: 
Max Planck Institute for Human Cognitive and Brain Sciences

More discussion needed about vulvovaginal health at well woman visits

CLEVELAND, Ohio (September 24, 2019)--Despite the wealth of information now available about menopause, women are still not comfortable proactively discussing vaginal issues related to menopause with their healthcare providers, who appear equally uncomfortable and unlikely to initiate the conversation. That's according to a new study that will be presented during The North American Menopause Society (NAMS) Annual Meeting in Chicago, September 25-28, 2019.

In this new study involving more than 1,500 postmenopausal women, 45% reported some type of postmenopausal vulvovaginal symptom, such as vaginal dryness, itching, soreness, or odor. Of these symptomatic women, only 39% discussed their symptoms at their well woman visits. When conversations about vulvovaginal health did take place, researchers discovered that it was the patient who more often initiated the discussion than the clinician (59% vs. 22%), with 16% reporting that both started the discussions.

Of the women who entered into a conversation with their healthcare providers, 83% were satisfied or very satisfied with the results of the discussions as they led to helpful recommendations. Of the women who didn't have a conversation, 18% wished they had.

"Nearly half of these postmenopausal women reported having a vulvovaginal problem, yet a minority discussed their symptoms at a well woman visit," says Dr. Amanda Clark, lead author of the study and an affiliate investigator with the Kaiser Permanente Center for Health Research in Portland, Oregon. "Since the discussions that did occur led to helpful interventions, this suggests a role for greater clinician-initiated screening for genitourinary syndrome of menopause."

"With so many options now available, such as over-the-counter lubricants and moisturizers as well as low dose vaginal hormonal products containing estrogen or DHEA, there is no reason for women to continue to suffer in silence," says Dr. Stephanie Faubion, NAMS medical director. "Hopefully studies like this one will open the door to better patient-provider communication at well woman visits."

Credit: 
The Menopause Society

Some high-cholesterol genes differ between countries

Some of the genes that predict the risk of high cholesterol don't apply to people from Uganda in the same way as they do to European populations, finds a new UCL-led study.

The new Nature Communications study adds to evidence that genetic research involved in drug development and risk prediction testing might not apply equally to non-European populations.

"Genome-wide association studies, facilitated by the mapping of the human genome, have transformed our understanding of how our genetics impact our traits, behaviours and disease risks. But the large majority of them have been conducted in people of European descent, so there's a growing concern that the findings might not uniformly apply to people of diverse backgrounds," said the study's lead author, Dr Karoline Kuchenbaecker (UCL Genetics Institute and UCL Psychiatry).

She and her colleagues investigated the known genetic variants that affect blood fat levels, a major cardiovascular risk factor, to test whether they applied to different groups in the UK, Greece, China, Japan and Uganda.

They found that the results were broadly consistent across European and Asian groups, with about three quarters of the genetic markers applying similarly across the different groups. However, only 10% of the genetic markers for triglycerides (the most common type of fat in the body) were implicated in the same cardiovascular risk factors among people from Uganda.
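To see why this replication gap matters in practice, consider how a genetic risk score is typically computed: a weighted sum of effect-allele counts, with weights estimated in a discovery population. The sketch below is purely illustrative, with random weights and genotypes and hypothetical variable names; only the ~10% replication fraction echoes the triglyceride result above.

```python
import numpy as np

rng = np.random.default_rng(1)
n_variants = 100

# Hypothetical per-variant effect sizes estimated in a European-ancestry GWAS
weights_eur = rng.normal(0, 0.1, n_variants)

# Genotype of one individual: count of the effect allele at each variant (0, 1, or 2)
genotype = rng.integers(0, 3, n_variants)

# A polygenic score is just the weighted allele count
score = float(genotype @ weights_eur)
print(f"score with all discovery weights: {score:.3f}")

# If only ~10% of variants have the same effect in another population,
# the bulk of the transferred weights carry no valid signal there:
replicates = rng.random(n_variants) < 0.10
weights_valid = np.where(replicates, weights_eur, 0.0)
score_valid = float(genotype @ weights_valid)
print(f"score from replicating variants only: {score_valid:.3f}")
```

The two scores can differ substantially, which is the core of the transferability concern: a score built from non-replicating weights is dominated by noise in the target population.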

The researchers point out that even if genetics are nearly universal, environments are more variable, and some genes may have different, undiscovered effects in different environments. Genes predicting high cholesterol may not be risky for people with diets and lifestyles typical of rural Uganda.

"Our findings should serve as a major warning of caution to the field of genetics research - you cannot blindly apply findings from ancestrally European study groups to everyone else," said Dr Kuchenbaecker.

"We need to ensure that diverse groups are represented in research before proceeding with developing new tests or treatments - otherwise, the consequence will be a very unfair NHS where some new drugs and genetic tests are only suitable for people of European descent."

Credit: 
University College London

UCI study reveals critical role of brain circuits in improving learning and memory

image: Xiangmin Xu, PhD, an anatomy and neurobiology professor in the UCI School of Medicine, led a team of scientists to discover how newly identified neural circuits in the brain's hippocampal formation play a critical role in object-location learning and memory.

Image: 
UCI School of Medicine

Irvine, Calif. - September 23, 2019 - A University of California, Irvine-led team of scientists has discovered how newly identified neural circuits in the brain's hippocampal formation play a critical role in object-location learning and memory.

The study, published today in Nature Neuroscience, was led by Xiangmin Xu, PhD, an anatomy and neurobiology professor in the UCI School of Medicine, and conducted in collaboration with Douglas A. Nitz, PhD, professor and chair of the Department of Cognitive Science at the University of California, San Diego; Qing Nie, PhD, Chancellor's Professor of mathematics and developmental and cell biology at UCI; and, Todd C. Holmes, professor and vice chair of UCI's Department of Physiology & Biophysics.

Loss of object location memory is one of the key impairments in Alzheimer's disease (AD), the most common form of dementia in the elderly. These new findings in hippocampal circuit mechanisms provide an intriguing new target to counteract AD-related memory impairments.

"Our study was made possible by new viral genetic based mapping approaches for examining connectivity between structures. These new mapping tools enabled us to identify novel circuits within and between the hippocampus and cortex," said Xu.

Xu and his colleagues used monosynaptic rabies retrograde tracing and herpes (H129)-based anterograde tracing to establish new cortico-hippocampal circuitry associated with subiculum (SUB) projections to hippocampal CA1. Xu and an international team of investigators were recently awarded an NIH BRAIN Initiative grant to develop new H129 viral tracers as a brain mapping tool for use by the entire neuroscience community.

The team revealed the hippocampal sub-circuit mechanism highly relevant to learning and memory disorders including Alzheimer's disease. These findings may be used to better treat Alzheimer's disease and other neurological disorders, delay their onset, and possibly prevent them from developing in the first place.

Credit: 
University of California - Irvine

NASA catches Tropical Storm Lorena's landfall approach

image: NASA-NOAA's Suomi NPP satellite passed over Tropical Storm Lorena as it was approaching landfall in northwestern Mexico on Sept. 21 at 4:42 p.m. EDT (2042 UTC).

Image: 
NASA Worldview, Earth Observing System Data and Information System (EOSDIS)

As Tropical Storm Lorena was nearing landfall in northwestern Mexico, NASA-NOAA's Suomi NPP satellite provided forecasters with an image of the storm. By Monday, Sept. 23, Lorena's remnants were affecting the southern U.S. and bringing heavy rainfall to Arizona.

Visible imagery from NASA satellites helps forecasters understand whether a storm is organizing or weakening. The Visible Infrared Imaging Radiometer Suite (VIIRS) instrument aboard Suomi NPP provided a visible image of Lorena on Sept. 21 at 4:42 p.m. EDT (2042 UTC).

The shape of a tropical cyclone provides forecasters with an idea of its organization and strength, and NASA-NOAA's Suomi NPP satellite provided a visible image of the storm to forecasters as its center was approaching landfall. The storm already appeared elongated from south to north after its northeastern side had begun moving over the high terrain of northwestern Mexico. Lorena tracked slowly toward the coast and made landfall about 12 hours later.

Lorena's Final Advisory

At 11 a.m. EDT on Sunday, Sept. 22, NOAA's National Hurricane Center issued the final advisory on the system. By that time, Post-Tropical Cyclone Lorena had crossed the coast of northwestern Mexico that morning. The center of the disturbance was estimated near latitude 28.8 degrees north and longitude 111.5 degrees west. The post-tropical cyclone was moving toward the north near 9 mph (15 kph). Maximum sustained winds associated with the system were near 30 mph (45 kph) with higher gusts.

After landfall, Lorena's remnant clouds and rain moved north into Arizona.

Lorena's Remnants in Arizona on Sept. 23

NOAA's Weather Prediction Center in College Park, Md., reported, "Moisture from the remnants of Lorena will contribute to heavy rain, strong to severe thunderstorms and possible flooding across the Southwest through Tuesday. There should be enough moisture in place to support a significant rainfall event with widespread 1 to 2 inch rainfall totals and much higher amounts locally, with the greatest amounts in central and southern Arizona. This degree of rainfall warrants flash flood concerns, and a Moderate Risk of excessive rainfall is in effect for that region. Some strong to severe thunderstorms will also be possible."

Hurricanes are among the most powerful weather events on Earth. NASA's expertise in space and scientific exploration contributes to essential services provided to the American people by other federal agencies, such as hurricane weather forecasting.

For updated forecasts, visit: http://www.nhc.noaa.gov

Credit: 
NASA/Goddard Space Flight Center

2019 Arctic sea ice minimum tied for second lowest on record

image: An opening in the sea ice cover north of Greenland is partially filled in by much smaller sea ice rubble and floes, as seen during an Operation IceBridge flight on Sept. 9, 2019.

Image: 
NASA/Linette Boisvert

The extent of Arctic sea ice at the end of this summer was effectively tied with 2007 and 2016 for the second lowest since modern record keeping began in the late 1970s. An analysis of satellite data by NASA and the National Snow and Ice Data Center (NSIDC) at the University of Colorado Boulder shows that the 2019 minimum extent, which was likely reached on Sept. 18, measured 1.60 million square miles (4.15 million square kilometers).

The Arctic sea ice cap is an expanse of frozen seawater floating on top of the Arctic Ocean and neighboring seas. Every year, it expands and thickens during the fall and winter and grows smaller and thinner during the spring and summer. But in recent decades, increasing temperatures have caused marked decreases in Arctic sea ice extent in all seasons, with particularly rapid reductions in the minimum end-of-summer ice extent.

Changes in Arctic sea ice cover have wide-ranging impacts. The sea ice affects local ecosystems, regional and global weather patterns, and the circulation of the oceans.

"This year's minimum sea ice extent shows that there is no sign that the sea ice cover is rebounding," said Claire Parkinson, a climate change senior scientist at NASA's Goddard Space Flight Center in Greenbelt, Maryland. "The long-term trend for Arctic sea ice extent has been definitively downward. But in recent years, the extent is low enough that weather conditions can either make that particular year's extent into a new record low or keep it within the group of the lowest."

The melt season started with a very low sea ice extent, followed by very rapid ice loss in July that slowed considerably after mid-August. Microwave instruments onboard the United States Department of Defense's meteorological satellites monitored the changes from space.

"This was an interesting melt season," said Walt Meier, a sea ice researcher at NSIDC. "At the beginning of August we were at record low ice levels for that time of the year, so a new minimum record low could have been in the offing.

"But unlike 2012, the year with the lowest ice extent on record, which experienced a powerful August cyclone that smashed the ice cover and accelerated its decline, the 2019 melt season didn't see any extreme weather events. Although it was a warm summer in the Arctic, with average temperatures 7 to 9 degrees Fahrenheit (4 to 5 degrees Celsius) above what is normal for the central Arctic, events such as this year's severe Arctic wildfire season or European heat wave ended up not having much impact on the sea ice melt.

"By the time the Siberian fires kicked into high gear in late July, the Sun was already getting low in the Arctic, so the effect of the soot from the fires darkening the sea ice surface wasn't that large," Meier said. "As for the European heat wave, it definitely affected land ice loss in Greenland and also caused a spike in melt along Greenland's east coast, but that's an area where sea ice is being transported down the coast and melting fairly quickly anyway."

Credit: 
NASA/Goddard Space Flight Center