Culture

Benefits of electrification don't accrue equally for women, finds survey of homes in India

image: Household ownership of male- vs. female-used appliances by number of years the household has received electricity.

Image: 
Nature Sustainability

Increasing access to clean and affordable energy and improving gender equality are two major sustainable development goals (SDGs) that are believed to be strongly linked. With electricity access, tasks related to cooking, water collection, and other housework, which in the developing world are typically undertaken by women, require less time and effort.

"The prevailing view with electricity access is that if households receive a grid connection, it should especially benefit women," said Daniel Armanios, assistant professor of Engineering and Public Policy (EPP) at Carnegie Mellon University.

A new study published in Nature Sustainability, however, shows that the linkages between these goals can be more complex than anticipated. "It's not enough to just look at access, because that does not adequately consider the local social context and household power dynamics," said Armanios, the study's corresponding author. "You have to look at whether the use (of that electricity) is equitable as well." The research team also includes fellow CMU EPP professor Paulina Jaramillo, as well as first author Meital Rosenberg and Michaël Aklin, a professor of political science, both of the University of Pittsburgh.

Using data collected from electrified areas of rural India, the team shows that as households gain access to basic levels of electricity, men in the households tend to dominate electricity use patterns, which could in turn suggest men benefit more than women from such access.

The researchers employed a two-part mixed-methods approach to understand how electrified households use energy. First, Rosenberg traveled to Gujarat, India, where she conducted detailed interviews with over 30 women in electrified households. These interviews revealed what appliances were in each household and, importantly, who typically used them.

The study categorized common appliances according to typical use patterns as more male-used, more female-used, or neutral. Households tended to have more male-used appliances than neutral ones, and more neutral ones than female-used appliances. The researchers attribute some of this disparity to the specialty nature of certain female-used appliances, such as sewing machines, mixers, and grinders. However, the gender gap in electricity use existed even for the least expensive appliances, such as fans and light bulbs. Even the poorest households in the survey had multiple bulbs and fans, but these were rarely found in kitchen spaces, despite interviewees saying that this location would make their household duties easier and free up time for other activities.

Through the interviews, the team learned that only about a quarter of the women felt that electricity had granted them added time to pursue activities that they wanted to do outside of housework. Many of the women interviewed reported explicitly that the appliances purchased in their house were used predominantly by their children and husband. For these electrified households in Gujarat, where resources are scarce, male use of electricity is prioritized.

"Other researchers have shown that electricity access can provide important benefits for poorer households and improve female well-being," said Jaramillo. "However, we suggest that dynamics within the households can affect the way household members use electricity and thereby maintain or exacerbate unequal gender relationships."

The results from the field interviews in Gujarat provided a rubric for the team to assess whether their findings generalized across a much wider swath of India. In a previous study, Aklin and colleagues surveyed thousands of households in six energy-poor Indian states. Respondents identified what appliances their household used upon being connected to the grid.

Combining Aklin's survey with Rosenberg's insights from Gujarat, the team found that the same patterns of gender inequality within households persisted in this larger dataset: households had more male-used appliances compared to more female-used appliances, even when controlling for household income. However, in female-led households, these patterns of electricity use did not hold: in some cases, female-led households were more likely to have light bulbs and fans in the kitchen, unlike male-led households. These results show that women choose to use electricity differently than is typical in male-led households, and that male-female power differences in this context influence electricity use patterns.
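
The kind of household-level tally described above can be sketched in a few lines. Note that the appliance categories and survey records below are invented stand-ins for illustration, not the study's actual coding scheme or data:

```python
from collections import Counter

# Hypothetical appliance-to-category coding, loosely following the
# article's examples; the study's real coding scheme is not reproduced here.
CATEGORY = {
    "television": "male", "radio": "male",
    "fan": "neutral", "light_bulb": "neutral",
    "sewing_machine": "female", "mixer": "female", "grinder": "female",
}

# Synthetic survey records: one dict per household.
households = [
    {"income": 1, "appliances": ["light_bulb", "fan", "television"]},
    {"income": 1, "appliances": ["light_bulb", "radio"]},
    {"income": 3, "appliances": ["light_bulb", "fan", "television", "mixer"]},
]

def gender_gap(hhs):
    """Return total (male, neutral, female) appliance counts across households."""
    totals = Counter()
    for hh in hhs:
        totals.update(CATEGORY[a] for a in hh["appliances"])
    return totals["male"], totals["neutral"], totals["female"]

male, neutral, female = gender_gap(households)
print(male, neutral, female)  # in this toy data: 3 5 1
```

In the actual study this kind of count would then be compared across income bands and across male- vs. female-led households, e.g. with a regression controlling for income.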

This gender electricity use gap persists for years, too; households continued to have more male-used appliances than female-used appliances a decade after first receiving electricity, even for households higher on the socioeconomic scale. "Access to electricity is a necessary precondition to achieving many development goals," said Jaramillo, "but it is not a sufficient one to help developing countries overcome social norms that can drive who benefits from development." Social contexts ultimately shape how sustainable development interventions unfold.

"India has the largest unelectrified population of any country," said Armanios, "and so the lessons we learn about electricity access have a lot to do with what happens there." Beyond understanding the linkage between energy access and gender equality in India, the study also provides a useful framework for considering sustainable development interventions and research going forward. "When people study sustainable development goals, they tend to look at them in isolation," he said. "Our study advocates for more analysis as to their interactions and develops a framework with which to do that."

Credit: 
College of Engineering, Carnegie Mellon University

3D atlas of the bone marrow -- at single-cell resolution

Stem cells located in the bone marrow generate and control the production of blood and immune cells. Researchers from EMBL, DKFZ and HI-STEM have now developed new methods to reveal the three-dimensional organization of the bone marrow at the single-cell level. Using this approach, the teams have identified previously unknown cell types that create specific local environments required for blood generation from stem cells. The study, published in Nature Cell Biology, reveals an unexpected complexity of the bone marrow and its microdomains at an unprecedented resolution and provides a novel scientific basis to study blood diseases such as leukemias.

In the published study, researchers from the European Molecular Biology Laboratory (EMBL), the German Cancer Research Center (DKFZ) and the Heidelberg Institute for Stem Cell Technology and Experimental Medicine (HI-STEM gGmbH) present new methods permitting the characterisation of complex organs. The team focused their research on the murine bone marrow, as it harbours blood stem cells that are responsible for life-long blood production. Because of its ability to influence stem cells and sustain blood production, there is growing interest in exploiting the bone marrow environment, also called the niche, as a target for novel leukaemia treatments. "So far, very little was known about how different cells are organised within the bone marrow and how they interact to maintain blood stem cells," explains Chiara Baccin, post-doc in the Steinmetz Group at EMBL. "Our approach unveils the cellular composition, the three-dimensional organisation and the intercellular communication in the bone marrow, a tissue that has thus far been difficult to study using conventional methods," further explains Jude Al-Sabah, PhD student in the Haas Group at HI-STEM and DKFZ.

In order to understand which cells can be found in the bone marrow, where they are localised and how they might impact on stem cells, the researchers combined single-cell and spatial transcriptomics with novel computational methods. By analysing the RNA content of individual bone marrow cells, the team identified 32 different cell types, including extremely rare and previously unknown cell types. "We believe that these rare 'niche cells' establish unique environments in the bone marrow that are required for stem cell function and production of new blood and immune cells," explains Simon Haas, group leader at the DKFZ and HI-STEM, and one of the initiators of the study.

Using novel computational methods, the researchers were not only able to determine the organisation of the different cell types in the bone marrow in 3D, but could also predict their cellular interactions and communication. "It's the first evidence that spatial interactions in a tissue can be deduced computationally on the basis of genomic data," explains Lars Velten, staff scientist in the Steinmetz Group.

"Our dataset is publicly accessible to any laboratory in the world and it could be instrumental in refining in vivo studies," says Lars Steinmetz, group leader and director of the Life Science Alliance at EMBL Heidelberg. The data, which is now already used by different teams all over the world, is accessible via a user-friendly web app.

The developed methods can in principle be used to analyse the 3D organisation of any organ at the single-cell level. "Our approach is widely applicable and could also be used to study the complex pathology of human diseases such as anemia or leukemia," highlights Andreas Trumpp, managing director of HI-STEM and division head at DKFZ.

Credit: 
German Cancer Research Center (Deutsches Krebsforschungszentrum, DKFZ)

New nano-barrier for composites could strengthen spacecraft payloads

The University of Surrey has developed a robust multi-layered nano-barrier for ultra-lightweight and stable carbon fibre reinforced polymers (CFRPs) that could be used to build high precision instrument structures for future space missions.

CFRP is used in current space missions, but its applications are limited because the material absorbs moisture. This moisture is often released as gas during a mission, causing the material to expand and affect the stability and integrity of the structure. Engineers try to minimise this problem with CFRP by performing long, expensive procedures such as drying, recalibrations and bake-out, all of which may not completely resolve the issue.

In a paper published by the journal Nature Materials, scientists and engineers from Surrey and Airbus Defence and Space detail how they have developed a multi-layered nano-barrier that bonds with the CFRP and eliminates the need for multiple bake-out stages and the controlled storage required in its unprotected state.

Surrey engineers have shown that their thin nano-barrier - measuring under a micrometer in thickness, compared to the tens of micrometers of current space mission coatings - is less susceptible to stress and contamination at the surface, keeping its integrity even after multiple thermal cycles.

Professor Ravi Silva, Director of the Advanced Technology Institute at the University of Surrey, said: "We are confident that the reinforced composite we have reported is a significant improvement over similar methods and materials already on the market. These encouraging results suggest that our barrier could eliminate the considerable costs and dangers associated with using carbon fibre reinforced polymers in space missions."

Christian Wilhelmi, Head of Mechanical Subsystems and Research and Technology Friedrichshafen at Airbus Defence and Space, said: "We have been using carbon-fibre composites on our spacecraft and instrument structures for many years, but the newly developed nano-barrier together with our ultra-high-modulus CFRP manufacturing capability will enable us to create the next generation of non-outgassing CFRP materials with much more dimensional stability for optics and payload support. Reaching this milestone gives us the confidence to look at instrument-scale manufacturing to fully prove the technology."

Professor David Sampson, Vice-Provost Research and Innovation at the University of Surrey, said: "This research project continues the University of Surrey's long and close partnership with Airbus. Advanced materials for spacecraft is a further excellent example of how Surrey supports the Space Sector. We have been doing so for decades, and we are fully committed to strengthening our support for the sector going forwards. I look forward to more brilliant advances from the Surrey-Airbus relationship in years to come."

Credit: 
University of Surrey

Electronics at the speed of light

image: This is an illustration of how electrons can be imagined to move between two arms of a metallic nanoantenna, driven by a single-cycle light wave.

Image: 
University of Konstanz

Contemporary electronic components, which are traditionally based on silicon semiconductor technology, can be switched on or off within picoseconds (i.e. 10⁻¹² seconds). Standard mobile phones and computers work at maximum frequencies of several gigahertz (1 GHz = 10⁹ Hz), while individual transistors can approach one terahertz (1 THz = 10¹² Hz). Further increasing the speed at which electronic switching devices can be opened or closed using standard technology has proven a challenge. A recent series of experiments - conducted at the University of Konstanz and reported in a recent publication in Nature Physics - demonstrates that electrons can be induced to move on sub-femtosecond timescales, i.e. faster than 10⁻¹⁵ seconds, by manipulating them with tailored light waves.

"This may well be the distant future of electronics", says Alfred Leitenstorfer, Professor of Ultrafast Phenomena and Photonics at the University of Konstanz (Germany) and co-author of the study. "Our experiments with single-cycle light pulses have taken us well into the attosecond range of electron transport". Leitenstorfer and his team from the Department of Physics and the Center for Applied Photonics (CAP) at the University of Konstanz believe that the future of electronics lies in integrated plasmonic and optoelectronic devices that operate in the single-electron regime at optical - rather than microwave - frequencies.

Credit: 
University of Konstanz

Plant-rich diet protects mice against foodborne infection, UTSW researchers find

image: This is Vanessa Sperandio.

Image: 
UTSW

DALLAS - Dec. 23, 2019 - Mice fed a plant-rich diet are less susceptible to gastrointestinal (GI) infection from a pathogen such as the one currently under investigation for a widespread E. coli outbreak tied to romaine lettuce, UT Southwestern researchers report. A strain of E. coli known as EHEC, which causes debilitating and potentially deadly inflammation in the colon with symptoms such as bloody diarrhea and vomiting, is implicated in several foodborne outbreaks worldwide each year.

"There has been a lot of hearsay about whether a plant-based diet is better for intestinal health than a typical Western diet, which is higher in oils and protein but relatively low in fruits and vegetables," says Vanessa Sperandio, Ph.D., professor of microbiology and biochemistry at UT Southwestern. "So we decided to test it."

Her study on a mouse model of EHEC is published this week in Nature Microbiology.

"Plant-rich diets are high in pectin, a gel-like substance found in many fruits and vegetables. Pectin is digested by the gut microbiota into galacturonic acid, which we find can inhibit the virulence of EHEC," she adds.

"This is relevant to public health because EHEC outbreaks lead to hemorrhagic colitis, which is debilitating and sometimes causes death, particularly in the very young and the elderly," she says.

Intestinal pathogens like EHEC sense the complex chemistry inside the GI tract to compete with the gut's resident microbiota to establish a foothold, Sperandio says. Over centuries, the pathogens have developed different strategies to compete against the so-called good, or commensal, microbes that normally line the gut.

Those commensals include harmless strains of E. coli living in the colons of humans and other mammals, where they help the host's normal digestion process, she adds. The word commensal means "eating at the same table" and that is what the symbiotic bacteria that make up the gut's microbiota do.

The commensals that line the gut present a significant barrier to intestinal pathogens, Sperandio explains. EHEC and similar gram-negative bugs overcome that barrier by deploying a secretion system called T3SS.

T3SSs act like molecular syringes to inject a mix of virulence proteins into the cells lining the host's colon, setting off inflammation and symptoms of infection. Because mice are unaffected by EHEC, researchers use a similar pathogen, Citrobacter rodentium, in mouse studies, Sperandio explains.

"Our study finds first that the good E. coli and the pathogenic ones like EHEC use different sugars as nutrients," she says, adding that the two types of E. coli may have evolved to avoid competing for the same energy sources. "Second, we find that dietary pectin protects against the pathway the pathogenic EHEC uses to become more virulent."

Another type of commensal gut bacteria breaks down dietary pectin from fruit and vegetables, creating galacturonic acid, a sugar acid that the EHEC and C. rodentium use in two ways. Initially, the pathogen uses that sugar acid as an energy source to expand in the gut, Sperandio says.

"Once the sugar acid becomes depleted, the pathogen changes its survival strategy, almost like flipping a switch," she says. Instead of using the galacturonic acid for nourishment, the infectious bacteria employ it in a signaling pathway that increases the virulence of EHEC and similar bacteria via the syringe-like T3SS.

In the study, mice fed pectin for about a week withstood infection. Comparing the colons of six mice fed a chow diet supplemented with 5 percent pectin from citrus peel to those of four mice on a typical diet, the researchers found a much lower rate of infection in the pectin-eating mice, Sperandio says.

The amount of bacteria in the mouse gut was measured by daily stool checks and by analysis of the amount of bacteria in a pouch at the juncture of the small and large intestines, called the cecum, at the experiment's end.

The researchers found that mice on the pectin-enriched chow had about 10,000 bacteria in the cecum compared to 1 million bacteria in mice on the typical diet. The pectin group also had fewer symptoms, she says, adding that a pectin level of 5 percent appears to prevent the pathogen from activating its virulence repertoire.

She stresses that the research is one step in a journey to define the molecular mechanisms that govern how the commensal species in the gut impact the virulence of intestinal pathogens.

"This is not translatable to humans yet. We hope a better understanding of how intestinal disease develops will lead to strategies to reduce the incidence or, at least, the symptoms caused by these gram-negative pathogens, possibly through new vaccines or drugs," she says.

Credit: 
UT Southwestern Medical Center

The birds and the bees and the bearded dragons

image: This is the acrodont Pogona, commonly known as the bearded dragon.

Image: 
Arthur Georges, University of Canberra

Sex is an ancient and widespread phenomenon, with over 99% of eukaryotes (cells with nuclei) partaking in some form of sexual reproduction, at least occasionally. Given the relative ubiquity and presumed importance of sex, it is perhaps surprising that the mechanisms that determine an individual's sex vary so spectacularly across organisms. Mechanisms for sex determination can depend on environmental signals, such as temperature, or can be genetically based, with one sex carrying different alleles, genes, or chromosomes--or even different numbers of each of these--from the other. The most well-studied system for sex determination is the XY system, which can be found in most mammals. In this system, females have two of the same type of sex chromosomes (XX) and males have two different types of sex chromosomes (XY).

The existence of multiple sex-determination mechanisms across the tree of life implies that these systems experience some degree of turnover, in which one system replaces another. Unfortunately, unraveling the evolutionary history of various sex-determination systems can be difficult. In XY systems, the age of the X and Y chromosomes can be estimated by comparing rates of substitution between the two sex chromosomes, since they were once homologs. However, in lineages that have experienced turnover of the sex-determination system, this approach provides no information on whether or how long an XY system may have persisted before replacement with the new system. Luckily, in a new article published in Genome Biology and Evolution, titled "Deciphering ancestral sex chromosome turnovers based on analysis of male mutation bias" (Acosta et al. 2019), Professor Diego Cortez and colleagues at the National Autonomous University of Mexico describe a new method that addresses this limitation and provides new insights into the lifespan and turnover of sex-determination systems.

According to Cortez, development of the new method arose out of necessity: "We were working on the sex chromosomes of the green anole. Some of the analyses suggested that the sex system could be >160 million years old, that is, it could have originated in the ancestor of pleurodonts [a group including iguanas, anoles, and spiny lizards] and acrodonts [including bearded dragons, and chameleons]." However, because acrodonts do not currently have an XY sex-determination system, Cortez realized that "a method that could corroborate such an evolutionary scenario did not exist." So, he and his colleagues developed one.

The new analytical method is based on a phenomenon known as male mutation bias. In many vertebrates, male gametes undergo more replication cycles than female gametes, leading to a higher mutation rate on the male-specific (in this case, Y) chromosome and a lower mutation rate on the sex chromosome found more often in females (in this case, X). Male mutation bias has been observed in most mammals, as well as birds, snakes, and fish. Cortez and his colleagues hypothesized that this signature of different mutation rates on ancestral sex chromosomes may persist long after turnover of the sex-determination system and loss of the sex chromosomes themselves.

To test this, they identified sets of genes that are present on the X chromosome in species with XY sex determination and on autosomes (non-sex chromosomes) in species with alternate sex-determination systems. They then compared the substitution rates at neutrally evolving sites in these genes across lineages--autosomal genes that were once on an X chromosome should have lower rates of evolution than other autosomal genes. In a proof-of-concept experiment, they used this method to analyze substitution rates in placental mammals (which have XY sex determination) and a monotreme, the platypus (which has an alternate system including five X and five Y chromosomes). They found that substitution rates were significantly lower in the X-linked genes from placental mammals than in the autosomal genes from platypus and the non-mammalian vertebrate outgroup species. Based on their simulations, this indicated that the platypus lineage never shared the placental XY system, a finding that is consistent with studies based on other methods.
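
The logic of that rate comparison can be sketched in a few lines. This is only a conceptual illustration with invented rates, not the authors' pipeline, which estimates substitution rates from aligned genomic sequences:

```python
# Hypothetical neutral substitution rates (substitutions per site), one
# value per gene; all numbers are invented for illustration.
from statistics import mean

x_linked_rates  = [0.10, 0.11, 0.09, 0.10, 0.12]  # X-linked in the XY ingroup
candidate_rates = [0.12, 0.13, 0.11, 0.13, 0.12]  # autosomal orthologs in the lineage under test
autosomal_rates = [0.14, 0.15, 0.13, 0.16, 0.14]  # ordinary autosomal background

def mean_rate(rates):
    """Mean neutral substitution rate for a gene set."""
    return mean(rates)

# Because the X spends more generations in females, which contribute fewer
# replication-driven mutations, once-X-linked genes accumulate substitutions
# more slowly. An intermediate mean for the candidate genes therefore
# suggests they spent part of the lineage's history on an X chromosome
# before a turnover event.
assert mean_rate(x_linked_rates) < mean_rate(candidate_rates) < mean_rate(autosomal_rates)
```

In practice the comparison would be made with a statistical test across many genes rather than a bare inequality of means.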

The researchers then applied their method to the analysis of X-linked genes in three pleurodont species (XY sex determination) and their autosomal orthologs in three acrodont species (which have either a variety of sex chromosomes or temperature-dependent sex determination). As with the mammalian analysis, the results showed that the X-linked pleurodont genes had a significantly lower mutation rate than the autosomal genes from the outgroups (in this case, five snake species). Intriguingly, however, the acrodont sequences exhibited substitution rates that were intermediate between those of the X-linked pleurodont genes and the autosomal outgroup genes. This suggested that the acrodonts shared the pleurodont XY system for several million years before it was subsequently lost, as recently as 20-60 million years ago in some lineages.

The authors are quick to point out that, while their conclusions are robust, their time estimates "are not super precise" due to the lack of substantial genomic sequences from reptiles. They note that "more sequence data from more reptile species will increase the resolution and validate/correct the observations."

Despite this constraint, it is clear that this method offers considerable promise for future analyses of sex-determination systems. According to Cortez, their novel method "could be very useful to investigate other lineages where sex chromosome transitions could have happened." Specifically, the researchers believe that their method could shed light on the rate at which sex chromosomes appear and disappear. They hope that other researchers will test the method in different species and different sex chromosome systems, as it could be used in any system with male mutation bias. Data from other species could ultimately lead to answers about how long sex chromosomes last and whether sex chromosomes have an "expiration date", thus providing new insights into the evolutionary processes that drive sex-determination system turnover.

Credit: 
SMBE Journals (Molecular Biology and Evolution and Genome Biology and Evolution)

Massive gas disk raises questions about planet formation theory

image: The distribution of dust is shown in red; the distribution of carbon monoxide is shown in green; and the distribution of carbon atoms is shown in blue.

Image: 
ALMA (ESO/NAOJ/NRAO), Higuchi et al.

Astronomers using the Atacama Large Millimeter/submillimeter Array (ALMA) found a young star surrounded by an astonishing mass of gas. The star, called 49 Ceti, is 40 million years old, and conventional theories of planet formation predict that the gas should have disappeared by that age. The enigmatically large amount of gas calls for a reconsideration of our current understanding of planet formation.

Planets are formed in gaseous dusty disks called protoplanetary disks around young stars. Dust particles aggregate to form Earth-like planets or to become the cores of more massive planets, which collect large amounts of gas from the disk to form Jupiter-like gaseous giant planets. According to current theories, as time goes by the gas in the disk is either incorporated into planets or blown away by radiation pressure from the central star. In the end, the star is surrounded by planets and a disk of dusty debris. This dusty disk, called a debris disk, implies that the planet formation process is almost finished.

Recent advances in radio telescopes have yielded a surprise in this field. Astronomers have found that several debris disks still possess some amount of gas. If the gas remains in the debris disks for a long time, planetary seeds may have enough time and material to evolve into giant planets like Jupiter. Therefore, the gas in a debris disk affects the composition of the resultant planetary system.

"We found atomic carbon gas in the debris disk around 49 Ceti by using more than 100 hours of observations on the ASTE telescope," says Aya Higuchi, an astronomer at the National Astronomical Observatory of Japan (NAOJ). ASTE is a 10-m diameter radio telescope in Chile operated by NAOJ. "As a natural extension, we used ALMA to obtain a more detailed view, and that gave us the second surprise. The carbon gas around 49 Ceti turned out to be 10 times more abundant than our previous estimation."

Thanks to ALMA's high resolution, the team revealed the spatial distribution of carbon atoms in a debris disk for the first time. Carbon atoms are more widely distributed than carbon monoxide, the second most abundant molecule around young stars after molecular hydrogen. The amount of carbon atoms is so large that the team even detected faint radio waves from a rarer form of carbon, ¹³C. This is the first detection of ¹³C emission at 492 GHz in any astronomical object; it is usually hidden behind the emission of normal ¹²C.

"The amount of ¹³C is only 1% of ¹²C, so the detection of ¹³C in the debris disk was totally unexpected," says Higuchi. "It is clear evidence that 49 Ceti has a surprisingly large amount of gas."

What is the origin of the gas? Researchers have suggested two possibilities. One is that it is remnant gas that survived the dissipation process in the final phase of planet formation. The amount of gas around 49 Ceti is, however, comparable to those around much younger stars in the active planet formation phase. There are no theoretical models to explain how so much gas could have persisted for so long. The other possibility is that the gas was released by the collisions of small bodies like comets. But the number of collisions needed to explain the large amount of gas around 49 Ceti is too large to be accommodated in current theories. The present ALMA results prompt a reconsideration of the planet formation models.

Credit: 
National Institutes of Natural Sciences

How to tell if a brain is awake

Remarkably, scientists are still debating just how to reliably determine whether someone is conscious. This question is of great practical importance when making medical decisions about anesthesia or treating patients in a vegetative state or coma.

Currently, researchers rely on various measurements from an electroencephalogram, or EEG, to assess level of consciousness in the brain. A Michigan Medicine team was able to demonstrate, using rats, that the EEG doesn't always track with being awake.

"EEG doesn't necessarily correlate with behavior," says Dinesh Pal, Ph.D., assistant professor of anesthesiology at the U-M Medical School. "We are raising more questions and asking that people be more cautious when interpreting EEG data."

Under anesthesia, an EEG will display a sort of signature of unconsciousness: reduced brain connectivity; increased slow waves, which are also associated with deep sleep, vegetative state and coma; and less complexity or less change in brain activity over time.
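
The "complexity" element of that signature is commonly quantified with Lempel-Ziv-style compressibility measures. As a generic illustration (not the study's code), the sketch below counts distinct phrases in binarized signals: an irregular, awake-like trace yields far more phrases than a flat, suppressed-EEG-like one.

```python
import random

def lz_phrase_count(bits):
    """Count distinct phrases in an incremental (LZ78-style) parse of a
    binary string: fewer phrases means a more compressible, less complex
    signal. Any trailing partial phrase is ignored."""
    phrases, phrase, count = set(), "", 0
    for ch in bits:
        phrase += ch
        if phrase not in phrases:
            phrases.add(phrase)
            count += 1
            phrase = ""
    return count

random.seed(0)
# Toy signals, binarized to 0/1 samples (e.g. below/above median amplitude):
awake_like = "".join(random.choice("01") for _ in range(2000))  # irregular activity
suppressed = "0" * 2000                                         # flat, suppressed trace

print(lz_phrase_count(awake_like) > lz_phrase_count(suppressed))  # True
```

Real EEG analyses normalize such counts against shuffled surrogates and combine them with connectivity and slow-wave measures, but the intuition is the same: less variable activity compresses better.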

Building on data from a 2018 study, Pal and his team wanted to see what happened to these measures when a brain was awakened under anesthesia. To do so, they targeted an area of the brain called the medial prefrontal cortex, which has been shown to play a role in attention, self-processing and coordinating consciousness.

By delivering a drug that mimics the activity of the neurotransmitter acetylcholine to that part of the brain, the team was able to rouse some of the rats so that they were up and moving around despite the fact that they were receiving continuous anesthesia. Using the same drug in the back of the brain did not awaken the rats. So, both groups of rats had anesthesia in the brain, but only one group "woke up."

Then, "we took the EEG data and looked at those factors that have been considered correlates of wakefulness. We figured if the animals were waking up, even while still exposed to anesthesia, then these factors should also come back up. However, despite wakeful behavior, the EEGs were the same in the moving rats and the non-moving anesthetized rats," says Pal.

What does this mean for the EEG's ability to reflect consciousness?

"The study does support the possibility that certain EEG features might not always accurately capture the level of consciousness in surgical patients," says senior author George A. Mashour, M.D., Ph.D., chair of the U-M Department of Anesthesiology.

However, "EEG likely does have value in helping us understand if patients are unconscious. For example, a suppressed EEG would suggest a very high probability of unconsciousness during general anesthesia. However, using high anesthetic doses to suppress the EEG might have other consequences, like low blood pressure, that we want to avoid. So, we will have to continue to be judicious in assessing the many indices available, including pharmacologic dosing guidelines, brain activity, and cardiovascular activity."

Pal notes that there is physiological precedent for an EEG mismatching behavior; for example, the EEG of someone in REM sleep looks almost identical to that of an awake person. "No monitor is perfect, but the current monitors we use for the brain are good and do their job most of the time. However, our data suggest there are exceptions."

Their study raises intriguing questions about how consciousness is reflected in the brain, says Pal. "These measures do have value and we have to do more studies. Maybe they are associated with awareness and what we call the content of consciousness. With rats, we don't know; we can't ask them."

Credit: 
Michigan Medicine - University of Michigan

A new method for boosting the learning of mathematics

How can mathematics learning in primary school be facilitated? A recent study by the University of Geneva (UNIGE), Switzerland, showed that our everyday knowledge strongly influences our ability to solve problems, sometimes leading us to make errors. This is why UNIGE, in collaboration with four research teams in France, has developed an intervention to promote the learning of maths in school. Named ACE-ArithmEcole, the programme is designed to help schoolchildren move beyond their intuitions and informal knowledge and rely instead on arithmetic principles. The results are striking: more than half (50.5%) of the students who took part in the intervention were able to solve difficult problems, compared with only 29.8% of pupils who followed the standard course of study. The study appears in the journal ZDM Mathematics Education.

From the age of 6 or 7, schoolchildren are confronted with mathematical problems involving addition and subtraction. Instinctively, they solve them by mentally simulating the situations the problems describe. But as soon as a problem becomes complex, this imagery-based representation becomes impossible or leads the student into error. "We reflected on a method that would enable them to detach themselves from these initial representations and would foster the use of abstract arithmetic principles," explains Katarina Gvozdic, a researcher at the Faculty of Psychology and Education (FPSE) at UNIGE. This approach, based on semantic re-encoding, encourages students to draw on abstract arithmetic knowledge from an early age. It was put into practice by teachers in a primary-school arithmetic course called ACE-ArithmEcole, which replaced the standard arithmetic curriculum.

Helping intuitive mental representations give way to mathematical ones

At the end of the school year, the UNIGE team evaluated ten classes of children aged 6 to 7 in France (second grade of primary school). In five classes, known as the control classes, the teachers had taught maths in a conventional way. In the other five, they had implemented the ACE-ArithmEcole intervention, which encouraged students to favour abstraction. "To get the students to practice semantic re-encoding, we provided them with different tools such as line diagrams and box diagrams," says Emmanuel Sander, professor at the Department of Education of the FPSE at UNIGE. The idea is that when they read a problem such as "Luke has 22 marbles, he loses 18. How many marbles does he have left?", the pupils should detach themselves from the idea that subtraction always means finding what remains after a loss, and instead see it as the calculation of a difference, or a distance to be measured. It is all about showing students how to re-encode the situation.
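The two strategies in the marbles example can be made concrete in a small sketch (my illustration, not material from the study): simulating the loss step by step versus re-encoding the problem as the distance between 18 and 22.

```python
def count_down(start, loss):
    """Intuitive strategy: simulate the loss, removing the lost
    marbles one at a time -- 18 laborious steps for 22 - 18."""
    remaining, steps = start, 0
    for _ in range(loss):
        remaining -= 1
        steps += 1
    return remaining, steps

def count_up(part, whole):
    """Re-encoded strategy: treat 22 - 18 as a distance and count
    up from 18 to 22 -- the answer equals the number of steps."""
    distance = 0
    while part + distance < whole:
        distance += 1
    return distance, distance

print(count_down(22, 18))   # (4, 18): right answer, costly simulation
print(count_up(18, 22))     # (4, 4): same answer in four steps
```

Both strategies give 4, but the distance encoding takes 4 steps instead of 18, which is why re-encoding pays off precisely on the problems that are hard to simulate mentally.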

After a year of teaching based on this practice, the UNIGE researchers evaluated their intervention by asking the pupils to solve problems that were divided into three main categories: combine ("I have 7 blue marbles and 4 red marbles, how many do I have in all?"), comparison ("I have a bouquet with 7 roses and 11 daisies, how many more daisies do I have than roses?") and change problems ("I had 4 euros and I earned some more. Now I have 11. How much did I earn?"). In each of these categories, there were some problems that were easy to represent mentally and to solve using informal strategies, and others that were difficult to simulate mentally and for which it was necessary to have recourse to arithmetic principles.

Undeniable results

At the conclusion of the tests, the researchers observed clear differences. Amongst students who had learned to solve mathematical problems with the ACE-ArithmEcole method, 63.4% gave correct answers to the problems that were easy to simulate mentally, and 50.5% found the answers to the more complex problems. "In contrast, only 42.2% of the pupils in the standard curriculum succeeded in solving simple problems, and only 29.8% gave the right answer to the complex problems," says Katarina Gvozdic. "Yet their level measured on other aspects of maths was exactly the same," adds Emmanuel Sander.

This discrepancy can be explained by the ACE-ArithmEcole students' frequent recourse to mathematical principles rather than mental simulations. "Thanks to the representational tools offered to them and the activities they used in class, the students learned to detach themselves from informal mental simulations and avoid the traps these lead to," comments Katarina Gvozdic enthusiastically.

The results are promising and they provide a foundation for promoting abstraction and breaking away from mental simulations. "Now we want to extend this teaching method to higher classes, incorporating multiplication and division as well," continues Gvozdic. "Moreover, the method could be applied to other subjects--such as science and grammar--for which intuitive conceptions constitute obstacles," adds Sander. The idea is to put semantic re-encoding to widespread use in schools and to incorporate it more amply into teaching methods.

Credit: 
Université de Genève

People think marketing and political campaigns use psychology to influence their behaviors

A new study has shown that whilst people think advertising and political campaigns exploit psychological research to control their unconscious behaviours, ultimately they feel the choices they make are still their own.

The research, led by Dr Magda Osman from Queen Mary University of London, asked people to volunteer everyday examples of when they feel their behaviours have been manipulated without them knowing.

Examples were collected from participants across the UK, US, Canada and Australia and categorised. Almost half of the examples were marketing or advertising-related. Another seven percent were associated with politics, primarily the use of political campaigning techniques to influence voting behaviours.

Following this, a new set of participants were asked to assess the examples and determine to what extent their free choice would be affected in these situations. Participants judged that whilst psychological research was used in advertising and political contexts to manipulate their unconscious behaviours, they were still able to maintain free choice when it came to their purchasing or voting decisions.

This contrasted with examples from therapeutic situations, such as hypnotherapy or the use of placebo treatments, which people felt had a greater impact on their unconscious behaviours and which they weren't able to consciously control.

Unlike typical psychological studies, in which the examples are constructed by the researchers, this study drew on real-life examples volunteered by the participants themselves.

Dr Osman, Reader in Experimental Psychology at Queen Mary, said: "This study reveals two critical things. The first is that in people's minds subliminal advertising still exists as a phenomenon, when really this is just a myth, as psychological research over the last 60 years has shown that it cannot actually influence our consumer behaviours."

"Secondly, although people across the different countries and individual backgrounds consistently volunteered the same kinds of examples, such as advertising and political campaigning, when assessing them, they still felt the choices they made in these contexts were under their own conscious control. This is important as we often see reports in the media on complex campaign tactics, particularly in politics, persuading people to make choices that they are unaware of, but our research suggests that people don't believe they actually work."

The types of examples commonly volunteered and the judgments then made about them were similar regardless of country and the age, gender, political affiliation, educational level, and religiosity of individuals.

Common examples generated by participants for marketing and advertising-related contexts included the use of subliminal advertising, commercial jingles or the placement of goods at eye level on supermarket shelves.

In the political category, most examples corresponded to targeted advertising or campaign tactics in which party leaders dress and behave in a certain way to influence people's voting decisions.

Credit: 
Queen Mary University of London

Delivery of healthy donor cells key to correcting bone disorder, UConn researchers find

Osteogenesis imperfecta (OI) is a group of genetic disorders that mainly affect bone. Patients with OI have bones that break easily, sometimes with no apparent cause.

The disorder is commonly caused by mutations in type 1 collagen or in molecules that participate in collagen processing, which result in a defective collagen bone matrix. Current treatments for OI aim to help correct the defective bone matrix, but fail to address the underlying collagen defect.

In the journal STEM CELLS, the research group of Dr. Ivo Kalajzic, lead investigator and professor, presents a study with potential for new treatments that address the root cause of weak and brittle bones. The work is the result of a concerted effort by postdoctoral fellow Dr. Ben Sinder, research instructor Dr. Sanja Novak, and associate professor Dr. Peter Maye, all in the Center for Regenerative Medicine and Skeletal Development in the Department of Reconstructive Sciences at the UConn School of Dental Medicine, along with Dr. Brya Matthews, now a researcher at the University of Auckland, New Zealand.

"This is a basic research study with potential for future translation into practice," Kalajzic said.

In this study, the researchers transplanted healthy donor bone marrow cells directly into the femur of mice with OI. One month post-transplantation, the researchers found donor-derived osteoblasts--cells that help form new bone tissue--on 18% of the injected bone surface, indicating engraftment. Long-term engraftment was then observed three and six months post-transplantation. The researchers found that healthy donor cells that replace mutant collagen can help improve bone strength and structure.

The study showed that healthy donor stem cells producing normal collagen have the potential to increase bone mass and correct the mutant collagen matrix in OI. The findings open the way for new therapies to help correct the adverse effects of OI.

Credit: 
University of Connecticut

Overuse of herbicides costing UK economy £400 million per year

image: Black-grass at a medium level in winter wheat field.

Image: 
Helen Hicks

Scientists from international conservation charity ZSL (Zoological Society of London) have for the first time put an economic figure on the herbicidal resistance of a major agricultural weed that is decimating winter-wheat farms across the UK.

Wheat is a vital ingredient in mince pies, biscuits and stuffing - and of course a large amount is fed to turkeys - so with the persistent weed making its way across British fields, the future of Christmas dinners containing wheat could be at risk.

Black-grass (Alopecurus myosuroides) is a native annual weed, but large infestations in farmers' fields can force them to abandon their winter wheat - the UK's main cereal crop. Farmers have been using herbicides to try to tackle the black-grass problem - but in many areas of England the agricultural weed is now resistant to these herbicides.
Black-grass, described as 'Western Europe's most economically significant weed', is costing the UK economy £400 million and 800,000 tonnes of lost wheat yield each year, with potential implications for national food security.

Published in Nature Sustainability today (23 December 2019), researchers from ZSL's Institute of Zoology, Rothamsted Research and Sheffield University have devised a new model which helps quantify the economic costs of the resistant weed and its impact on yield under various farming scenarios.

An estimated four million tonnes of pesticide are applied to crops worldwide each year. There are already 253 known herbicide-resistant weeds, and unlike the known costs to the economy of human antibiotic resistance - which run into trillions of dollars - estimates of the costs of resistance to agricultural xenobiotics (e.g. antimycotics, pesticides) are severely lacking.

Over-use of herbicides can lead to poor water quality, loss of wild plant diversity, and indirect damage to the surrounding invertebrate, bird and mammal biodiversity that relies on those plants.

The ZSL research found the UK is losing 0.82 million tonnes of wheat yield each year (equivalent to roughly 5% of the UK's domestic wheat consumption) due to herbicide-resistant black-grass. The worst-case scenario - where all fields contained large amounts of resistant black-grass - is estimated to result in an annual cost of £1 billion, with a wheat yield loss of 3.4 million tonnes per year.
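A back-of-envelope check on these figures (my own sketch; the wheat price and annual consumption below are assumptions, and the study's £400 million figure also covers management costs, not just the market value of lost grain):

```python
# Back-of-envelope check on the reported yield-loss figures.
WHEAT_PRICE_GBP_PER_TONNE = 150      # assumed farm-gate price
UK_WHEAT_CONSUMPTION_TONNES = 16e6   # assumed annual domestic consumption

def loss_summary(yield_loss_tonnes):
    """Return the grain value and the share of consumption lost."""
    value = yield_loss_tonnes * WHEAT_PRICE_GBP_PER_TONNE
    share = yield_loss_tonnes / UK_WHEAT_CONSUMPTION_TONNES
    return value, share

value, share = loss_summary(0.82e6)
print(f"£{value / 1e6:.0f}M of grain, {share:.1%} of consumption")
```

At these assumed values, 0.82 million tonnes comes to about 5% of domestic consumption, consistent with the figure quoted above.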

Lead author and postdoctoral researcher at ZSL's Institute of Zoology, Dr Alexa Varah said: "This study represents the first national-scale estimate of the economic costs and yield losses due to herbicide resistance, and the figure is shockingly higher than I think most would imagine.

"We need to reduce pesticide use nationwide, which might mean introducing statutory limits on pesticide use, or support to farmers to encourage reduced use and adoption of alternative management strategies. Allocating public money for independent farm advisory services and research and development could help too."

Industry management recommendations have so far advised using a mixture of herbicides, designed to prevent the evolution of 'specialist' resistance. Alarmingly, however, recent research has revealed that this method actually shifts plants towards a more generalist resistance, conferring resistance to chemicals the plants have never been exposed to.

Glyphosate is now one of the few herbicides that black-grass has not evolved resistance to, with farmers now reliant on repeated applications to control the weed. However, evidence from a recent study shows that resistance to glyphosate is now evolving in the field too.

Dr Varah added: "Farmers need to be able to adapt their management to implement more truly integrated pest management strategies - such as much more diverse crop rotations and strict field hygiene measures.

"Currently resistance management is the responsibility of individual practitioners, but this isn't a sustainable approach. It should be regulated through a national approach, linking the economic, agricultural, environmental and health aspects of this issue in a National Action Plan - that also targets glyphosate resistance.

"Understanding the economic and potential food security issues is a vital step, before looking at biodiversity, carbon emissions and water quality impacts in greater detail. We hope to use this method to aid the development of future models to help us understand how British farmers battling black-grass could do it in a way that is more beneficial to biodiversity like insects, mammals, wild plants and threatened farmland bird species like skylarks, lapwing and tree sparrows - unearthing how their numbers are linked to changes in farming practices."

Credit: 
Zoological Society of London

Researchers to develop a theory of transients in graphene

image: Scientists to develop a theory of transients in graphene.

Image: 
Peter the Great St.Petersburg Polytechnic University

The article "Equilibration of energies in a two-dimensional harmonic graphene lattice," published in Philosophical Transactions of the Royal Society, the oldest scientific journal in the world, considers the behavior of graphene as it leaves the state of thermal equilibrium and the process by which it returns to that state. The study was conducted by Vitaly Kuzkin, deputy director of the Higher School of Theoretical Mechanics and of the Research and Educational Centre "Gazpromneft-Polytech" at Peter the Great St. Petersburg Polytechnic University (SPbPU), in collaboration with Igor Berinskii from the School of Mechanical Engineering at Tel Aviv University (Israel), working in the fields of materials science, solid mechanics and the dynamics of mechanical systems.

"Our research group develops a theory that describes the transition to thermal equilibrium in crystals that start in a nonequilibrium state. Such a state can be caused, for example, by high-speed laser exposure or the passage of shock waves. In this paper, we applied this theory to graphene," notes Vitaly Kuzkin. "Usually, transients occur rather quickly and have high frequencies, but graphene turned out to be unique here - some transients in graphene have very low frequencies."

The research results are important for the investigation of heat transport and other nonequilibrium thermodynamic processes in graphene.
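The phenomenon the theory describes - a lattice driven out of equilibrium redistributing its energy - can be illustrated with a toy model. The sketch below is a 1-D harmonic chain, not the authors' 2-D graphene lattice, started with all of its energy kinetic (as after an idealized laser kick):

```python
import numpy as np

# Toy model: a 1-D harmonic chain of unit masses and springs. As its
# normal modes dephase, kinetic and potential energy equilibrate
# toward equal shares.
rng = np.random.default_rng(0)
n, dt, steps = 256, 0.01, 20000

x = np.zeros(n)              # zero displacements: no potential energy
v = rng.standard_normal(n)   # random velocities: all energy kinetic
v -= v.mean()                # remove centre-of-mass drift

def accel(x):
    # nearest-neighbour springs with periodic boundary conditions
    return np.roll(x, 1) - 2.0 * x + np.roll(x, -1)

for _ in range(steps):       # velocity-Verlet integration
    a = accel(x)
    x += v * dt + 0.5 * a * dt**2
    v += 0.5 * (a + accel(x)) * dt

kinetic = 0.5 * np.sum(v**2)
potential = 0.5 * np.sum((x - np.roll(x, -1))**2)
print(kinetic / (kinetic + potential))   # settles near 0.5 at equilibrium
```

After many oscillation periods the modes dephase and the kinetic share settles near one half, the equilibrium partition for a harmonic system; the slow, low-frequency transients the authors found in graphene are deviations from this simple picture.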

"Graphene is a very promising material. It has many useful properties such as strength, stiffness, and high thermal and electrical conductivity. It can be used in flexible electronics, wearable devices, and the creation of composite materials," explains Vitaly Kuzkin.

Credit: 
Peter the Great Saint-Petersburg Polytechnic University

Where do baby sea turtles go? New research technique may provide answers

image: Sea turtle hatchling heads to the ocean, where it will spend several years - a period often referred to as the "lost years."

Image: 
G. Stahelin

A team of Florida researchers and their collaborators created a first-of-its-kind computer model that tracks where sea turtle hatchlings go after they leave Florida's shores, giving scientists a new tool to figure out where young turtles spend their "lost years."

Nathan Putman, a biologist with LGL Ecological Research Assoc. based in Texas, led the study, which included 22 collaborators across Mexico, the southeastern United States, the Caribbean, and Europe. Co-authors include UCF Associate Professor Kate Mansfield, who leads UCF's Marine Turtle Research Group, and UCF assistant research scientist Erin Seney.

"The model gives community groups, scientists, nonprofit agencies and governments across borders a tool to help inform conservation efforts and guide policies to protect sea turtle species and balance the needs of fisheries and other human activity," Putman said.

The team's simulation model and findings were published this week in the online journal Ecography.

The model is built to predict loggerhead, green turtle and Kemp's ridley abundance, according to the authors. To create the model, the team looked at ocean circulation data over the past 30 years. These data are known to be reliable and are routinely used by the National Oceanic and Atmospheric Administration and other agencies. The team also used sea turtle nesting and stranding data from various sources along the Caribbean, Gulf of Mexico and Florida coasts. The dataset includes more than 30 years of information from UCF, which has been monitoring sea turtle nests in east Central Florida since the late 1970s. Mansfield, Seney and Putman previously worked together on other sea turtle studies in the Gulf of Mexico.
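The general idea behind such dispersal models can be sketched as a Lagrangian particle simulation. Everything below is illustrative - the gyre-like current field, release point and eddy noise are made up, whereas the published model draws on decades of real circulation data:

```python
import numpy as np

# Release virtual hatchlings from a nesting beach and advect them
# through an ocean-current field sampled at their positions.
rng = np.random.default_rng(1)

def current(lon, lat):
    """Toy velocity field (degrees/day): rotation about (-80, 25)."""
    u = 0.05 * (lat - 25.0)    # eastward component
    v = -0.05 * (lon + 80.0)   # northward component
    return u, v

# release 500 hatchlings near a Florida beach and drift for one year
lon = -80.5 + 0.1 * rng.standard_normal(500)
lat = 27.0 + 0.1 * rng.standard_normal(500)
for _ in range(365):
    u, v = current(lon, lat)
    lon += u + 0.02 * rng.standard_normal(500)   # advection + eddy noise
    lat += v + 0.02 * rng.standard_normal(500)

print(lon.mean(), lat.mean())   # centre of the dispersal patch after a year
```

Comparing where the simulated particles end up with real stranding records is, in spirit, how such a model can be validated against observations.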

"The combination of big data is what made this computer model so robust, reliable and powerful," Putman said.

The group used U.S. and Mexico stranding data--information about where sea turtles washed ashore for a variety of reasons--to check if the computer model was accurate, Putman said. The model also accounts for hurricanes and their impact on the ocean, but it does not take into consideration manmade threats such as the 2010 Deepwater Horizon oil spill in the Gulf of Mexico, which occurred during the years analyzed in the study.

The computer model also predicts where the turtles go during their "lost years" - the period after hatchlings break free from their eggs on the shoreline and head into the Gulf of Mexico and the northwest Atlantic. The turtles spend years among sargassum in the open ocean, and data about that time are scarce. Better data exist once they are larger juveniles and return to forage closer to coastlines. What young sea turtles do between hatching and returning to nearshore waters - the "lost years" - forms the foundation of sea turtle populations. Understanding where and when the youngest sea turtles go is critical to understanding the threats they may encounter, and to better predicting population trends throughout the long lives of these species, said Mansfield.

This work was supported in part by a National Academy of Sciences gulf research program grant awarded to Mansfield, Seney and Putman to synthesize available sea turtle datasets across the Gulf of Mexico.

"While localized data collection and research projects are important for understanding species' biology, health and ecology, the turtles studied in one location typically spend different parts of their lives in other places, including migrations from offshore to inshore waters, from juvenile to adult foraging grounds, and between foraging and nesting areas," said Seney, who helped coordinate data compilation from the multiple locations. "Our extensive collaborations on this project allowed us to study the Gulf of Mexico's three most abundant sea turtle species and to integrate nesting beach data for distant nesting populations that ended up having close connections to the 1- to 3-year-old turtles living and stranding along various portions of the U.S. Gulf coast. Without the involvement of our Mexican and Costa Rican collaborators, a big piece of this picture would have been missing."

Credit: 
University of Central Florida

Moms' obesity in pregnancy is linked to lag in sons' development and IQ

December 23, 2019 -- A mother's obesity in pregnancy can affect her child's development years down the road, according to researchers who found lagging motor skills in preschoolers and lower IQ in middle childhood for boys whose mothers were severely overweight while pregnant. A team of epidemiologists, nutritionists and environmental health researchers at Columbia University Mailman School of Public Health and the University of Texas at Austin found that the differences are comparable to the impact of lead exposure in early childhood. The findings are published in BMC Pediatrics.

The researchers studied 368 mothers and their children, all from similar economic circumstances and neighborhoods, during pregnancy and when the children were 3 and 7 years of age. At age 3, the researchers measured the children's motor skills and found that maternal obesity during pregnancy was strongly associated with lower motor skills in boys. At age 7, they again measured the children and found that the boys whose mothers were overweight or obese in pregnancy had scores 5 or more points lower on full-scale IQ tests, compared to boys whose mothers had been at a normal weight. No effect was found in the girls.

"What's striking is, even using different age-appropriate developmental assessments, we found these associations in both early and middle childhood, meaning these effects persist over time," said Elizabeth Widen, assistant professor of nutritional sciences at UT Austin and a co-author. "These findings aren't meant to shame or scare anyone. We are just beginning to understand some of these interactions between mothers' weight and the health of their babies."

It is not altogether clear why obesity in pregnancy would affect a child later, though previous research has found links between a mother's diet and cognitive development, such as higher IQ scores in kids whose mothers have more of certain fatty acids found in fish. Dietary and behavioral differences may be driving factors, or fetal development may be affected by some of the things that tend to happen in the bodies of people with a lot of extra weight, such as inflammation, metabolic stress, hormonal disruptions and high amounts of insulin and glucose.

The researchers controlled for several factors in their analysis, including race and ethnicity, marital status, the mother's education and IQ, and whether the children were born prematurely or exposed to environmental toxic chemicals like air pollution. What the pregnant mothers ate and whether they breastfed were not included in the analysis.
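Controlling for such factors is typically done by including them as covariates in a regression. A toy sketch (simulated data, not the study's; a 5-point deficit is built into the simulation to mirror the reported gap):

```python
import numpy as np

# Covariate adjustment: regress child IQ on maternal obesity while
# holding confounders (maternal IQ, education) in the design matrix.
rng = np.random.default_rng(2)
n = 368                                   # same sample size, toy values
maternal_iq = rng.normal(100, 15, n)
education = rng.integers(10, 18, n).astype(float)   # years of schooling
obese = rng.random(n) < 0.3                         # exposure indicator

# simulate a true adjusted effect of -5 IQ points for exposed children
child_iq = (40 + 0.5 * maternal_iq + 0.8 * education
            - 5.0 * obese + rng.normal(0, 8, n))

# design matrix: intercept, exposure, confounders
X = np.column_stack([np.ones(n), obese.astype(float),
                     maternal_iq, education])
beta, *_ = np.linalg.lstsq(X, child_iq, rcond=None)
print(f"adjusted obesity effect: {beta[1]:.1f} IQ points")   # near -5
```

With the confounders in the design matrix, the coefficient on the exposure recovers the adjusted effect rather than the raw group difference.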

The team also examined and accounted for the nurturing environment in a child's home, looking at how parents interacted with their children and if the child was provided with books and toys. A nurturing home environment was found to lessen the negative effects of obesity.

According to Widen and senior author Andrew Rundle, DrPH, associate professor of Epidemiology at Columbia Mailman School, while the results showed that the effect on IQ was smaller in nurturing home environments, it was still there.

This is not the first study to find that boys appear to be more vulnerable in utero. Earlier research found lower performance IQ in boys, but not girls, whose mothers were exposed to lead, and a 2019 study suggested that boys whose mothers were exposed to fluoride during pregnancy scored lower on an IQ assessment.

Because childhood IQ is a predictor of education level, socio-economic status and professional success later in life, researchers say there is potential for impacts to last into adulthood.

The research team advised women who are obese or overweight when they become pregnant to eat a well-balanced diet that is rich in fruits and vegetables, take a prenatal vitamin, stay active and make sure to get enough fatty acids such as the kind found in fish oil. Giving children a nurturing home environment also matters, as does seeing a doctor regularly, including during pregnancy to discuss weight gain. Working with your doctor and talking about what is appropriate for your circumstances are recommended.

Credit: 
Columbia University's Mailman School of Public Health