Earth

Fishing alters fish behaviour and features in exploited ecosystems

image: Labrus bergylta spotted morphotype

Image: 
Olga Reñones

Not all specimens of the same species are the same: there is marked variability within a single population, and sometimes these morphological differences translate into different behaviour.

A study by the UB shows that fishing alters resource distribution and, therefore, the behaviour of two morphotypes of the same fish species, Labrus bergylta. These results, published in the journal Marine Ecology Progress Series, show that fishing makes it harder to understand how the features of species have evolved in exploited ecosystems, since it alters how animals behave and feed. The results also confirm the importance of marine reserves for understanding the original behaviour of these ecosystems before human intervention.

The article is signed by Lluís Cardona, Àlex Aguilar and Fabiana Saporiti, researchers from the Department of Evolutionary Biology, Ecology and Environmental Sciences and the Biodiversity Research Institute (IRBio) of the University of Barcelona. Experts from the Spanish Institute of Oceanography and the University of Essex (United Kingdom) also took part in the study.

The existence of different forms of the same species, called morphotypes, is frequent in vertebrate animals and depends to a large extent on the abundance of available prey during the first years of life, as well as on competition with congeners. To find out whether two morphotypes of the same species differ in their use of resources, and whether this diversity is affected by fishing, the UB team launched a study on Labrus bergylta, a fish of the order Perciformes and the wrasse family, very common on the northern coasts of the Iberian Peninsula and the Atlantic coasts of Europe.

The researchers compared the patterns of habitat use and feeding of two morphotypes of this fish, one plain and the other spotted, in two different areas: the Cíes Islands (Vigo), a protected marine area where recreational fishing is not allowed, and contiguous areas open to fishing. To this end, they first counted visually the number of specimens of each morphotype in the two areas and then used stable isotope analysis of carbon and nitrogen to identify differences in diet.

Fishing exploitation obscures the understanding of original trophic niches

The results show that the two morphotypes differ consistently in their use of the habitat both inside and outside the marine reserve, but only in the marine reserve do they also differ in their diet. According to the researchers, this is because of fishing: by reducing the size of the population, it reduces intraspecific competition. "The distribution of resources between these two varieties depends on density, so the current behavior in areas open to fishing is not informative about their original trophic niches. This shows that many of the features that we see in exploited wild species may have more to do with that exploitation than with adaptations to the natural environment, since it has been transformed by humans", says Lluís Cardona.

These conclusions show the importance of protected areas to understand the behavior of marine species. "Comparing the biology of the species inside and outside the marine reserves and other protected areas allows us to understand the changes in the biology of the exploited species, which otherwise would not be clear", highlights Lluís Cardona.

Given the situation, the authors point out the importance of analyzing how these changes are transferred to the rest of the trophic web and see if the same happens with other species in other regions. "This is particularly relevant for the North Atlantic Ocean, where a century of intense human exploitation has decimated the populations of most long-lived marine species", concludes the researcher.

Credit: 
University of Barcelona

How hope can make you happier with your lot

Having hope for the future could protect people from risky behaviours such as drinking and gambling - according to new research from the University of East Anglia.

Researchers studied 'relative deprivation' - the feeling that other people have things better than you in life.

They wanted to find out why only some people experiencing this turn to escapist and risky behaviours such as drinking alcohol, taking drugs, over-eating or gambling, while others do not.

And they found that the answer lies in hope.

Postgraduate researcher Shahriar Keshavarz, from UEA's School of Psychology, said: "I think most people have experienced relative deprivation at some point in their lives. It's that feeling of being unhappy with your lot, the belief that your situation is worse than others, that other people are doing better than you.

"Roosevelt famously said that 'comparison is the thief of joy'. It's that feeling you have when a friend buys a new car, or your sister gets married, or a colleague finds a better job or has a better income.

"Relative deprivation can trigger negative emotions like anger and resentment, and it has been associated with poor coping strategies like risk taking, drinking, taking drugs or gambling.

"But not everyone scoring high on measures of relative deprivation makes these poor life choices. We wanted to find out why some people seem to cope better, or even use the experience to their advantage to improve their own situation.

"There is a lot of evidence to show that remaining hopeful in the face of adversity can be advantageous, so we wanted to see if hope can help people feel happier with their lot and buffer against risky behaviours."

The research team carried out two lab-based experiments with 55 volunteers. The volunteers were quizzed to find out how strongly they felt relative deprivation and hope.

The researchers also induced feelings of relative deprivation in the volunteers, by telling them how deprived they were compared to their peers, based on a questionnaire about their family income, age and gender.

They then took part in specially designed gambling games that involved risk-taking and placing bets with a chance to win real money.

Dr Piers Fleming, also from UEA's School of Psychology, said: "The aim of this part of the study was to see whether feeling relatively deprived - elicited by the knowledge that one has less income than similar others - causes greater risk-taking among low-hopers and decreased risk-taking among high-hopers.

"We looked at the people who scored high for relative deprivation, the ones that thought their situation in life was worse than those around them. And we looked at those who also scored high for hope.

"We found that the volunteers who scored high for hope, were much less likely to take risks in the game. Those who weren't too hopeful, were a lot more likely to take risks."

Another experiment looked at whether hope helped people in the real world. The researchers worked with 122 volunteers who had gambled at least once in the last year. The volunteers completed questionnaires to gauge how hopeful they were, whether they felt relatively deprived, and their level of problem gambling.

Of the participants, 33 had no gambling problems (27 per cent), 32 had a low level of problems (26 per cent), 46 had a moderate level of problems leading to some negative consequences (38 per cent) and 11 were problem gamblers with a possible loss of control (9 per cent).
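As a consistency check, the reported percentages follow directly from the group counts given above (a quick sketch; the group labels and variable names are ours, not the study's):

```python
# Gambling-severity breakdown from the UEA study (counts taken from the text).
counts = {
    "no problems": 33,
    "low-level problems": 32,
    "moderate problems": 46,
    "problem gamblers": 11,
}

total = sum(counts.values())  # all 122 participants

# Each group's share of the sample, rounded to whole percentage points.
shares = {group: round(100 * n / total) for group, n in counts.items()}

print(total)   # 122
print(shares)  # {'no problems': 27, 'low-level problems': 26,
               #  'moderate problems': 38, 'problem gamblers': 9}
```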

Mr Keshavarz said: "When we looked at these scores compared to scores for hope and relative deprivation, we found that increased hope was associated with a decreased likelihood of losing control of gambling behaviour - even in those who experienced relative deprivation.

"Interestingly, our study found no significant relation between hope and gambling severity among relatively privileged persons. We don't know why this is, but it could be that they are gambling recreationally or better able to stop when the fun stops."

The research team say that nurturing hope in people who are unhappy with their lot could protect against harmful behaviours like drinking and gambling.

Credit: 
University of East Anglia

African American youth who receive positive messages about their racial group may perform better in school

Youth of color represent over half of the school-aged population (kindergarten through twelfth grade) in public schools in the United States. This creates a need for evidence-driven approaches that address the pervasive Black-White achievement gap. A new longitudinal study shows that African American youth who receive positive messages about their racial group in school achieved better school grades one to two years later.

The findings were published in an article written by researchers at the University of Pittsburgh that appears in Child Development, a journal of the Society for Research in Child Development.

"African American youth who received positive messages from educators and school personnel about their racial group had better grades up to 1-2 years later," said Juan Del Toro, postdoctoral research scientist at the University of Pittsburgh. "Our results suggest that African American youth are more likely to be successful in school when they feel a positive sense of community and interdependence."

The study initially assessed 961 sixth-, eighth- and tenth-grade African American students enrolled in 17 public schools throughout the Mid-Atlantic region of the United States during the 2016-2017 academic year, following them over three academic years. Participating students completed 45-minute computer-based surveys that measured the following:

Academic performance: grade point averages ranging from 0-4 were obtained for each academic year of the study period.

School cultural socialization: adolescents' perceptions about whether and how their school leaders and educators provided positive messages about their own racial group.

Ethnic-racial identity development: identity exploration and identity commitment (the feelings of connection and belonging to one's ethnic-racial group).

Researchers used three waves of yearly longitudinal data to examine whether:

adolescents' perceptions of school cultural socialization (engagement in endorsing racial pride messages) predict identity exploration, identity commitment, and overall grade point averages over a three-year period, and

a longitudinal link exists between school cultural socialization and school grades conveyed through identity exploration and identity commitment.

"Because the school environment represents a prominent developmental context during adolescence, elucidating the consequences of school cultural socialization is critical to understanding whether such practices effectively promote African American youth's academic performance," said Ming-Te Wang, professor at the University of Pittsburgh. "By understanding that schools can act as agents of positive cultural socialization, we can better inform schools as to why and how they should engage in practices that promote African American pride, history and heritage."

The authors recognize several limitations of the present study that future studies should address, including: the self-reporting by students of both school cultural socialization and student demographics, lack of comparability across students in the way grade point averages were reported and missing data for the most disadvantaged students in the sample.

Credit: 
Society for Research in Child Development

Tepary beans -- a versatile and sustainable native crop

image: Tepary beans created through crossbreeding display variation for seed coat colors.

Image: 
Tomilee Turner

Agriculture accounts for more than a third of water use in the United States. In drier parts of the country, like the southwestern U.S., that fraction can be much higher. For example, more than 75% of New Mexico's water use is for agriculture.

Richard Pratt, a member of the Crop Science Society of America, studies native crops that can enhance food security while reducing water use. "Water sustainability and food security are tightly linked," he explains.

Pratt recently presented his research at the virtual 2020 ASA-CSSA-SSSA Annual Meeting, hosted by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America.

One of the top candidates for enhancing food security with less water is the tepary bean. It is a native crop that has been cultivated for thousands of years.

"Tepary beans, or teparies, are basically an all-around champion of desert adaptation in an agricultural context," says Pratt.

Teparies are relatively drought and heat tolerant. That is especially true when compared with their common bean cousins like pintos and kidney beans.

Since teparies need less water than many other bean crops, they can be one solution to dwindling water resources.

"We are facing growing water demand coupled with decreasing water supply and quality," Pratt says. "This gap continues to widen, and the status quo is unsustainable."

Native crops, like teparies, can shift the status quo in different ways. For example, one approach that could lead to using less water for farming is breeding more heat and drought tolerant crops. "That way we can get more 'crop per drop'," says Pratt.

Heirloom crop varieties from the Southwest, like teparies and their wild relatives, can be used as breeding resources.

"It takes time to do the breeding, but less thirsty crops grown more efficiently will help," explains Pratt.

A more radical approach would be changing what crops are grown in arid parts of the U.S., such as the Southwest.

Farmers may consider shifting away from 'thirsty' crops, such as pecans, maize and common beans. In their place, farmers could grow crops like pistachios, sorghum and teparies.

But heirloom varieties of native crops can have lower yields than modern varieties. Also, investments in new processing facilities may be needed, and developing markets for new crops can take time.

"There is no free lunch," says Pratt. "But on the brighter side, native crops may offer unique nutritional or quality traits that consumers are looking for."

For example, teparies have an excellent nutritional profile. They can be used for dry beans or as a forage crop. In fact, select varieties of teparies have nutritional profiles comparable to that of alfalfa, a popular forage crop.

Teparies can also be an effective cover crop. These are crops planted for soil management purposes, such as reducing erosion or enhancing soil health and nutrients.

Leguminous cover crops, such as clovers or hairy vetch, have root nodules housing microbes. These microbes can 'fix' or add atmospheric nitrogen to the soil, increasing productivity.

Tepary beans also typically have root nodules, but it was unclear whether these nodules would be present when teparies are grown in the hot desert soils of the American Southwest.

"It was great to dig up tepary roots and see nodules that bring in 'free' nitrogen into the cropping system," says Pratt. This finding shows that teparies can be a particularly effective cover crop.

"We now have confidence to go forward with teparies as a prospective forage and cover crop," says Pratt.

Future work will focus on finding ways to improve teparies as a crop. For example, tepary bean pods can release the seeds before harvest. "That poses a risk for seed production," says Pratt. "Further research is needed to reduce that problem."

Ultimately, teparies can help improve food security and water management, expand the availability of high quality, locally produced food, and retain agriculture as a part of a vibrant economy.

Richard Pratt is a plant scientist and Director of the Cropping Systems Research Innovation Program at New Mexico State University. This work was supported by the New Mexico State University Agricultural Experiment Station and the United States Department of Agriculture National Institute of Food and Agriculture Hatch Project (Accession 1010445) entitled "Tepary bean: a prospective non-thirsty forage and cover crop."

Credit: 
American Society of Agronomy

Scientists precisely predict intricate evolutions of multiple-period patterns in bilayers

image: (a) Flat-wrinkle-tripling-fold transition with substrate precompression; (b) flat-wrinkle-doubling-quadrupling-fold transformation under direct compression; (c) flat-wrinkle-ridge transition with substrate pre-stretch and large modulus ratio; (d) flat-wrinkle-hierarchical transition with substrate pre-stretch and small modulus ratio.

Image: 
@Science China Press

Surface instability of compliant film/substrate bilayers has attracted considerable interest due to its broad applications, such as wrinkle-driven surface renewal and antifouling, shape-morphing for camouflaging skins, and micro/nano-scale surface patterning control. However, it remains a challenge to precisely predict and continuously trace secondary bifurcation transitions in the nonlinear post-buckling region. Fundamental understanding and quantitative prediction of morphological evolution and pattern selection are, in fact, crucial for the effective use of wrinkling as a tool for morphological design.

Recently, an article entitled "Intricate evolutions of multiple-period post-buckling patterns in bilayers", published in SCIENCE CHINA Physics, Mechanics & Astronomy by a soft matter mechanics group at Fudan University, reported rich successive post-buckling phenomena involving multiple-period pattern transitions (see Fig. 1), based on lattice models of hyperelastic bilayers.

The researchers developed lattice models to quantitatively predict the nonlinear surface morphology evolution, with multiple mode transitions, in hyperelastic bilayers. Based on these models, they revealed an intricate post-buckling phenomenon involving successive bifurcations: flat-wrinkle-doubling-quadrupling-fold. They examined the effects of modulus ratio, dimension and loading type on pattern formation and evolution. With high substrate pre-tension, hierarchical wrinkles in the form of alternating packets of large and small undulations appear in a bilayer with a low modulus ratio at the secondary bifurcation, while a wrinkle-to-ridge mode transition occurs with a relatively high modulus ratio. With moderate substrate pre-compression and modulus ratio, the bilayer tends to evolve into a period-tripling mode. Lastly, they provided phase diagrams based on the neo-Hookean and Arruda-Boyce constitutive laws to characterize the influence of pre-stretch and modulus ratio on pattern selection (see Fig. 2). Both hyperelastic models demonstrate the same trend of mode transition and similar deformation shapes in film/substrate bilayers.

This work not only advances the fundamental understanding of nonlinear morphological transitions in soft bilayer materials, but also provides a way to quantitatively predict and design multiple-period or localized surface patterns, which promises to guide smart surface regulation in broad applications.

Credit: 
Science China Press

The melting of the Greenland ice sheet could lead to a sea level rise of 18 cm in 2100!

image: Evolution of the surface mass balance (snowfall - melting) with the old (cmip5) and new (cmip6) scenarios. The blue colour indicates a mass loss in mm/year

Image: 
©Université de Liège / X.Fettweis

A new study led by researchers from the Universities of Liège and Oslo, applying the latest climate models, including the MAR model, predicts a 60% greater melting of the Greenland ice sheet than previously expected. These data will be included in the next IPCC report. The study is published in Nature Communications.

The Greenland ice sheet, the second largest after Antarctica's, covers an area of 1.7 million square kilometres. Its total melting could lead to a significant rise in ocean levels, of up to 7 metres. Although we are not there yet, previous climate-model scenarios have just been revised upwards, now predicting a rise in sea levels of up to 18 cm by 2100 (compared with the 10 cm announced previously) from the increase in surface melting alone. In the framework of the next IPCC report (AR6), due in 2022, and of the ISMIP6 project, the Laboratory of Climatology at the University of Liège applied the MAR climate model it develops to downscale the old and new IPCC scenarios. The results show that, for the same evolution of greenhouse gas concentrations up to 2100, the new scenarios predict a 60% greater surface melting of the Greenland ice sheet than was estimated for the previous IPCC report (AR5, 2013).

The MAR model was the first to demonstrate that the Greenland ice sheet would melt further with a warming of the Arctic in summer. "While our MAR model suggested that in 2100 the surface melting of the Greenland ice sheet would contribute to a rise in the oceans of around ten centimetres in the worst-case scenario (i.e. if we do not change our habits)," explains Stefan Hofer, postdoc researcher at the University of Oslo, "our new projections now suggest a rise of 18 cm." As the new IPCC scenarios are based on models whose physics have been improved, in particular by incorporating a better representation of cloudiness, and whose spatial resolution has been increased, these new projections should in theory be more robust and reliable.
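The scale of the revision can be put in perspective with some quick arithmetic on the reported figures (a sketch only; note that the 60% figure quoted in the study refers to surface melting, while the centimetre figures refer to the resulting sea-level contribution, so the two percentages describe different quantities and need not match):

```python
# Reported sea-level-rise contributions from Greenland surface melt by 2100.
old_projection_cm = 10  # previous scenarios (AR5-era models)
new_projection_cm = 18  # new scenarios (CMIP6 models, MAR downscaling)

extra_cm = new_projection_cm - old_projection_cm
relative_increase = 100 * extra_cm / old_projection_cm

print(extra_cm)           # 8 additional centimetres
print(relative_increase)  # 80.0 (% more projected sea-level contribution)
```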

The team of the Laboratory of Climatology was the first to downscale these scenarios on the Greenland ice cap. "It would now be interesting," says Xavier Fettweis, researcher and director of the Laboratory, "to analyse how sensitive these future projections are to the MAR model we are developing, by downscaling these scenarios with models other than MAR, as we have done for the present climate (GrSMBMIP)." This study will be carried out within the framework of the European project PROTECT (H2020), whose objective is to assess and project changes in the terrestrial cryosphere, with fully quantified uncertainties, in order to produce robust global, regional and local projections of sea level rise over a range of time scales. https://protect-slr.eu/

Credit: 
University of Liège

New fullerene crystal production method 50 times faster than predecessor

image: (a) Photo of produced FFMP on a quartz plate and (b)-(d) scanning electron microscope images of samples.

Image: 
Yokohama National University

Researchers from Yokohama National University and the University of Electro-Communications in Japan have developed a highly efficient technique for producing a unique fullerene crystal, called fullerene finned-micropillar (FFMP), that is of significant use for next-generation electronics.

Fullerene is a popular choice for developing technologies not only due to its small size but also because it is very durable and has semiconductor properties, making it a good candidate for devices such as field-effect transistors, solar cells, superconductive materials, and chemical sensors. The material is in use now; however, it is difficult to handle because fullerene is nano-scaled and generally comes in a powdery state. As a solution to this problem, one-dimensional fullerene crystals are produced and used.

"Producing one-dimensional fullerene crystals requires expert skills and takes several days with typical production methods. In this study, we succeeded in developing a very simple fabricating method by using an annealing process," said Dr. Takahide Oya, Associate Professor at Yokohama National University and corresponding author of the study.

In a paper published in Scientific Reports in November 2020 (DOI: 10.1038/s41598-020-76252-6), the team details how they used a small heating apparatus that accepted fullerene and heated it to a temperature of 1,173 kelvin for about an hour. The fullerene deposited in the apparatus de-crystallizes due to the heat and subsequently re-crystallizes as the temperature is lowered. This overall process, known as annealing, is over fifty times faster than the older techniques for producing fullerene crystals.
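The temperature and the claimed speed-up are easy to sanity-check (a rough sketch; the "several days" baseline for the older methods is our reading of the text, not an exact figure from the paper):

```python
# Annealing temperature reported in the study, converted from kelvin to Celsius.
anneal_K = 1173
anneal_C = anneal_K - 273.15
print(round(anneal_C))  # 900 (degrees Celsius)

# Rough speed-up: the new method takes about 1 hour, versus "several days"
# for older techniques. Assuming 3 days as an illustrative baseline:
old_hours = 3 * 24
new_hours = 1
print(old_hours / new_hours)  # 72.0, comfortably over the stated fifty-fold
```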

"With our method, one-dimensional fullerene crystals can be mass-produced in an hour. The fullerene crystals we produced, which we named 'fullerene finned-micropillar (FFMP)', have a distinctive structure," said Oya.

The team is also confident that the fullerene crystals produced with this new, more efficient process will have qualities similar to those of fullerene crystals, such as fullerene nanowhiskers, produced using the older methods.

"FFMP is expected to have electrical conductivity and n-type semiconductor functionality," Oya said.

More tests are required to confirm that FFMP does indeed retain the qualities so useful for electronic implementation, but positive results could mean solar cells with much higher efficiency or extremely small circuits integrated into flexible devices, for example.

The team has already examined this annealing under different environmental conditions, temperatures, and heating times. Having studied the process, the team now has its sights set on characterizing FFMP as an electrical component. "As the next step of this study, we expect to confirm the electrical conductivity and n-type semiconductor functionality, because ordinary fullerene has such properties. In addition, we expect to develop a 'fullerene-finned nano pillar (FFNP)' by modifying the process. We believe that FFMPs (or FFNPs) will be useful for field-effect transistors, organic photovoltaics, and so on in the near future," said Oya.

This will not be the first time Oya and his team have tackled special, small scale materials for use in electronics.

"We already have techniques for making carbon nanotube (CNT) composite papers and CNT composite threads/textiles as unique composite materials, CNT being a one-dimensional nano-carbon material," said Oya. "Therefore, we will develop FFMP composite materials along with their applications. We believe these useful FFMP composites (and their combinations with CNT composites) will be used in our daily lives in the near future."

Credit: 
Yokohama National University

Mystery solved: new study shows link between hot and dry weather and air quality in Korea

image: Air pollution from human activities and dry, sunny weather combine to increase surface ozone concentrations, and ground-level ozone is very harmful to health

Image: 
Daniel Moqvist on Unsplash

While ozone in the stratosphere acts as a barrier that protects the Earth from ultraviolet radiation, ground-level (or tropospheric) ozone is a dangerous trace gas that can cause serious health problems. This ozone is the result of photochemical reactions between nitrogen oxides and volatile organic compounds, two major air pollutants.

Over the past decades, East Asia has witnessed a marked degradation of air quality, especially in terms of ground-level ozone, consistent with human activity. However, in Korea, the specific reasons behind the increase in ozone levels during warm seasons have remained a mystery to atmospheric scientists.

To shed light on this issue, a team of scientists, including Prof Jin-Ho Yoon from Gwangju Institute of Science and Technology, Korea, recently conducted a study that was published in Atmospheric Environment. They focused on the relationship between large-scale weather patterns (called synoptic-scale weather) and surface ozone concentration. To do this, they used synoptic weather data from 17 airport meteorological stations and hourly observations of ground-level ozone concentrations from 306 monitoring sites.

One of the main findings of the study was that a particular synoptic weather pattern called 'dry tropical' was consistently associated with high ozone concentrations. This is because ozone formation requires sunlight, which means that dry and warm atmospheric conditions are favorable for its formation.

Most importantly, the researchers found that dry tropical weather had steadily become more frequent in Korea over the past 50 years, which is consistent with the gradual increase in tropospheric ozone levels. "We estimate that tropospheric ozone concentration could increase by 3.5% if the frequency of dry tropical weather doubles, and by an alarming 7.5% if it triples," comments Prof. Yoon. "Our results imply that future air quality regulations in Korea should be issued together with those related to global and regional warming," he adds.
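The two estimates quoted by Prof. Yoon can be turned into a toy interpolation for intermediate scenarios (a sketch only; the study's actual relationship need not be linear between these points, and the function name is ours):

```python
def ozone_increase_pct(freq_multiplier: float) -> float:
    """Linearly interpolate the estimated tropospheric ozone increase (%)
    between the two scenarios reported in the study: a doubling (x2) of
    dry tropical weather frequency -> +3.5%, a tripling (x3) -> +7.5%."""
    x1, y1 = 2.0, 3.5  # doubling scenario
    x2, y2 = 3.0, 7.5  # tripling scenario
    slope = (y2 - y1) / (x2 - x1)
    return y1 + slope * (freq_multiplier - x1)

print(ozone_increase_pct(2.0))  # 3.5 (as reported)
print(ozone_increase_pct(3.0))  # 7.5 (as reported)
print(ozone_increase_pct(2.5))  # 5.5 (a hypothetical intermediate scenario)
```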

Overall, this study provides valuable insights to tackle the long-standing mystery of tropospheric ozone in Korea. Lead author Dr. Hyun Cheol Kim from the National Oceanic and Atmospheric Administration and the University of Maryland, USA, remarks: "Understanding the relationship between synoptic weather patterns and surface ozone concentration will help us assess the contribution of meteorological conditions to regional air quality and establish an effective early warning system." Let us hope this study brings more attention to the serious and interlinked issues of air pollution and climate change so that decisionmakers can act in time.

Credit: 
GIST (Gwangju Institute of Science and Technology)

Primitive fish fossils reveal developmental origins of teeth

image: Part of a jawbone of the 422-million-year-old fossil bony fish Lophosteus, visualised with a high-resolution X-ray technique. On the right, the surface of the jawbone is shown in grey. In the middle, exposed teeth are highlighted in gold and dermal odontodes in shades of purple, pink and red. On the left, the bone itself is made transparent, revealing internal blood vessels and pulp cavities, shown in blue and green, as well as the embedded teeth and dermal odontodes.

Image: 
Chen et al. (CC BY 4.0)

Teeth and hard structures called dermal odontodes are evolutionarily related, arising from the same developmental system, a new study published today in eLife shows.

These findings in ancient fish fossils contradict established claims about the difference between the two structures based on modern sharks, and provide potential new insights into the origins and development of teeth.

Odontodes are hard structures made of dentine, the main substance in ivory, and are found on the outside surfaces of animals with backbones (vertebrates). Teeth are an example of odontodes but some animals also have them on their skin, such as the tooth-like 'scales' of sharks. These are known as dermal odontodes.

"Teeth and dermal odontodes are thought to have evolved separately because they seem to develop in different ways," says lead author Donglei Chen, a researcher at the Department of Organismal Biology, Uppsala University, Sweden. "However, most of what we know is limited to modern sharks in which the difference between these structures has become very distinct. To understand the relationship between the two more clearly, we needed to turn to the fossil record."

The team looked at fossils of one of the earliest bony fishes called Lophosteus which lived more than 400 million years ago. They chose this fish because it represents an early stage of tooth evolution, bringing them closer to the time when teeth and dermal odontodes could have separated in the hopes that any developmental similarities between the two would be more obvious.

The researchers used high-resolution X-ray imaging to look at the three-dimensional structure of odontodes in Lophosteus at different stages of development. They found that the appearance of the odontodes was similar at the early stages of development but changed depending on whether they grew into the mouth or onto the face, suggesting that different chemical signals in each area directed their development. At the later stages, some dermal odontodes would move from the face to the mouth and begin to look like teeth.

These findings suggest that both types of odontodes are able to respond to the same signals controlling each other's development and are made by the same developmental system - not separate systems as previously thought.

"In addition to casting light on the early evolution of our own teeth, our results point to a previously unrecognised evolutionary-developmental relationship between teeth and dermal odontodes," says senior author Per Ahlberg, PhD, Professor at the Department of Organismal Biology, Uppsala University. "This has potential implications for understanding the signalling that occurs during development and could inspire new lines of developmental research in other organisms."

Credit: 
eLife

This is your brain on code: JHU deciphers neural mechanics of computer programming

image: The graphic shows how the brain activates during computer programming, compared with brain activations for logical reasoning and language.

Image: 
Johns Hopkins University

By mapping the brain activity of expert computer programmers while they puzzled over code, Johns Hopkins University scientists have found the neural mechanics behind this increasingly vital skill.

Though researchers have long suspected the brain mechanism for computer programming would be similar to that for math or even language, this study revealed that when seasoned coders work, most brain activity happens in the network responsible for logical reasoning, concentrated in the left hemisphere, the region favored by language.

"Because there are so many ways people learn programming, everything from do-it-yourself tutorials to formal courses, it's surprising that we find such a consistent brain activation pattern across people who code," said lead author Yun-Fei Liu, a PhD student in the university's Neuroplasticity and Development Lab. "It's especially surprising because we know there seems to be a crucial period that usually terminates in early adolescence for language acquisition, but many people learn to code as adults."

The findings are published today in the journal eLife.

Researchers have long known what happens in the brain when someone reads, plays music or does math. But despite our increasing reliance on technology, almost nothing is known about the neural mechanisms of computer programming.

"People want to know what makes someone a good programmer," Liu said. "If we know what kind of neuro mechanisms are activated when someone is programming, we might be able to find a better training program for programmers."

Many people assume techies have math-centric minds and think the brain region for programming would be the same as the one used to solve math problems, Liu said. Others believe that programming languages are called languages for a reason, and that the neural mechanism underlying programming would be shared with language processing. Or it could involve the parts of the brain used for logical reasoning or the type of problem-solving known as "executive control."

To get to the bottom of it, Liu had 15 experienced programmers, each highly proficient in the programming language Python, lie in an fMRI scanner so he could measure their brain activity while they worked on coding questions.

In each case, the same part of the brain lit up: the area responsible for logical reasoning. And though the act of logical reasoning has no brain hemisphere preference, coding strongly favored the left hemisphere, the area that correlates with language.

Next, the lab hopes to determine if learning to code, like learning a language, is easier for the young.

"It's true that adults can learn to code but are kids even better at it? Or maybe coding doesn't have a critical learning period and that's what makes it special," said senior author Marina Bedny, an associate professor of Psychological and Brain Sciences. "It could be that our education system is wrong, and we should be teaching kids to code in middle school or else they're missing an opportunity to be the best they can."

Credit: 
Johns Hopkins University

Empowering women could help address climate change

Current and future damages of climate change depend greatly on the ability of affected populations to adapt to changing conditions. According to an international group of researchers, building capacity to adapt to such changes will require eradicating inequalities of many sorts, including gender.

Vulnerability and exposure to the effects of climate change differs significantly across social groups, defined not only by income levels but also by gender, education, and racial and ethnic profiles. Understanding how these inequalities will evolve in the future appears particularly important for the design of policies aimed at reducing the impact of climate change globally. In a new article published in Nature Communications, an interdisciplinary group of researchers from IIASA, Humboldt University, the Vienna University of Economics and Business, and Climate Analytics have developed projections of a gender inequality index throughout the 21st century to shed light on such developments.

The linkages between gender inequality and adaptive capacity to climate change involve many factors that differ across countries and over time, ranging from uneven access to resources to cultural norms. In addition, women's representation in politics has been shown to lead to more stringent climate action, thus affecting mitigation policies as well. Because gender inequality is a determinant of adaptation to climate change impacts and may also affect the implementation of mitigation policies, projections of trajectories in gender inequality can highlight potential future challenges to combating the negative effects of climate change. According to the authors, such projections, combined with existing scenarios for the future path of population growth, education, and income, can contribute significantly to our understanding of the obstacles faced by future societies in their efforts to foster climate-resilient development.

The study provides the first-ever quantified scenarios of gender inequality over the 21st century, using the Shared Socioeconomic Pathways (SSPs), which are widely used in climate change research. These pathways add an important dimension for understanding societies' adaptive capacity in the face of climate change.

"Women are more vulnerable to the impacts of climate change, not because there's something inherently vulnerable about women, but because of different social and cultural structures that stand in their way," explains study lead author Marina Andrijevic, a researcher associated with Humboldt University and Climate Analytics, Berlin, Germany. "Disempowerment comes in many forms, from the lack of access to financial resources, education, and information, to social norms or expectations that affect, for example, women's mobility. These considerations have to be taken into account when thinking about what challenges to adaptation a society might face."

The study employed an established Gender Inequality Index of the United Nations Development Programme for its projections. The index measures whether women are disadvantaged in comparison to men with regard to health, labor market participation, education, and political participation. Eradicating these inequalities is expected to improve overall resilience to climate change impacts through improvements in women's access to better education, resources, and maternal health.

"Our projections provide an evidence-based assessment of future trajectories of gender inequality that can be used as an input to guide policymaking at the global level," notes coauthor Jesus Crespo Cuaresma, an IIASA researcher and professor at the Vienna University of Economics and Business. "While achieving gender equality does not automatically ensure the resilience of societies to the impacts of climate change, especially at low levels of socioeconomic development, it is an important factor for improving adaptive capacity globally."

Projections of future socioeconomic dynamics and gender inequality show that faster societal progress in areas such as education can improve the situation for millions of girls around the world as early as 2030. The "what-if" scenarios defined by the SSPs are also useful for assessing progress towards the Sustainable Development Goals (SDGs), and can be used to monitor the fulfillment of the objective of achieving gender equality and the empowerment of all women and girls by 2030.

"We hope that our work will help streamline considerations of gender inequality in our analyses of current and future challenges for adaptation to climate change, and what they mean for the world with the increasing impacts of climate change," Andrijevic concludes.

Credit: 
International Institute for Applied Systems Analysis

Physicians say non-contact infrared thermometers fall short as COVID-19 screeners

image: Physicians at Johns Hopkins Medicine and the University of Maryland Medical School say a non-contact infrared thermometer, such as the one being used here to check a traveler for fever at the airport, is a poor means of screening for COVID-19 infection.

Image: 
Public domain image

While a fever is one of the most common symptoms for people who get sick with COVID-19, taking one's temperature is a poor means of screening who is infected with the SARS-CoV-2 virus that causes the disease, and more importantly, who might be contagious. That's the conclusion of a perspective editorial by researchers at Johns Hopkins Medicine and the University of Maryland School of Medicine that describes why temperature screening -- primarily done with a non-contact infrared thermometer (NCIT) -- doesn't work as an effective strategy for stemming the spread of COVID-19.

The editorial was published Dec. 14, 2020, in Open Forum Infectious Diseases, the online journal of the Infectious Diseases Society of America. The authors are William Wright, D.O., M.P.H., assistant professor of medicine at the Johns Hopkins University School of Medicine, and Philip Mackowiak, M.D., M.B.A., emeritus professor of medicine at the University of Maryland School of Medicine.

In March 2020, the U.S. Department of Health and Human Services and the U.S. Centers for Disease Control and Prevention released guidelines for Americans to determine if they needed to seek medical attention for symptoms suggestive of infection with SARS-CoV-2, with temperature screening playing an integral role. According to the guidelines, fever is defined as a temperature -- taken with an NCIT near the forehead -- of greater than or equal to 100.4 degrees Fahrenheit (38.0 degrees Celsius) for non-health care settings and greater than or equal to 100.0 degrees Fahrenheit (37.8 degrees Celsius) for health care ones. This is the first aspect of COVID-19 screening by temperature that Wright and Mackowiak question in their editorial.

"Readings obtained with NCITs are influenced by numerous human, environmental and equipment variables, all of which can affect their accuracy, reproducibility and relationship with the measure closest to what could be called the 'body temperature' -- the core temperature, or the temperature of blood in the pulmonary artery," says Wright. "However, the only way to reliably take the core temperature requires catheterization of the pulmonary artery, which is neither safe nor practical as a screening test."

In their editorial, Wright and Mackowiak provide statistics to show that NCIT fails as a screening test for SARS-CoV-2 infection.

"As of Feb. 23, 2020, more than 46,000 travelers were screened with NCITs at U.S. airports, and only one person was identified as having SARS-CoV-2," says Wright. "In a second example, CDC staff and U.S. customs officials screened approximately 268,000 travelers through April 21, 2020, finding only 14 people with the virus."

From a November 2020 CDC report, Wright and Mackowiak provide further support for their concern about temperature screenings for COVID-19. The report, they say, states that among approximately 766,000 travelers screened during the period Jan. 17 to Sept. 13, 2020, only one person per 85,000 -- or about 0.001% -- later tested positive for SARS-CoV-2. Additionally, only 47 out of 278 people (17%) in that group with symptoms similar to SARS-CoV-2 had a measured temperature meeting the CDC criteria for fever.
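The screening yields cited above follow from simple arithmetic. This short sketch only restates the figures quoted from the editorial and the CDC reports, making the detection rates explicit:

```python
# Screening figures as quoted in the editorial: (travelers screened, positives found).
screening_periods = {
    "U.S. airports, through Feb. 23, 2020": (46_000, 1),
    "CDC and customs, through Apr. 21, 2020": (268_000, 14),
    "CDC report, Jan. 17 - Sep. 13, 2020": (766_000, 766_000 // 85_000),  # ~1 per 85,000
}

for period, (screened, positives) in screening_periods.items():
    rate = positives / screened * 100
    print(f"{period}: {positives} of {screened:,} screened ({rate:.4f}%)")

# Of 278 symptomatic travelers, only 47 met the CDC temperature criterion for fever.
print(f"Symptomatic travelers meeting the fever criterion: {47 / 278:.0%}")
```

Every period's yield sits well below a hundredth of a percent, which is the quantitative core of the authors' argument that NCIT-based screening misses nearly all infections.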

Another problem with NCITs, Wright says, is that they may give misleading readings over the course of a fever, making it difficult to determine whether someone is actually feverish.

"During the period when a fever is rising, a rise in core temperature occurs that causes blood vessels near the skin's surface to constrict and reduce the amount of heat they release," Wright explains. "And during a fever drop, the opposite happens. So, basing a fever detection on NCIT measurements that measure heat radiating from the forehead may be totally off the mark."

Wright and Mackowiak conclude their editorial by saying that these and other factors affecting thermal screening with NCITs must be addressed to develop better programs for distinguishing people infected with SARS-CoV-2 from those who are not.

Among the strategies for improvement that they suggest are: (1) lowering the cutoff temperature used to identify symptomatic infected people, especially when screening those who are elderly or immunocompromised, (2) group testing to enable real-time surveillance and monitoring of the virus in a more manageable situation, (3) "smart" thermometers -- wearable thermometers paired with GPS devices such as smartphones, and (4) monitoring sewage sludge for SARS-CoV-2.

Credit: 
Johns Hopkins Medicine

Cancer researchers identify potential new class of drugs to treat blood and bone marrow cancers

CLEVELAND - A new study by researchers in Cleveland Clinic's Taussig Cancer Institute and Lerner Research Institute describes a novel class of targeted cancer drugs that may prove effective in treating certain common types of leukemia. The results first appeared online in Blood Cancer Discovery.

Myeloid leukemias are cancers derived from stem and progenitor cells in the bone marrow that give rise to all normal blood cells. One of the most common mutations involved in driving myeloid leukemias is found in the TET2 gene, which has been investigated for the last decade by Jaroslaw Maciejewski, MD, PhD, a practicing hematologist and chair of the Cleveland Clinic Department of Translational Hematology & Oncology Research.
In the new study, Dr. Maciejewski and his collaborator in the Department of Translational Hematology & Oncology Research, Babal Kant Jha, PhD, report a new pharmacological strategy to preferentially target and eliminate leukemia cells with TET2 mutations.

"In preclinical models, we found that a synthetic molecule called TETi76 was able to target and kill the mutant cancer cells both in the early phases of disease--what we call clonal hematopoiesis of indeterminate potential, or CHIP--and in fully developed TET2 mutant myeloid leukemia," said Dr. Maciejewski.

The research team designed TETi76 to replicate and amplify the effects of a natural molecule called 2HG (2-hydroxyglutarate), which inhibits the enzymatic activity of TET genes.
The TET DNA dioxygenase gene family codes for enzymes that remove chemical groups from DNA molecules, which ultimately changes what genes are expressed and can contribute to the development and spread of disease.

While all members of the TET family are dioxygenases, the most powerful enzymatic activity belongs to TET2. Even when TET2 is mutated, however, its related genes TET1 and TET3 provide residual enzymatic activity. While significantly less, this activity is still enough to facilitate the spread of mutated cancer cells. Drs. Maciejewski's and Jha's new pharmacologic strategy to selectively eliminate TET2 mutant leukemia cells centers on targeting their reliance on this residual DNA dioxygenase activity.
"We took lessons from the natural biological capabilities of 2HG," explained Dr. Jha, a principal investigator. "We studied the molecule and rationally designed a novel small molecule, synthesized by our chemistry group headed by James Phillips, PhD. Together, we generated TETi76--a similar, but more potent version capable of inhibiting not just TET2, but also the remaining disease-driving enzymatic activity of TET1 and TET3."

The researchers studied TETi76's effects in both preclinical disease and xenograft models (where human cancer cells are implanted into preclinical models). Additional studies will be critical to investigate the small molecule's cancer-fighting capabilities in patients.
"We are optimistic about our results, which show not just that TETi76 preferentially restricts the growth and spread of cells with TET2 mutations, but also gives survival advantage to normal stem and progenitor cells," said Dr. Jha.

Myeloid leukemias are commonly treated with chemotherapy, either alone or in combination with targeted drugs. More research is needed, but these early preclinical data suggest TETi76 may be a promising, more effective candidate to replace the targeted drugs currently used.

Credit: 
Cleveland Clinic

Error correction means California's future wetter winters may never come

RICHLAND, Wash.--California and other areas of the U.S. Southwest may see less future winter precipitation than previously projected by climate models. After probing a persistent error in widely used models, researchers at the Department of Energy's Pacific Northwest National Laboratory estimate that California will likely experience drier winters in the future than projected by some climate models, meaning residents may see less spring runoff, higher spring temperatures, and an increased risk of wildfire in coming years.

Earth scientist Lu Dong, who led the study alongside atmospheric scientist Ruby Leung, presented her findings at the American Geophysical Union's fall meeting on Tuesday, Dec. 1, and will answer questions virtually on Wednesday, Dec. 16.

As imperfect simulations of vastly complex systems, today's climate models have biases and errors. As new model generations are refined and grow increasingly accurate, some biases are reduced while others linger. One such long-lived bias in many models is the misrepresentation of an important circulation feature called the intertropical convergence zone, commonly known as the ITCZ.

The ITCZ marks an area just north of the Earth's equator where northeast trade winds from the northern hemisphere clash with southeast trade winds from the southern hemisphere. Strong sunlight and warm water heat the air here, energizing it along with the moisture it holds to move upward.

As the air rises, it expands and cools. Condensing moisture provides more energy to produce thunderstorms with intense rainfall. From space, one can even see a thick band of clouds, unbroken for hundreds of miles as they move about the region.

"The ITCZ produces the strongest, long line of persistent convection in the world," said Dong. "It can influence the global water cycle and climate over much of the Earth," including, she added, California's climate.

Doubling down on climate model bias

Many climate models mistakenly depict a double ITCZ: two bands appearing in both hemispheres instead of one, which introduces uncertainty into model projections. Scientists refer to this as the double-ITCZ bias. Variations in the wind and pressure systems that influence the ITCZ add to that uncertainty.

"There's a lot of uncertainty in California's future precipitation," said Dong, who described climate models that project a range of winter wetness in the state averaged over multiple years, from high increases to small decreases. "We want to know where this uncertainty comes from so we can better project future changes in precipitation."

To peer through the effect of the double-ITCZ bias and create more accurate projections, Dong and atmospheric scientist Ruby Leung analyzed data from nearly 40 climate models, uncovering statistical and mechanistic links between the bias and the models' outputs. The lion's share of the models they analyzed projected a sharpening of California's seasonal precipitation cycle, bringing wetter winters and drier fall and spring seasons.

image: Soft, white snow rests on either side of a California waterway. Winter precipitation includes more than just rain, encompassing snowpack in mountainous areas and other factors that influence climate processes throughout the year.

Less water, more fire

Those uncovered relationships, Dong said, now cast doubt on estimates from CMIP5 models that projected wetter future winters. Models saddled with a larger double-ITCZ bias, it turns out, tend to exaggerate the U.S. Southwest's wetter winters under warming climate scenarios. They also understate winter drying in the Mediterranean Basin, which, like California, features pronounced wet winters and dry summers.

Correcting for the bias reduces winter precipitation projections to a level that's roughly equal to California's current winters, amounting to little change and no future wetter winters. In the Mediterranean Basin, said Dong, the correction means winter drying will be intensified by 32 percent.

"An important implication of this work," said Dong, "is that a reduction in estimated winter precipitation will likely mean a reduction in spring runoff and an increase in spring temperature, and both increase the likelihood of wildfire risk in California."

Learning from climate models

Though the study's focus was restricted solely to winter precipitation, said Leung, its implications extend to all seasons.

"The implications aren't just about how wet things will or won't be," said Leung. "When people think about precipitation, they tend to think about how much rain they'll get. But precipitation has a lot of implications, like snowpack in mountainous areas, for example, and that means whatever changes we see in winter precipitation will have subsequent implications for springtime or even summertime. The impacts don't just affect winter; they'll be felt throughout the year."

The findings do not bode well for agricultural production, as over one third of the country's vegetables are grown in California soil, and two thirds of its fruits and nuts are grown on California farms, according to the California Department of Food and Agriculture. Almonds and grapes, two especially water-hungry crops, were among the state's top producing commodities, bringing in a combined $11.5 billion in 2019.

Over 4 million acres and nearly 10,500 structures burned in the state's 2020 wildfire season. The fire season has grown longer, according to Cal Fire, which cites warmer spring temperatures as one of the reasons forests are now more susceptible to wildfire.

Dong and her research partners hope the findings will better inform resource management groups as they prepare for coming wildfire seasons and plan for drier-than-expected winters.

The double-ITCZ bias is prominent in all CMIP5 climate models, said Leung, as well as CMIP6 models, the most recent generation, though the latter were not considered in this work. "If you look at the whole ensemble of models," said Leung, "you see quite similar biases."

Credit: 
DOE/Pacific Northwest National Laboratory

Attitudes about climate change are shifting, even in Texas

Longstanding skepticism among Texans toward the climate movement has shifted, and attitudes in the nation's leading energy-producing state now mirror those in the rest of the United States.

About 80% of Americans - almost 81% of Texans - say they believe climate change is happening, according to new research by UH Energy and the University of Houston Hobby School of Public Affairs. Slightly lower percentages said they believe the change is driven by human activities.

Most said they are willing to pay more for electricity derived from natural gas produced without venting and flaring, electricity derived from renewable generation that factors in the cost of the grid, and low-carbon or carbon-neutral transportation fuels and other energy products.

"People are aware of climate change and believe it is real," said Ramanan Krishnamoorti, chief energy officer at UH. "That is true even in Texas, where people have been less likely to say they believe in climate change and, especially, change caused by human activities."

But Krishnamoorti said researchers also found that while most people understand the link between climate change and fossil fuels, they are less sophisticated in their knowledge about potential solutions, from carbon taxes to emissions trading systems. Only 58% believe individual consumer choices are responsible for climate change.

The report, Carbon Management: Changing Attitudes and an Opportunity for Action, was released less than a month before the Texas Legislature convenes a session expected to address curbing methane flaring and other emissions. The Biden administration also is likely to consider more stringent environmental regulations, and a number of energy companies have committed to reducing their carbon footprints.

"With so much potential for change ahead, we wanted to assess public attitudes about climate change and support for specific policies aimed at curbing emissions," said Pablo Pinto, director of the Center for Public Policy at the Hobby School. "We found people are worried about climate change and want it to be addressed, but many people, especially older residents, don't understand the strategies being considered."

The researchers will present a webinar discussing the results at noon Friday, Dec. 8.

Among the findings:

Two out of three nationally are worried about climate change. More than 60% of Texans agree

55% agree "the oil and gas industries have deliberately misled people on climate change." 49% of Texans agree

About two-thirds say oil and gas companies should adopt carbon management technologies

56% say government should promote, incentivize and subsidize carbon management technologies; 53% of Texans agree

64% of people nationally, and 61% of Texans, say hydraulic fracturing has a negative effect on the environment

Mitigation strategies aren't well understood. 61% have heard of carbon taxes, while less than half are familiar with carbon management and just one-third have heard of carbon pricing. Younger people and those with more education had higher levels of awareness

The full report is available on the UH Energy and Hobby School websites.

While large majorities said government, the fossil fuel industry and the transportation sector bear responsibility for climate change, fewer said individual consumer choices were responsible, said Gail Buttorff, co-director of the Survey Research Institute at the Hobby School. Still, among people who were better informed on the topic, about 76% said individual choices were partly to blame.

"We also found that more than 93% are willing to pay more for carbon-neutral energy, and 75% said they would pay between $1 and $5 more per gallon," Buttorff said.

The researchers found generational differences in support for paying higher prices in exchange for carbon-neutral energy, with younger people generally more willing to pay a higher premium.

Francisco Cantú, co-director of the Survey Research Institute at the Hobby School, said demographic changes are likely one reason the study found few differences in attitudes between Texans and people elsewhere in the U.S.

"Texas has a growing population of young people, along with increased migration both from other states and other countries," Cantú said. "That, along with major changes that are already underway in the industry, from the growing use of renewables to industry pledges to decarbonize, suggests regulators could take advantage of the timing to lock in long-term climate strategies."

Credit: 
University of Houston