
Graphene-based memory resistors show promise for brain-based computing

Image: Graphene memristors open doors for biomimetic computing. Credit: Jennifer M. McCann/Penn State

As progress in traditional computing slows, new forms of computing are coming to the forefront. At Penn State, a team of engineers is attempting to pioneer a type of computing that mimics the efficiency of the brain's neural networks while exploiting the brain's analog nature.

Modern computing is digital, made up of two states: on or off, one or zero. An analog computer, like the brain, has many possible states. It is the difference between flipping a light switch on or off and turning a dimmer switch through a range of brightness levels.

Neuromorphic or brain-inspired computing has been studied for more than 40 years, according to Saptarshi Das, the team leader and Penn State assistant professor of engineering science and mechanics. What's new is that as the limits of digital computing have been reached, the need for high-speed image processing, for instance for self-driving cars, has grown. The rise of big data, which requires types of pattern recognition for which the brain architecture is particularly well suited, is another driver in the pursuit of neuromorphic computing.

"We have powerful computers, no doubt about that, the problem is you have to store the memory in one place and do the computing somewhere else," Das said.

The shuttling of this data from memory to logic and back again takes a lot of energy and slows the speed of computing. In addition, this computer architecture requires a lot of space. If the computation and memory storage could be located in the same space, this bottleneck could be eliminated.

"We are creating artificial neural networks, which seek to emulate the energy and area efficiencies of the brain," explained Thomas Shranghamer, a doctoral student in the Das group and first author on a paper recently published in Nature Communications. "The brain is so compact it can fit on top of your shoulders, whereas a modern supercomputer takes up a space the size of two or three tennis courts."

Like synapses connecting the neurons in the brain that can be reconfigured, the artificial neural networks the team is building can be reconfigured by applying a brief electric field to a sheet of graphene, a one-atom-thick layer of carbon atoms. In this work they show at least 16 possible memory states, as opposed to the two in most oxide-based memristors, or memory resistors.

"What we have shown is that we can control a large number of memory states with precision using simple graphene field effect transistors," Das said.

The team thinks that ramping up this technology to a commercial scale is feasible. With many of the largest semiconductor companies actively pursuing neuromorphic computing, Das believes they will find this work of interest.

In addition to Das and Shranghamer, the additional author on the paper, titled "Graphene Memristive Synapses for High Precision Neuromorphic Computing," is Aaryan Oberoi, doctoral student in engineering science and mechanics.

Credit: 
Penn State

International team tracks record-setting smoke cloud from Australian wildfires

Image: USask researcher Prof. Adam Bourassa. Credit: Submitted/USask

SASKATOON - Researchers with the University of Saskatchewan's Institute of Space and Atmospheric Studies are part of a global team that has found that the smoke cloud pushed into the stratosphere by last winter's Australian wildfires was three times larger than anything previously recorded.

The cloud, which measured 1,000 kilometres across, remained intact for three months, travelled 66,000 kilometres, and soared to a height of 35 kilometres above Earth. The findings were published in Communications Earth & Environment, part of the prestigious Nature family of research journals.

"When I saw the satellite measurement of the smoke plume at 35 kilometres, it was jaw dropping. I never would have expected that," said Adam Bourassa, professor of physics and engineering physics, who led the USask group which played a key role in analyzing NASA satellite data.

Prior to Australia's "Black Summer," which burned 5.8 million hectares of forest in the southeast part of that continent, the smoke cloud caused by the 2017 forest fires in Western Canada was the largest on record.

The international team was led by Sergey Khaykin from LATMOS (Laboratoire Atmosphères, Milieux, Observations Spatiales) in France. Bourassa said the team's findings provide critical information for understanding how wildfires are impacting the Earth's atmosphere.

"We're seeing records broken in terms of the impact on the atmosphere from these fires," said Bourassa. "Knowing that they're likely to strike more frequently and with more intensity due to climate change, we could end up with a pretty dramatically changed atmosphere."
Bourassa, his post-doctoral student Landon Rieger, and research engineer Daniel Zawada were the only Canadians involved in the project. Bourassa's group has expertise in a specific type of satellite measurement that is very sensitive to smoke in the upper atmosphere. Their contributions were funded in part by the Canadian Space Agency.

According to Bourassa, wildfires such as those in Australia and Western Canada get big enough and hot enough that they generate their own thunderstorms, called pyrocumulonimbus. These, in turn, create powerful updrafts that push smoke and the surrounding air up past the altitudes where jets fly, into the upper part of the atmosphere called the stratosphere.

"What was also really amazing was that as the smoke sits in the atmosphere, it starts to absorb sunlight and so it starts to heat up," said Bourassa. "And then, because it's getting hotter, it starts to rise in a swirling vortex 'bubble', and it just rose and rose higher and higher through the atmosphere."

Information collected by satellite, using an instrument called a spectrometer, showed smoke from the Australian wildfires blocked sunlight from reaching Earth to an extent never before recorded from wildfires.

The measurement technique, proven by Canadian scientists including Bourassa over a decade ago, measures the sunlight scattered from the atmosphere back to the satellite, generating a detailed image of the layers in the atmosphere.

The stratosphere is typically a "pretty pristine, naturally clean, stable part of atmosphere," Bourassa said. However, when aerosols--such as smoke from wildfires or sulphuric acid from a volcanic eruption--are forced up into the stratosphere, they can remain aloft for many months, blocking sunlight from passing through, which in turn changes the balance of the climate system.

While researchers have a general understanding of how these smoke clouds form and why they rise high into the stratosphere, Bourassa said more work needs to be done to understand the underlying mechanisms.

Researchers will also be comparing their findings from Australian wildfires with satellite data captured from California wildfires this past summer and fall.

Credit: 
University of Saskatchewan

Expect more mega-droughts

Image: Professor Hamish McGowan gaining access to stalagmites around 120 metres below the surface in the Grotto Cave, NSW. Credit: The University of Queensland

Mega-droughts - droughts that last two decades or longer - are tipped to increase thanks to climate change, according to University of Queensland-led research.

UQ's Professor Hamish McGowan said the findings suggested climate change would lead to increased water scarcity, reduced winter snow cover, more frequent bushfires and wind erosion.

The revelation came after an analysis of geological records from the Eemian Period - 129,000 to 116,000 years ago - which offered a proxy of what we could expect in a hotter, drier world.

"We found that, in the past, a similar amount of warming has been associated with mega-drought conditions all over south eastern Australia," Professor McGowan said.

"These drier conditions prevailed for centuries, sometimes for more than 1000 years, with El Niño events most likely increasing their severity."

The team engaged in paleoclimatology - the study of past climates - to see what the world will look like as a result of global warming over the next 20 to 50 years.

"The Eemian Period is the most recent in Earth's history when global temperatures were similar, or possibly slightly warmer than present," Professor McGowan said.

"The 'warmth' of that period was in response to orbital forcing, the effect on climate of slow changes in the tilt of the Earth's axis and shape of the Earth's orbit around the sun.

"In modern times, heating is being caused by high concentrations of greenhouse gases, though this period is still a good analogue for our current-to-near-future climate predictions."

Researchers worked with the New South Wales Parks and Wildlife service to identify stalagmites in the Yarrangobilly Caves in the northern section of Kosciuszko National Park.

Small samples of the calcium carbonate powder contained within the stalagmites were collected, then analysed and dated at UQ.

That analysis allowed the team to identify periods of significantly reduced precipitation during the Eemian Period.

"They're alarming findings, in a long list of alarming findings that climate scientists have released over the last few decades," Professor McGowan said.

"We hope that this new research allows for new insights to our future climate and the risks it may bring, such as drought and associated bushfires.

"But, importantly, if humans continue to warm the planet, this is the future we may all be looking at."

Credit: 
University of Queensland

Waste not, want not: recycled water proves fruitful for greenhouse tomatoes

In the driest state in the driest continent in the world, South Australian farmers are acutely aware of the impact of water shortages and drought. So, when it comes to irrigation, knowing which method works best is vital for sustainable crop development.

Now, new research from the University of South Australia shows that water quality and deficit irrigation schemes each have significant effects on crop development, yield and water productivity - with recycled wastewater achieving the best overall results.

In tests of different water sources on greenhouse-grown tomatoes, recycled wastewater outperformed both groundwater and a 50:50 mix of groundwater and recycled wastewater.

The researchers also confirmed that deficit irrigation (irrigation that limits watering in a controlled way) performs best at 80 per cent of field capacity, ensuring maximum water efficiency while maintaining excellent crop growth and yield.

Lead researcher and UniSA PhD candidate, Jeet Chand, says that the findings will provide farmers with valuable insights for productive, profitable and sustainable agricultural management.

"Water is an extremely valuable commodity in dry and arid farming regions, making efficient irrigation strategies and alternative water sources essential for agriculture production," Chand says.

"Deficit irrigation is a strategy commonly used by farmers to minimise water use while maximising crop productivity but finding the most effective balance for greenhouse-grown produce can be tricky.

"In our research we tested optimum water deficit levels for greenhouse-grown tomatoes, showing that water at 80 per cent of field capacity is the superior choice for optimal tomato growth in the Northern Adelaide Plains.

"These results were enhanced by the use of recycled wastewater, which not only fares well for plants (by delivering additional nutrients) and for farmers (by reducing the need for fertilizer) but is also great for the environment."

The Northern Adelaide Plains represents 90 per cent of tomato production in South Australia and contains the largest area of greenhouse coverage in the whole of Australia.

This study simulated tomato growing conditions in this region across the most popular growing season and over two years. It tested groundwater, recycled wastewater and a 50:50 mix of both, across four irrigation scenarios with soil moisture levels at 60, 70, 80 and 100 per cent of field capacity.

The highest growth levels were, unsurprisingly, achieved at 100 per cent of field capacity, but mild water stress (80 per cent of field capacity) delivered good water efficiency without significant yield reduction.

While the results are positive for the tomato industry, Chand says there's also good news for the home-gardening tomato aficionado.

"If you're one of the lucky areas to have access to a verified source of recycled water, then your garden can also benefit from its additional nutrients," Chand says.

"Remember, there is a significant difference between grey water - that is, water from the bath or dishes - and recycled water, so be sure to check your water source with your supplier.

"But if you have access to recycled water, great! Your tomatoes will grow like crazy, and you'll be the envy of all your neighbours."

Credit: 
University of South Australia

Drones as stinger spotters

Image: After drone surveillance of the area, researchers netted, counted and measured deadly jellyfish. Credit: ©TASRU

The research, published today in the journal PLOS One, focused on Chironex fleckeri - a large jellyfish capable of killing a human in under three minutes and considered the most venomous animal in the world.

"Chironex fleckeri is found in waters off northern Australia from October to May, when its liking for shallow, calm, coastal waters can put it on a collision course with swimmers," said PhD candidate and project lead Olivia Rowley.

"Drone surveillance could help make our beaches safer, and reduce our reliance on time-consuming drag netting by life savers."

Ms Rowley and colleagues from the Australian Institute of Tropical Health and Medicine set out to establish the reliability of lower-cost domestic drones in detecting these large, near-transparent jellyfish.

"The attraction of these devices is that they are more affordable, easily transported, and easier to use," Ms Rowley said.

"They don't require as much training and licensing as the higher-end versions and a large number of surf lifesaving clubs, particularly in Australia, already have them in their kit for rip identification, and crocodile and shark spotting."

The researchers tested the drones' accuracy as jellyfish spotters in waters near Weipa on Cape York Peninsula. They deployed 70-metre nets, and then recorded drone footage before pulling in the nets and counting and measuring any jellyfish.

During the experiment the drone pilot kept records of jellyfish spotted during each flight. These records were later compared with the netted numbers, and with the accuracy achieved in a lab-based review of the footage.

The researchers confirmed that reviewing footage after the flights led to significantly higher detection rates. They also quantified the effects of weather conditions such as cloud cover and wind on the drones' success rate.

"This has huge implications. Most, if not all, beaches worldwide, from Japan to Europe and beyond, have issues with very harmful jellyfish and presently there is no way of telling if animals are there until someone gets stung," Ms Rowley said.

"This project really highlights the capacity for drones as early warning systems. Using drones is fast, effective and cheap and helps keep those on the front line out of the water and out of harm's way."

The next stage of the project will see this research trialled with Surf Life Saving hubs along the Queensland coast. The trials are funded by the Australian Lions Foundation and will begin in the next month.

Credit: 
James Cook University

Nucleus accumbens neurons recruited by cocaine, sugar are different

Image: Ana Clara Bobadilla, a University of Wyoming assistant professor in the School of Pharmacy and the WWAMI Medical Education Program, is lead author of a paper finding that the nucleus accumbens neurons recruited by cocaine use are largely distinct from those recruited by sucrose, or table sugar. Credit: Sarah Pack

The nucleus accumbens plays a central role in the brain's risk-reward circuit. Its operation is based chiefly on three essential neurotransmitters: dopamine, which promotes desire; serotonin, whose effects include satiety and inhibition; and glutamate, which drives goal-directed behaviors and responses to reward-associated cues and contexts.

In a study using genetically modified mice, a University of Wyoming faculty member found that the nucleus accumbens neurons recruited by cocaine use are largely distinct from those recruited by sucrose, or table sugar. Because they are separate, this raises the possibility that drug use could be addressed without affecting biologically adaptive reward seeking.

"We established that, in the nucleus accumbens, a key brain region of reward processing, the neuronal ensembles -- a sparse network of neurons activated simultaneously -- are reward-specific, and sucrose and cocaine ensembles are mostly nonoverlapping," says Ana Clara Bobadilla, a UW assistant professor in the School of Pharmacy and in the WWAMI (Washington, Wyoming, Alaska, Montana and Idaho) Medical Education Program.

Bobadilla is lead author of a paper, titled "Cocaine and Sucrose Rewards Recruit Different Seeking Ensembles in the Nucleus Accumbens Core," that was published in the Sept. 28 issue of Molecular Psychiatry. The journal publishes work aimed at elucidating biological mechanisms underlying psychiatric disorders and their treatment. The emphasis is on studies at the interface of pre-clinical and clinical research, including studies at the cellular, molecular, integrative, clinical, imaging and psychopharmacology levels.

Bobadilla conducted the research while completing her postdoctoral work at the Medical University of South Carolina. The project began in mid-2017. One contributor to the study is now working at the University of Colorado Anschutz Medical Campus.

Currently, the recruitment process within each reward-specific ensemble is unknown, she says. However, using molecular biology tools, Bobadilla was able to identify what types of cells were recruited in both the cocaine and sucrose ensembles.

These cells are known as GABAergic projection neurons, also called medium spiny neurons. They comprise 90 percent to 95 percent of the neuronal population within the nucleus accumbens. These medium spiny neurons express either the dopamine D1 or D2 receptor.

The study determined that the sucrose and cocaine ensembles recruited mostly D1-receptor-expressing medium spiny neurons. These results are in line with the general understanding in the field that activation of the D1 pathway promotes reward seeking, while D2 pathway activation can lead to aversion or reduced seeking, Bobadilla says.

"In humans, drugs are rarely used in the vacuum. Most of us have complex lives including lots of sources of nondrug rewards, such as food, water, social interaction or sex," Bobadilla explains. "Like drugs, these rewards drive and influence our behavior constantly. The dual cocaine and sucrose model used in this study allows us to characterize the cocaine-specific ensemble after the mice experienced sucrose, another type of competing reward.

"It is a more complex model, but one that is closer to what occurs in people suffering from substance use disorders, who fight competing rewards daily," she adds.

Bobadilla is now focused on the question of how cells are recruited in ensembles. Additionally, she aims to address another fundamental question in addiction research: whether the same network-specific mechanisms underlie the seeking of all drug rewards.

"All drugs of abuse share high probability of relapse," she says. "However, each class of addictive drug displays different acute pharmacology and synaptic plasticity. We are now investigating if reward-specific properties of ensembles can explain these differences."

Credit: 
University of Wyoming

Habitat loss is bad news for species - especially for top predators

Image: Anna Eklöf, senior lecturer, Linköping University. Credit: Anna Nilsen/Linköping University

Scientists at Linköping University, Sweden, have simulated what happens in ecosystems when the habitats of different species disappear. When plants and animals lose their habitats, predator species at the top of the food chain die out first. The results have been published in Ecology Letters, and may provide information for and strengthen initiatives to preserve biodiversity.

One of the most serious threats to biodiversity is habitat loss. Humans cause severe changes in landscapes when converting or clearing natural land for construction or food production. In addition, climate change is causing some regions to become uninhabitable for some species. Researchers at Linköping University (LiU) have developed a mathematical method to investigate how large ecosystems are affected when habitat disappears.

"We can reach two important conclusions from our study. The first is that initiatives to preserve biological diversity must preserve habitat and not only focus on a particular species. It is very important to consider the interactions between the ecosystem's species by looking at the food web - which animals and plants are eaten by which other animals. The second conclusion is that the order in which habitats disappear has a profound significance", says Anna Eklöf, senior lecturer in the Department of Physics, Chemistry and Biology (IFM) at Linköping University.

The LiU researchers use modelling and computer simulations to study ecological networks, which describe how the various species in an ecosystem interact. The article, published in Ecology Letters, combines two mathematical models: a classical and a new one. The model described in the article distinguishes between suitable habitats in which species can live, and other areas in which they cannot. Suitable habitat patches are distributed across the landscape, with different plant and animal species spread across them. The species are connected to each other in a food web, a network that describes how they feed on each other. A hare eats several types of plants, the hare and several other prey species can become food for foxes, and the fox is one of the predators at the top of the food web.

The survival of an animal in a particular habitat depends on having the correct prey animals or plants in the same habitat patch. The model developed by the researchers also considers how effectively species can move between the habitat patches. In the real world, the habitat patches are often separated by inhospitable regions, such as a road with heavy traffic, that can prevent plants and animals from moving between them. If dispersal between different habitat patches becomes more difficult, probability increases that a species becomes extinct in the ecological network - which in turn influences the survival of other species.

The researchers have used the model to analyse a large number of simulated networks involving several hundreds of species. They also tested the model on a dataset of measurements describing the food web of the Serengeti National Park in Tanzania. In order to investigate how ecosystems are affected by habitat loss, the researchers ranked the habitat patches in order of how important they are for species at the bottom of the food chain. They then simulated three different ways in which habitat loss can occur: with the least important habitat patches removed first, the most important ones first, or removing them in a random order. The destruction of habitat in a random order is similar to what happens when humans construct roads or buildings without considering how valuable the region is for different species.
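
The bookkeeping behind such an experiment can be illustrated with a toy simulation. The sketch below is not the LiU model: the food web, the habitat patches, and the rule that higher trophic levels must persist in more patches (a crude stand-in for their larger area and dispersal needs) are all invented for illustration.

```python
# Toy sketch: cascading species loss as habitat patches are removed in
# different orders. A consumer is locally present wherever at least one of
# its prey is present; a species survives globally only if it occupies a
# minimum number of patches that grows with its trophic level.
import random

# Hypothetical food web: species -> set of prey (empty set = basal plant).
FOOD_WEB = {
    "grass": set(), "shrub": set(),
    "hare": {"grass", "shrub"}, "vole": {"grass"},
    "fox": {"hare", "vole"}, "eagle": {"fox", "hare"},
}

# Hypothetical patches listing the basal plants each one supports.
PATCHES = {"A": {"grass", "shrub"}, "B": {"grass"}, "C": {"shrub"},
           "D": {"grass", "shrub"}, "E": {"grass"}, "F": {"shrub"}}

def trophic_level(sp):
    prey = FOOD_WEB[sp]
    return 1 if not prey else 1 + max(trophic_level(p) for p in prey)

def surviving_species(patch_ids):
    """Species that persist given the remaining patches, after cascades."""
    alive = set(FOOD_WEB)
    while True:
        occupancy = {sp: 0 for sp in alive}
        for pid in patch_ids:
            present = PATCHES[pid] & alive           # local basal plants
            changed = True
            while changed:                            # consumers follow their prey
                changed = False
                for sp in alive:
                    if sp not in present and FOOD_WEB[sp] & present:
                        present.add(sp); changed = True
            for sp in present:
                occupancy[sp] += 1
        still_alive = {sp for sp in alive if occupancy[sp] >= trophic_level(sp)}
        if still_alive == alive:
            return alive
        alive = still_alive                           # extinctions cascade onward

def removal_experiment(order):
    remaining = list(PATCHES)
    history = [sorted(surviving_species(remaining))]
    for patch in order:
        remaining.remove(patch)
        history.append(sorted(surviving_species(remaining)))
    return history

random_order = random.sample(list(PATCHES), k=len(PATCHES))
print("random removal:", removal_experiment(random_order))
print("richest patches first:",
      removal_experiment(sorted(PATCHES, key=lambda p: -len(PATCHES[p]))))
```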

"In our model, the species at the upper levels of the food chain die out first when habitat patches are lost. What surprised us was that the damage to the ecosystem was almost the same when patches were lost in a random order as when the most valuable patches were lost first", says research fellow György Barabás.

The researchers emphasize that it is important to classify how significant various habitat patches are when considering initiatives to preserve ecosystems, and to give priority to the most valuable ones. The resilience of the ecosystem to species extinctions can also be improved by, for example, strengthening connections between patches. By taking such matters into consideration when determining how land is used, humans can protect ecosystems and prevent species from becoming extinct - particularly species that are high up in the food chain.

Credit: 
Linköping University

Solved: the mystery of how dark matter in galaxies is distributed

Image: Dark matter in two galaxies simulated on a computer. The only difference between them is the nature of the dark matter: without collisions on the left, with collisions on the right. The work suggests that dark matter in real galaxies looks more like the image on the right, less clumpy and more diffuse than the one on the left. The circle marks the edge of the galaxy. Credit: Image taken from Brinckmann et al. (2018), Monthly Notices of the Royal Astronomical Society, 474, 746; https://doi.org/10.1093/mnras/stx2782

The gravitational force under which the Universe has evolved, from an almost uniform state at the Big Bang until now, when matter is concentrated in galaxies, stars and planets, is provided by what is termed 'dark matter'. But in spite of the essential role that this extra material plays, we know almost nothing about its nature, behaviour and composition, which is one of the basic problems of modern physics. In a recent article in Astronomy & Astrophysics Letters, scientists at the Instituto de Astrofísica de Canarias (IAC)/University of La Laguna (ULL) and at the National University of the North-West of the Province of Buenos Aires (Junín, Argentina) have shown that the dark matter in galaxies follows a 'maximum entropy' distribution, which sheds light on its nature.

Dark matter makes up 85% of the matter of the Universe, but its existence shows up only on astronomical scales. That is to say, due to its weak interaction, the net effect can only be noticed when it is present in huge quantities. As it cools down only with difficulty, the structures it forms are generally much bigger than planets and stars. As the presence of dark matter shows up only on large scales the discovery of its nature probably has to be made by astrophysical studies.

MAXIMUM ENTROPY

To say that the distribution of dark matter is organized according to maximum entropy (which is equivalent to 'maximum disorder' or 'thermodynamic equilibrium') means that it is found in its most probable state. To reach this 'maximum disorder' the dark matter must have collided with itself, just as gas molecules do, so as to reach an equilibrium in which its density, pressure, and temperature are related. However, we do not know how the dark matter has reached this type of equilibrium.

"Unlike the molecules in the air, for example, because gravitational action is weak, dark matter particles ought hardly to collide with one another, so that the mechanism by which they reach equilibrium is a mystery", says Jorge Sánchez Almeida, an IAC researcher who is the first author of the article. "However if they did collide with one another this would give them a very special nature, which would partly solve the mystery of their origin", he adds.

The maximum entropy of dark matter has been detected in dwarf galaxies, which have a higher ratio of dark matter to total matter than more massive galaxies do, so it is easier to see the effect in them. However, the researchers expect this to be general behaviour in all types of galaxies.

The study implies that the distribution of matter in thermodynamic equilibrium has a much lower central density than astronomers have assumed for many practical applications, such as in the correct interpretation of gravitational lenses, or when designing experiments to detect dark matter by its self-annihilation.

This central density is basic for the correct interpretation of the curvature of light by gravitational lenses: if it is less dense, the effect of the lens is weaker. To use a gravitational lens to measure the mass of a galaxy one needs a model; if this model is changed, the measurement changes.

The central density also is very important for the experiments which try to detect dark matter using its self-annihilation. Two dark matter particles could interact and disappear in a process which is highly improbable, but which would be characteristic of their nature. For two particles to interact they must collide. The probability of this collision depends on the density of the dark matter; the higher the concentration of dark matter, the higher is the probability that the particles will collide.
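
As a rough statement of that scaling (a standard relation, not a figure quoted in the article), the expected self-annihilation rate per unit volume grows as the square of the local dark matter density,

\Gamma_{\mathrm{ann}} \propto \frac{\langle \sigma v \rangle}{m_\chi^{2}} \, \rho_\chi^{2},

where \langle \sigma v \rangle is the velocity-averaged annihilation cross-section, m_\chi the particle mass and \rho_\chi the local density. Halving the assumed central density therefore cuts the predicted central annihilation signal roughly by a factor of four, which is why the adopted density profile matters so much for these experiments.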

"For that reason, if the density changes so will the expected rate of production of the self-annihilations, and given that the experiments are designed on the prediction of a given rate, if this rate were very low the experiment is unlikely to yield a positive result", says Sánchez Almeida.

Finally, thermodynamic equilibrium for dark matter could also explain the brightness profile of galaxies. This brightness falls with distance from the centre of a galaxy in a specific way whose physical origin is unknown, but the researchers are working to show that it is the result of an equilibrium with maximum entropy.

SIMULATION VERSUS OBSERVATION

The density of dark matter in the centres of galaxies has been a mystery for decades. There is a strong discrepancy between the predictions of the simulations (a high density) and that which is observed (a low value). Astronomers have put forward many types of mechanisms to resolve this major disagreement.

In this article, the researchers have shown, using basic physical principles, that the observations can be reproduced on the assumption that the dark matter is in equilibrium, i.e., that it has maximum entropy. The consequences of this result could be very important because they indicate that the dark matter has interchanged energy with itself and/or with the remaining "normal" (baryonic) matter.

"The fact that equilibrium has been reached in such a short time, compared with the age of the Universe, could be the result of a type of interaction between dark matter and normal matter in addition to gravity", suggests Ignacio Trujillo, an IAC researcher and a co-author of this article. "The exact nature of this mechanism needs to be explored, but the consequences could be fascinating to understand just what is this component which dominates the total amount of matter in the Universe".

Credit: 
Instituto de Astrofísica de Canarias (IAC)

Models for potential precursors of cells endure simulated early-Earth conditions

Image: Membraneless compartments, called complex coacervates, which form micrometer-sized droplets (center), are widely studied as models of protocells, a potential step in the evolution of life on Earth. New research shows that the droplets behave as predicted by an experimentally derived phase diagram (left) in response to a proposed early-Earth environmental process: the wet-dry cycle that occurs as small ponds or puddles evaporate and reform. The preference of RNA molecules (fluorescently labeled red in the right panel) to accumulate inside the droplets decreases as the solution dries. Credit: Hadi Fares, Penn State

Membraneless compartments--models for a potential step in the early evolution of cells--have been shown to persist or form, disappear, and reform in predictable ways through multiple cycles of dehydration and rehydration. Such wet-dry cycles were likely common conditions during the early development of life on Earth and could be a driving force for reactions important for the evolution of life.

Understanding how the compartments--known as complex coacervates--respond to wet-dry cycling also informs current applications of the droplets, which are found in many household items, such as adhesives, cosmetics, fragrances, and food, and could be used in drug delivery systems. A paper describing the research, led by Penn State scientists, appears October 27, 2020 in the journal Nature Communications.

"Wet-dry cycling has gotten attention recently in attempts to produce molecules that could be the precursors to life, things like the building blocks of RNA, DNA, and proteins," said Hadi Fares, a NASA Postdoctoral Program Fellow at Penn State and the first author of the paper. "We are looking into a possible step further in the evolution of life. If these building blocks form compartments--the precursors of cells--what happens if they undergo the same type of wet-dry cycling?"

The researchers make membraneless compartments, which form through liquid-liquid phase separation in a manner akin to oil droplets forming as a salad dressing separates, by controlling the concentrations of reagents in a solution. When the conditions--pH, temperature, salt and polymer concentrations--are right, droplets form that contain higher concentrations of the polymers than the surrounding solution. Like oil drops in water, there is no physical barrier or membrane that separates the droplets from their surroundings.

Dehydrating the solution, like what could happen during dry periods on a pre-life Earth where small ponds or puddles might regularly dry up, changes all of these factors. The researchers, therefore, wanted to know what would happen to the membraneless compartments in their experimental system if they recreated these wet-dry cycles.

"We first mapped out how the compartments form when we alter the concentrations of the polymers and the salt," said Fares. "This 'phase diagram' is experimentally determined and represents the physical chemistry of the system. So, we know whether or not droplets will form for different concentrations of polymers and salt. We can then start with a solution with concentrations at any point on this phase diagram and see what happens when we dehydrate the sample."

If the researchers start with a solution with concentrations that favor the formation of droplets, dehydration can change the concentrations such that the droplets disappear. The droplets then reappear when the sample is rehydrated. The researchers can also start with a solution in which no droplets form, and dehydration can bring the concentrations into the range in which droplets begin to form. The behavior of the droplets during dehydration and rehydration matches the predictions based on the experimentally derived phase diagram, and it continues to do so through several iterations of the wet-dry cycle.
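
The logic of that experiment can be sketched in a few lines of Python. The phase boundary below is invented, not the experimentally measured diagram; the only idea carried over is that removing water scales every solute concentration by the same factor, moving the system across the diagram and back again.

```python
# Toy sketch (invented numbers, not the paper's measured phase diagram):
# dehydration removes water, so every solute concentration scales by the
# same factor; we track whether the state sits inside a droplet-forming
# region of a hypothetical (polymer, salt) phase diagram.

def droplets_form(polymer_mM, salt_mM):
    """Hypothetical two-phase region: enough polymer, not too much salt."""
    return polymer_mM >= 5.0 and salt_mM <= 40.0 * (polymer_mM / (polymer_mM + 5.0))

def dry_then_rewet(polymer_mM, salt_mM, steps=6, factor=1.6):
    state = (polymer_mM, salt_mM)
    for label, f in [("drying", factor), ("rewetting", 1.0 / factor)]:
        for _ in range(steps):
            state = (state[0] * f, state[1] * f)   # water loss/gain scales both
            print(f"{label}: polymer={state[0]:6.1f} mM, salt={state[1]:6.1f} mM,"
                  f" droplets={droplets_form(*state)}")

dry_then_rewet(polymer_mM=2.0, salt_mM=5.0)   # starts outside the two-phase region
```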

Next, the researchers addressed the ability of droplets to incorporate RNA molecules inside of the membraneless compartments. The "RNA world" hypothesis suggests that RNA may have played an important role in the early evolution of life on Earth and previous experimental work has shown that RNA in these solutions becomes concentrated inside of the droplets.

"As we dry droplets that contain RNA, the overall concentration of RNA in the solution increases but the concentration of RNA inside the droplets remains fairly stable," said Fares. "The preference of RNA molecules to be inside the droplets seems to decrease. We believe that this is because as they dry the composition inside the droplets is changing to look more like the composition outside the droplets."

The research team also looked at the ability of RNA to move into and within the droplets during dehydration. As they dry the sample the movement of RNA into and out of the droplets increases massively, but movement within the droplets increases only modestly. This difference in RNA mobility could have implications for the exchange of RNA among droplets during dehydration, which could in turn be functionally important in protocells.

"What we are showing is that as the membraneless compartments dry, they are able to preserve, at least to some extent, their internal environment," said Fares. "Importantly, the behavior of the coacervates, or protocells, whether they persist or disappear and reappear through the wet-dry cycle, is predicable from the physical chemistry of the system. We can therefore use this model system to think about the chemistry that might have been important for the early evolution of life."

Beyond early life scenarios, the research has implications much closer to home.

"People underestimate how important coacervates are beyond their role as a model for protocells," said Christine Keating, Distinguished Professor of Chemistry at Penn State and leader of the research team. "Many of the things that you have in your house that appear cloudy have coacervates in them. Any time you want to compartmentalize something, whether it's for drug delivery, a fragrance, a nutrient, or food product, coacervates may be involved. Understanding something new about the physical chemistry of the process of droplet formation will be important for all of these things."

Credit: 
Penn State

Machine learning helps hunt for COVID-19 therapies

EAST LANSING, Mich. - Michigan State University Foundation Professor Guowei Wei wasn't preparing machine learning techniques for a global health crisis. Still, when one broke out, he and his team were ready to help.

The group already has one machine learning model at work in the pandemic, predicting the consequences of mutations to SARS-CoV-2. Now, Wei's team has deployed another to help drug developers focus on their most promising leads for attacking one of the virus's most compelling targets. The researchers shared their findings in the peer-reviewed journal Chemical Science.

Prior to the pandemic, Wei and his team were already developing machine learning computer models -- specifically, models that use what's known as deep learning -- to help save drug developers time and money. The researchers "train" their deep learning models with datasets filled with information about proteins that drug developers want to target with therapeutics. The models can then make predictions about unknown quantities of interest to help guide drug design and testing.

Over the past three years, the Spartans' models have been among the top performers in a worldwide competition series for computer-aided drug design known as the Drug Design Data Resource, or D3R, Grand Challenge. Then COVID-19 came.

"We knew this was going to be bad. China shut down an entire city with 10 million people," said Wei, who is a professor in the Departments of Mathematics as well as Electrical and Computer Engineering. "We had a technique at hand, and we knew this was important."

Wei and his team have repurposed their deep learning models to focus on a specific SARS-CoV-2 protein called its main protease. The main protease is a cog in the coronavirus's protein machinery that's critical to how the pathogen makes copies of itself. Drugs that disable that cog could thus stop the virus from replicating.

What makes the main protease an even more attractive target is that it's distinct from all known human proteases, which isn't always the case. Drugs that attack the viral protease are thus less likely to disrupt people's natural biochemistry.

Another advantage of the SARS-CoV-2 main protease is that it's nearly identical to that of the coronavirus responsible for the 2003 SARS outbreak. This means that drug developers and Wei's team weren't starting completely from scratch. They had information about the structure of the main protease and about chemical compounds called protease inhibitors that interfere with the protein's function.

Still, gaps remained in understanding where those protease inhibitors latch onto the viral protein and how tightly. That's where the Spartans' deep learning models came in.

Wei's team used its models to predict those details for over 100 known protease inhibitors. That data also let the team rank those inhibitors and highlight the most promising ones, which can be very valuable information for labs and companies developing new drugs, Wei said.
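
The ranking step itself is simple once predicted affinities are available. The sketch below is not the MSU group's deep learning model; it assumes some trained predictor (here a stand-in with made-up values) that returns a binding free energy for each named candidate, and sorts the candidates from tightest to weakest predicted binding.

```python
# Minimal sketch of the ranking step only (not the published model): assume
# a trained predictor returns a binding free energy in kcal/mol for each
# candidate protease inhibitor, then sort candidates by it. The predictor
# below is a stand-in; the names and values are hypothetical.

from typing import Callable, Dict, List, Tuple

def rank_inhibitors(candidates: List[str],
                    predict_affinity: Callable[[str], float]) -> List[Tuple[str, float]]:
    """Sort candidates from strongest to weakest predicted binding
    (more negative binding free energy = tighter binding)."""
    scored = [(name, predict_affinity(name)) for name in candidates]
    return sorted(scored, key=lambda pair: pair[1])

# Stand-in predictor with made-up numbers, used only to exercise the ranking.
_FAKE_PREDICTIONS: Dict[str, float] = {
    "inhibitor_A": -7.2, "inhibitor_B": -9.1, "inhibitor_C": -6.4,
}

if __name__ == "__main__":
    ranked = rank_inhibitors(list(_FAKE_PREDICTIONS),
                             lambda name: _FAKE_PREDICTIONS[name])
    for name, dg in ranked:
        print(f"{name}: predicted binding free energy = {dg:.1f} kcal/mol")
```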

"In the early days of a drug discovery campaign, you might have 1,000 candidates," Wei said. Typically, all those candidates would move to preclinical tests in animals, then maybe the most promising 10 or so can safely advance to clinical trials in humans, Wei explained.

By focusing on drugs that are most attracted to the protease's most vulnerable spots, drug developers can whittle down that list of 1,000 from the start, saving money and months, if not years, Wei said.

"This is a way to help drug developers prioritize. They don't have to waste resources to check every single candidate," he said.

But Wei also had a reminder. The team's models do not replace the need for experimental validation, preclinical or clinical trials. Drug developers still need to prove their products are safe before providing them for patients, which can take many years.

For that reason, Wei said, antibody treatments that resemble what immune systems produce naturally to fight the coronavirus will most likely be the first therapies approved during the pandemic. These antibodies, however, target the virus's spike protein rather than its main protease. Developing protease inhibitors would thus provide a welcome addition to the arsenal against a deadly and constantly evolving enemy.

"If developers want to design a new set of drugs, we've shown basically what they need to do," Wei said.

Credit: 
Michigan State University

Sea turtle nesting season winding down in Florida, some numbers are up and it's unexpected

Image: A loggerhead hatchling climbs out of a footprint along the beach. Permit: Florida MTP-186. Credit: G. Stahelin, UCF MTRG

Florida's sea turtle nesting surveying comes to a close on Halloween and like everything else in 2020, the season was a bit weird.

The number of green sea turtle nests on the central and southern Brevard County, Florida, beaches monitored by University of Central Florida biologists was way up during a year when it should have been down, based on nearly 40 years of historical data.

"Usually, green turtles alternate between high years and low years, but this year they defied expectations," says Chris Long, a doctoral candidate and research assistant with UCF's Marine Turtle Research Group. "Green turtles had the fifth highest year on the Archie Carr Refuge that we've recorded since 1982. There is no evidence pointing to high nesting as a result of fewer people on the beaches or anything pandemic-related like that. It's difficult to know why nesting differed from expectation."

East-Central Florida's coastline (from Brevard to Indian River County) is among the most important nesting areas in the world for loggerhead sea turtles, and it also hosts about one-third of all green turtle nests in the state. The region is at the northern end of a "hotspot" for leatherbacks, which nest on the local beaches at a smaller scale as well. All sea turtles in the U.S. are protected under the Endangered Species Act.

UCF has run a sea turtle monitoring and research program on the beaches of the Archie Carr National Wildlife Refuge (ACNWR) in southern Brevard County for more than 35 years. UCF findings about sea turtle abundance and behavior are among the reasons the refuge was created in 1991. The UCF Marine Turtle Research Group focuses on long-term nesting beach and coastal juvenile sea turtle research in Brevard and Indian River counties locally. The group also studies the oceanic "lost years" tracking turtles in the Gulf of Mexico, North and South Atlantic, and Indian Oceans.

All sea turtles saw an increase in nests along the coastline this year compared to recent years. Here's a look at the numbers recorded by the UCF Marine Turtle Research Group, covering the 13 northernmost miles of the Archie Carr National Wildlife Refuge. Final counts won't be tallied until Oct. 31:

Green turtle nests:
2020: 8,110 (unexpectedly high for a "low year")
2019: 15,784 (record, "high year")
2018: 1,230 (typical "low year")

Loggerhead nests:
2020: 12,968
2019: 10,813
2018: 11,901

Leatherback nests:
2020: 40
2019: 36
2018: 17

Note: there are no clear trends in local leatherback counts; the highest recorded total was 55 nests, in 2016.

Credit: 
University of Central Florida

Fish banks

Society will require more food in the coming years to feed a growing population, and seafood will likely make up a significant portion of it. At the same time, we need to conserve natural habitats to ensure the health of our oceans. It seems like a conflict is inevitable.

"Marine protected areas are tools commonly used to conserve marine biodiversity by closing parts of the ocean to fishing," said Reniel Cabral, an assistant researcher at UC Santa Barbara's Environmental Market Solutions Lab. "This creates a potential dilemma when closures cause fishers to lose access to fishing grounds."

A new study indicates that this need not be the case. The paper outlines where the benefits of fishing restrictions could enable a fishery to become more productive, even with the closures. The research, published in the Proceedings of the National Academy of Sciences, lays the groundwork for expanding marine conservation alongside fishery catches.

The benefits of a well-considered marine protected area (MPA) can bolster the productivity of surrounding fisheries, especially when those fisheries are overexploited. The refuge enables populations to rebuild and then spill over into surrounding waters. Protecting an area from fishing also enables resident fish to grow older and larger, and scientists have found that these fish are proportionately more fertile than their smaller counterparts.

What's more, fishing is not well regulated in many regions. The activity can be decentralized and target many different species using a variety of methods. Managing the industry can be nearly impossible, especially for agencies that are underfunded and underpowered. In this context, designating an MPA is relatively simple, especially compared to other management strategies.

"Past studies have shown that MPAs can improve catch when designed well and under the right fishery conditions," said Cabral, the study's lead author. "We asked how you could design a network of marine protected areas to improve fishery productivity, and what the results would be."

Cabral and his colleagues at UC Santa Barbara, the Hawai'i Institute of Marine Biology, and the National Geographic Society began constructing a model of global fisheries that would account for both biologic and economic factors. They leveraged a database of 4,000 fishery stocks, their ecological characteristics, management status and global distributions in combination with a wealth of information on fisheries catch and fisher behavior in response to marine protected areas.

The resulting bio-economic model forecasts how fish populations would respond to the creation of new MPAs based on a variety of factors such as the location and status of fisheries and species mobility and growth rates. This enabled the team to project harvest outcomes over a variety of different reserve designs. The researchers could then see where MPAs would be most beneficial.
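
A stripped-down caricature of that idea, not the paper's bio-economic model, is sketched below: a single overfished stock with a protected fraction R of its habitat, logistic growth in each area, and simple spillover from the reserve into the fished area. All parameter values are invented for illustration.

```python
# Toy sketch (illustrative only): one fish stock with logistic growth, a
# fraction R of its habitat closed to fishing, the rest fished hard at rate F,
# and spillover that moves biomass toward equal density across the two areas.

def equilibrium_catch(R=0.05, F=0.9, r=0.4, K=1.0, mix=0.3, years=300):
    res, opn = 0.5 * R * K, 0.5 * (1 - R) * K       # start each area half full
    catch = 0.0
    for _ in range(years):
        if R > 0:                                   # growth inside the reserve
            res += r * res * (1 - res / (R * K))
        opn += r * opn * (1 - opn / ((1 - R) * K))  # growth in the fished area
        catch = F * opn                             # harvest only outside
        opn -= catch
        if 0 < R < 1:                               # spillover toward equal density
            dens_res, dens_opn = res / (R * K), opn / ((1 - R) * K)
            flow = mix * (dens_res - dens_opn) * min(R, 1 - R) * K
            res -= flow
            opn += flow
    return catch

for R in (0.0, 0.05, 0.2):
    print(f"protected fraction {R:.2f}: long-run annual catch "
          f"= {equilibrium_catch(R=R):.4f} (in units of K)")
```

With heavy fishing pressure (F = 0.9) the unprotected stock collapses and the catch goes to zero, while modest closures sustain a positive harvest, which is the qualitative point the paper makes about overexploited fisheries.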

"We found that there are a lot of places where you can get food benefits," said coauthor Steve Gaines, dean of UCSB's Bren School of Environmental Science & Management. "So, rather than having this traditional battle between fisheries and conservation, we can now identify the strategic places where we can potentially get both conservation and fishery benefits."

Currently, only 2.5% of the ocean is covered by highly protected MPAs. The study found that strategically protecting an additional 5% of the ocean could increase future catch by 20%, or 9 to 12 million metric tons of fish.

The most promising locations tended to cluster around the South Pacific, southeast Africa and the temperate coasts of North and South America. These are regions where well-placed MPAs have the greatest potential for increasing local catch, whether due to the ecology of the stocks, poor fishery regulation or a combination of the two.

The results offer a rubric for determining what the best strategies will be and what regions will yield the best return. It's a gestalt look at the interplay between marine protected areas and fisheries that can be further developed in the future.

The projections also come only from stocks that scientists have data on. There are plenty of species without enough data for analysis, and they're likely to be in far worse shape than those we do keep tabs on, Gaines explained. For these reasons, the team believes the benefits to fisheries would likely exceed the predictions in their paper.

The model's strength lies in its ability to highlight areas where a marine reserve could have high potential benefits for fishing. "Our model can identify areas where MPAs would really improve fishery productivity," said Cabral, "but designing those at the local level will need to be site specific."

In addition to being a relatively simple management tool, marine protected areas can provide a great starting point for getting communities involved in fishery management and encouraging ocean stewardship. "MPAs can encourage community participation, which increases the attention they pay to improving their management for fisheries that are not protected," Cabral said.

Of course, to have this effect, governments need to actively include communities in the planning process. When done well, these stakeholders become active participants in shaping the future of their own resources.

The Environmental Market Solutions Lab is currently applying this model at the country level as well. The results will establish a framework for individual nations to understand the potential benefits of their own marine protected areas.

The team has also taken an active role in applying its research to real-world MPAs. "We are working with multiple countries around the world to support their efforts to place hundreds of thousands of square kilometers of ocean area under protection," said Darcy Bradley, one of the paper's coauthors and the co-director of the lab's Ocean and Fisheries Program.

The group helps with spatial prioritization -- leveraging studies like this one -- as part of the planning process. They also collaborate with national agencies to design fishery monitoring programs and perform fishery assessments. "In each of these engagements, our goal is to take the best available MPA science and translate it into practical outputs to support thriving ocean ecosystems and economies," she said.

Credit: 
University of California - Santa Barbara

Tracing the source of illicit sand--can it be done?

Image: Sand. Credit: Zachary Sickmann

Boulder, Colo., USA: If you've visited the beach recently, you might think sand is ubiquitous. But for construction uses, the perfect sand and gravel is not always an easy resource to come by. "Not all sand is equal in terms of what it can be used for," notes Zack Sickmann, coauthor of a new study to be presented on Thursday at the Geological Society of America annual meeting. He says concrete aggregate needs sand with a certain size and angularity--the kind that is considered "immature" and often, but not always, found in rivers.

The demand for sand has exploded since WWII. "The consumption of sand and gravel and crushed rock started accelerating [after the Second World War] as a result of all the construction and infrastructure development and urbanization that was happening in many areas," says Aurora Torres, coauthor of the study.

Although construction in Europe and North America has slowed, Torres notes that developing nations are experiencing rapid urbanization, and with it an increased need for resources. Projected demand for sand and gravel will likely jump from 35 gigatons per year (2011 levels) to 82 gigatons per year by 2060.

Sand mining, whether legal or illegal, has environmental and social impacts. Sand removal can affect agriculture, water quality, air quality, sediment availability, and human health. While regulations can help lessen impacts, some of this extraction and transport is illegal and unmonitored. It's this illicit trade that the team is focusing on.

Tracing Sand

The answer might be in the grains themselves. Sand is a strong forensic marker--it reflects the unique rocks within a drainage basin and keeps the record from extraction to construction. "There's no reason why I couldn't go take a core of concrete of an existing skyscraper and take that same compositional signature and tie it back to a source," notes Sickmann.

Being able to quickly determine if mined sand is from an illegal area is an important step in tamping down illicit sand trade. Sickmann says that it's possible to determine which river produced a truckload of sand, but "the real crux of the issue is finding the most effective way to use the method in a regulatory capacity."

Sickmann and Torres want to find the most cost effective way to quickly and effectively fingerprint sand from different sources. They plan to start a proof-of-concept test on domestic (U.S.) sand first, where the sources and processing locations are well known. They and colleagues around the U.S. will collect concrete from home improvement stores and see if they can trace back the source of the sand. If they can successfully identify the catchment where the sand was sourced, they can move their methods to other global regions.
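
The matching step they envision can be illustrated with a toy example. The sketch below is not the study's method, and the "fingerprints" (quartz-feldspar-lithic fractions for three hypothetical rivers) are made up; it simply assigns an unknown sample to the closest stored signature.

```python
# Illustrative sketch only (hypothetical numbers, not the study's method):
# treat each candidate source river as a compositional "fingerprint"
# (fractions of quartz, feldspar, lithic grains) and assign an unknown
# sand sample to the closest source by Euclidean distance.
import math

SOURCES = {                      # hypothetical source signatures (Q, F, L fractions)
    "River 1": (0.80, 0.12, 0.08),
    "River 2": (0.55, 0.30, 0.15),
    "River 3": (0.40, 0.20, 0.40),
}

def closest_source(sample, sources=SOURCES):
    """Return (source_name, distance) of the best-matching fingerprint."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(((name, dist(sample, sig)) for name, sig in sources.items()),
               key=lambda pair: pair[1])

unknown = (0.58, 0.27, 0.15)     # composition measured from a concrete core (made up)
print(closest_source(unknown))   # -> ('River 2', ...)
```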

Eradicating Illicit Sand

The team thinks a quick sand fingerprinting method would be perfect for areas in Southeast and South Asia. "Not only are there areas of concern there with respect to illicit supply networks, the geology of Southeast Asia and South Asia is complex, and should provide a lot of leverage in distinguishing sand from different regional areas of concern," notes Sickmann. The more diverse the rocks are in a basin, the easier it is to distinguish where the sand originated from.

Sand provenance identification is critical for reducing the impacts of illegal sand mining. "This is a very important issue," says Torres. "I see this as a valuable tool that will complement other strategies being put in place to track and monitor these illicit supply networks."

Credit: 
Geological Society of America

High-sugar diet can damage the gut, intensifying risk for colitis

Image: Magnified photos show colon sections from control mice (left) and mice fed a high-glucose diet for seven days (right). The protective mucus, which separates the colon's epithelial lining (at bottom) from fecal content (above), was thinner in mice given glucose. Credit: UT Southwestern Medical Center

DALLAS - Oct. 28, 2020 - Mice fed diets high in sugar developed worse colitis, a type of inflammatory bowel disease (IBD), and researchers examining their large intestines found more of the bacteria that can damage the gut's protective mucus layer.

"Colitis is a major public health problem in the U.S. and in other Western countries," says Hasan Zaki, Ph.D., who led the study that appears in today's Science Translational Medicine. "This is very important from a public health point of view."

Colitis can cause persistent diarrhea, abdominal pain, and rectal bleeding. The number of American adults suffering from IBD (which includes Crohn's disease) jumped from 2 million in 1999 to 3 million in 2015, according to the Centers for Disease Control and Prevention. In addition, colitis is beginning to show up in children, who historically did not suffer from it, says Zaki, a UT Southwestern assistant professor of pathology.

Because of the disease's much higher prevalence in Western countries, researchers have looked to the Western-style diet - high in fat, sugar, and animal protein - as a possible risk factor, says Zaki. While high-fat diets have been found to trigger IBD, the role of sugar has been more controversial, he says.

This new study points to sugar - particularly the glucose found in high fructose corn syrup developed by the food industry in the 1960s and then increasingly used to sweeten soft drinks and other foods - as a prime suspect. "The incidence of IBD has also increased in Western countries, particularly among children, over this same period," according to the study.

UT Southwestern researchers fed mice a solution of water with a 10 percent concentration of various dietary sugars - glucose, fructose, and sucrose - for seven days. They found that mice that were either genetically predisposed to develop colitis, or those given a chemical that induces colitis, developed more severe symptoms if they were first given sugar.

The researchers then used gene-sequencing techniques to identify the types and prevalence of bacteria found in the large intestines of mice before and after receiving their sugar regimen. After being given sugar treatments for seven days, those fed sucrose, fructose, and - especially - glucose showed significant changes in the microbial population inside the gut, according to the study.

Bacteria known to produce mucus-degrading enzymes, such as Akkermansia, were found in greater numbers, while some other types of bugs considered good bacteria and commonly found in the gut, such as Lactobacillus, became less abundant.

The researchers saw evidence of a thinning of the mucus layer that protects the lining of the large intestine as well as signs of infection by other bacteria. "The mucus layer protects intestinal mucosal tissue from infiltration of gut microbiota," the study explains. "Higher abundance of mucus-degrading bacteria, including Akkermansia muciniphila and Bacteroides fragilis, in glucose-treated mice is, therefore, a potential risk for the intestinal mucus barrier.

"Due to the erosion of the mucus layer, gut bacteria were in close proximity with the epithelial layer of the large intestine in glucose-treated mice," the study continues. "Breaching of the epithelial barrier is the key initiating event of intestinal inflammation."

Although glucose had the greatest effect, "all three simple sugars profoundly altered the composition of gut microbiota," the study reports. Previous studies have shown that gut microbiota of both humans and mice can change rapidly with a change in diet. "Our study clearly shows that you really have to mind your food," says Zaki.

After finding changes in the gut microbiota in sugar-fed mice, the researchers fed feces from the sugar-treated mice to other mice. Those mice developed worse colitis, suggesting that glucose-induced susceptibility to colitis can be transmitted along with the destructive intestinal microbiota from affected animals.

Zaki says he now plans to study whether and how high sugar intake affects the development of other inflammatory diseases such as obesity, fatty liver disease, and neurodegenerative diseases like Alzheimer's.

Credit: 
UT Southwestern Medical Center

Physicists circumvent centuries-old theory to cancel magnetic fields

A team of scientists including two physicists at the University of Sussex has found a way to circumvent a 178-year-old theorem, which means they can effectively cancel magnetic fields at a distance. They are the first to be able to do so in a way that has practical benefits.

The work is expected to have a wide variety of applications. For example, patients with neurological disorders such as Alzheimer's or Parkinson's might in future receive a more accurate diagnosis. With the ability to cancel out 'noisy' external magnetic fields, doctors using magnetic field scanners will be able to see more accurately what is happening in the brain.

The study "Tailoring magnetic fields in inaccessible regions" is published in Physical Review Letters. It is an international collaboration between Dr Mark Bason and Jordi Prat-Camps at the University of Sussex, and Rosa Mach-Batlle and Nuria Del-Valle from the Universitat Autonoma de Barcelona and other institutions.

"Earnshaw's Theorem" from 1842 limits the ability to shape magnetic fields. The team were able to calculate an innovative way to circumvent this theory in order to effectively cancel other magnetic fields which can confuse readings in experiments.

In practical terms, they achieved this through creating a device comprised of a careful arrangement of electrical wires. This creates additional fields and so counteracts the effects of the unwanted magnetic field. Scientists have been struggling with this challenge for years but now the team has found a new strategy to deal with these problematic fields. While a similar effect has been achieved at much higher frequencies, this is the first time it has been achieved at low frequencies and static fields - such as biological frequencies - which will unlock a host of useful applications.
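
The flavour of that approach can be sketched numerically. The example below is not the published device: it assumes an arbitrary arrangement of four long straight wires, an Earth-strength ambient field, and a least-squares choice of currents so that the wires' summed field approximately cancels the ambient field over a small target region.

```python
# Sketch of the general idea only (not the published device): choose currents
# in a fixed arrangement of long straight wires so that their summed field
# cancels an unwanted uniform field over a small target region. Geometry,
# field values and the least-squares approach here are illustrative.
import numpy as np

MU0 = 4e-7 * np.pi

def wire_field(wire_xy, point_xy, current=1.0):
    """B field (Bx, By) at a point from an infinite straight wire along z."""
    r = np.array(point_xy, float) - np.array(wire_xy, float)
    d2 = r @ r
    # |B| = mu0 I / (2 pi d), direction = z_hat x r_hat
    return (MU0 * current / (2 * np.pi * d2)) * np.array([-r[1], r[0]])

# Four wires around a target region; ambient field to cancel (tesla).
wires = [(-0.2, -0.2), (-0.2, 0.2), (0.2, -0.2), (0.2, 0.2)]
targets = [(x, y) for x in (-0.02, 0.0, 0.02) for y in (-0.02, 0.0, 0.02)]
B_ambient = np.array([0.0, 2e-5])          # ~Earth-strength field along y

# The field at the targets is linear in the wire currents, so build a matrix.
A = np.zeros((2 * len(targets), len(wires)))
for j, w in enumerate(wires):
    for i, p in enumerate(targets):
        A[2 * i:2 * i + 2, j] = wire_field(w, p, current=1.0)
b = np.tile(-B_ambient, len(targets))      # want sum of wire fields = -B_ambient

currents, *_ = np.linalg.lstsq(A, b, rcond=None)
residual = A @ currents + np.tile(B_ambient, len(targets))
print("currents (A):", np.round(currents, 3))
print("max residual field (T): %.2e" % np.abs(residual).max())
```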

Other possible future applications for this work include:

Quantum technology and quantum computing, in which 'noise' from exterior magnetic fields can affect experimental readings

Neuroimaging, in which a technique called 'transcranial magnetic stimulation' activates different areas of the brain through magnetic fields. Using the techniques in this paper, doctors might be able to more carefully address areas of the brain needing stimulation.

Biomedicine, to better control and manipulate nanorobots and magnetic nanoparticles that are moved inside a body by means of external magnetic fields. Potential applications for this development include improved drug delivery and magnetic hyperthermia therapies.

Dr Rosa Mach-Batlle, the lead author on the paper from the Universitat Autonoma de Barcelona, said: "Starting from the fundamental question of whether it was possible or not to create a magnetic source at a distance, we came up with a strategy for controlling magnetism remotely that we believe could have a significant impact in technologies relying on the magnetic field distribution in inaccessible regions, such as inside of a human body."

Dr Mark Bason from the School of Mathematical and Physical Sciences at the University of Sussex said: "We've discovered a way to circumvent Earnshaw's theorem which many people didn't imagine was possible. As a physicist, that's pretty exciting. But it's not just a theoretical exercise as our research might lead to some really important applications: more accurate diagnosis for Motor Neurone Disease patients in future, for example, better understanding of dementia in the brain, or speeding the development of quantum technology."

Credit: 
University of Sussex