Tech

NASA reveals Tropical Cyclone Vayu's compact center

image: On June 12 at 4:05 a.m. EDT (0905 UTC), the MODIS instrument aboard NASA's Aqua satellite provided a visible image of Tropical Cyclone Vayu and showed a compact center of circulation.

Image: 
NASA/NRL

Visible imagery from NASA's Aqua satellite showed that Tropical Cyclone Vayu had a compact area of central dense overcast cloud cover. Vayu's center was offshore from India's Gujarat coast.

On June 12 at 4:05 a.m. EDT (0905 UTC), the Moderate Resolution Imaging Spectroradiometer, or MODIS, instrument aboard NASA's Aqua satellite provided a visible image of Vayu. Vayu's center was off the western coast of India, in the Arabian Sea. Although the storm was at hurricane strength, the MODIS image showed a cloud-filled center of circulation and a compact central dense overcast feature, approximately 90 to 100 nautical miles in diameter. The MODIS visible image also showed a thick band of thunderstorms wrapping into the low-level center from the western and southern quadrants. Meanwhile, satellite microwave imagery revealed a small eye beneath that overcast.

At 11 a.m. EDT (1500 UTC), the Joint Typhoon Warning Center reported that Tropical Cyclone Vayu was located near 19.4 degrees north latitude and 69.7 degrees east longitude, about 376 nautical miles south-southeast of Karachi, Pakistan. Vayu was tracking to the north-northwest. Maximum sustained winds were near 90 knots (104 mph/167 kph).

The Joint Typhoon Warning Center forecasts Vayu to strengthen slightly over the next day. Vayu is forecast to continue tracking north and then turn to the northwest, with its center remaining offshore from India as it moves toward Pakistan. The latest forecast turns Vayu to the west, keeping it from making landfall through June 17.

Credit: 
NASA/Goddard Space Flight Center

Organic carbon hides in sediments, keeping oxygen in atmosphere

image: The mixing of the organic-rich and sediment-rich waters of the Rio Negro and the Solimoes River in the Amazon basin.

Image: 
Photo by Chris Linder

A new study from researchers at the Woods Hole Oceanographic Institution (WHOI) and Harvard University may help settle a long-standing question--how small amounts of organic carbon become locked away in rock and sediments, preventing them from decomposing. Knowing exactly how that process occurs could help explain why the mixture of gases in the atmosphere has remained stable for so long, says lead author Jordon Hemingway, a postdoctoral researcher at Harvard and former student at WHOI. The paper was published June 14 in the journal Nature.

Atmospheric carbon dioxide (CO2), Hemingway notes, is an inorganic form of carbon. Plants, algae, and certain types of bacteria can pull that CO2 out of the air and use it as a building block for sugars, proteins, and other molecules in their bodies. The process, which occurs during photosynthesis, transforms inorganic carbon into an "organic" form, while releasing oxygen into the atmosphere. The reverse occurs when those organisms die: microbes start to decompose their bodies, consuming oxygen and releasing CO2 back into the air.

One of the key reasons Earth has remained habitable is that this chemical cycle is slightly imbalanced, Hemingway says. For some reason, a small percentage of organic carbon is not broken down by microbes, but instead stays preserved underground for millions of years.

"If it were perfectly balanced, all the free oxygen in the atmosphere would be used up as quickly as it was created," says Hemingway. "In order to have oxygen left for us to breathe, some of the organic carbon has to be hidden away where it can't decompose."

Based on existing evidence, researchers have developed two possible reasons why carbon is left behind. The first, called "selective preservation," suggests that some molecules of organic carbon may be difficult for microorganisms to break down, so they remain untouched in sediments once all others have decomposed. The second, called the "mineral protection" hypothesis, states that molecules of organic carbon may instead be forming strong chemical bonds with the minerals around them--so strong that bacteria aren't able to pluck them away and "eat" them.

"Historically, it's been hard to tease out which process is dominant. The tools we have for organic geochemistry haven't been sensitive enough," says Hemingway. For this study, he turned to a method called "ramped pyrolysis oxidation", or RPO, to test the hypotheses in sediment samples from around the globe. With a specialized oven, he steadily raised the temperature of each sample to nearly 1000 degrees Celsius, and measured the amount of carbon dioxide it released as it warmed. CO2 released at lower temperatures represented carbon with relatively weak chemical bonds, whereas carbon released at high temperatures denoted strong bonds that took more energy to break. He also measured the age of the CO2 using carbon dating methods.

"If organic molecules are being preserved because of selectivity--because microbes aren't able to break them down-- we would expect to see a pretty narrow range of bond strength in the samples. Microbes would have decomposed the rest, leaving only a few stubborn types of organic carbon behind," he says. "But we actually saw that the diversity of bond strengths grows rather than shrinks with time, indicating that a wide range of organic carbon types are being preserved. We think that means they're getting protection from minerals around them."

Hemingway also saw a pattern in the samples themselves that supported his findings. Fine clays like those found at river outlets had a consistently higher diversity of carbon bonds than coarse or sandy sediments, suggesting that fine sediments provide more surface area on which organic carbon could attach itself.

"If you take, say, granite from New Hampshire and break it down, you'll get a sort of sand. Those grains are relatively large, so there's not that much surface available to interact with organic matter. You really need fine sediments created via chemical weathering at the surface--things like phyllosilicate clays," says Valier Galy, a biogeochemist at WHOI and co-author on the paper.

Although this work provides strong evidence for one hypothesis over the other, Hemingway and his colleagues are quick to note that it doesn't provide a definitive answer to the organic carbon puzzle. "We were able to put our finger on the mechanism by which carbon is being preserved, but we don't provide information about other factors, like sensitivity to temperature in the environment, for instance. There are a lot of other factors to consider. This paper is intended as a sort of waypoint to direct biogeochemists in their research," says Galy.

Credit: 
Woods Hole Oceanographic Institution

Discovery of field-induced pair density wave state in high temperature superconductors

image: Superconductors are quantum materials that are perfect transmitters of electricity and electronic information. Presently, cuprates are the best candidate for highest temperature superconductivity at ambient pressure, operating at approximately -120 °C. Improving this involves understanding competing phases, one of which has now been identified.

Image: 
MPI CPfS, artist credit to K. Fujita, Brookhaven National Lab

Superconductors are quantum materials that are perfect transmitters of electricity and electronic information. Although they form the technological basis of solid-state quantum computing, they are also its key limiting factor because conventional superconductors only work at temperatures near -270 °C. This has motivated a worldwide race to try to discover higher temperature superconductors. Materials containing CuO2 crystal layers (cuprates) are, at present, the best candidate for highest temperature superconductivity, operating at approximately -120 °C. But room temperature superconductivity in these compounds appears to be frustrated by the existence of a competing electronic phase, and focus has recently been on identifying and controlling that mysterious second phase.

Superconductivity occurs when electrons form pairs of opposite spin and opposite momentum, and these "Cooper pairs" condense into a homogeneous electronic fluid. However, theory also allows the possibility that these electron pairs crystallize into a "pair density wave" (PDW) state where the density of pairs modulates periodically in space. Intense theoretical interest has emerged in whether such a PDW is the competing phase in cuprates.

To search for evidence of such a PDW state, a team led by Prof. JC Seamus Davis (University of Oxford) and Prof. Andrew P. Mackenzie (Max Planck Institute CPfS, Dresden), with key collaborators Dr. Stephen D. Edkins and Dr. Mohammad Hamidian (Cornell University) and Dr. Kazuhiro Fujita (Brookhaven National Lab.), used high magnetic fields to suppress the homogeneous superconductivity in the cuprate superconductor Bi2Sr2CaCu2O8. They then carried out atomic-scale visualization of the electronic structure of the new field-induced phase. Under these circumstances, modulations in the density of electronic states containing multiple signatures of a PDW state were discovered. The phenomena are in detailed agreement with theoretical predictions for a field-induced PDW state, implying that it is a pair density wave which competes with superconductivity in cuprates. This discovery makes it clear that in order to understand the mechanism behind the enigmatic high temperature superconductivity of the cuprates, this exotic PDW state needs to be taken into account, and therefore opens a new frontier in cuprate research.

Credit: 
Max Planck Institute for Chemical Physics of Solids

New web-based tool accelerates research on conditions such as dementia, sports concussion

image: The logo of brainlife.io, a cloud-computing web platform that allows scientists to track data and analyses on the brain.

Image: 
Image courtesy of brainlife.io.

Scientists in the United States, Europe and South America are reporting how a new cloud-computing web platform allows scientists to track data and analyses on the brain, potentially reducing delays in discovery.

The project, called brainlife.io, is led by Franco Pestilli, associate professor in the Indiana University Bloomington College of Arts and Sciences' Department of Psychological and Brain Sciences and a member of the IU Network Science Institute, in collaboration with colleagues across the university. At IU, it is speeding research on disorders such as dementia, sports concussion and eye disease.

A new paper on the project was published May 30 in the journal Scientific Data.

"Scientists are increasingly embracing modern technology to reduce human errors in scientific research practice," said Pestilli, who established brainlife.io in 2017 with support from the National Science Foundation and Microsoft.

"This article describes a unique mechanism by which scientists across the world can share data and analyses, which allows them to reproduce research results and extend them to new frontiers of human understanding," he added. "The benefit of such a platform is faster research on brain disease."

The system manages all aspects of research where people are more likely than machines to make mistakes, such as keeping track of data and code for analyses, storing information, and producing visualizations.

At IU, brainlife.io is being used to advance multiple health research studies. Examples include:

- Research on sports concussion and chronic traumatic encephalopathy led by Nicholas Port at the IU School of Optometry in Bloomington.

- Research on Alzheimer's disease led by Andrew Saykin at the IU School of Medicine in Indianapolis.

- Research on macular degeneration in collaboration with researchers in Japan and Europe, which was recently published in the journal Brain Structure and Function.

The new paper provides a "case study" on how to generate a full research study, including data collection, analysis and visualization, on the brainlife.io platform. It also describes how the system preserves data and analyses in a single digital record to create reusable research assets for other scientists to use in their work.

"I like to refer to the new technology as a process of 'data upcycling,'" Pestilli said. "The new records that scientists create and share on brainlife.io can be easily reused by others to go beyond the goals of the original study."

For example, data from a study on traumatic brain injury could potentially be combined with data from a study on Alzheimer's disease to understand underlying biological mechanisms in both conditions.

Importantly, Pestilli added, brainlife.io is designed to store and process data derived from diffusion-weighted magnetic resonance imaging -- a form of imaging that uses water molecules in the brain to create a highly detailed roadmap of the nerve tracts in the brain -- as well as tractography, a 3D modeling technique used to visually represent these tracts and understand the network of connections that make up the brain.

"The use these imaging techniques has revolutionized knowledge about networks inside the brain and the impact of the brain's white matter on human behavior and disease," Pestilli said. It also generates enormous amounts of data that require serious computer resources to store and analyze.

Some of this computing power comes from Microsoft, which chose brainlife.io as one of the first eight projects to benefit from the company's initiative to award $3 million in compute credits to projects under the NSF's Big Data Spokes and Planning projects, of which IU is a part. The project is also supported under the NSF's BRAIN Initiative, a federal effort to generate new physical and conceptual tools to understand the brain.

IU contributors to brainlife.io include Soichi Hayashi, a software engineer at the IU Pervasive Technology Institute; graduate students Brad Caron, Lindsey Kitchell, Brent McPherson and Dan Bullock; and undergraduate students Yiming Qian and Andrew Patterson. Bullock and McPherson were supported by grants from the National Institutes of Health and NSF.

Additional authors on the article include researchers at Indiana University, the University of Michigan at Ann Arbor, Northwestern University, the University of Trento in Italy and CONICET in Argentina.

What they're saying

"This research shares new methods and platforms that help neuroscientists collaborate across disciplinary boundaries, share data, reproduce results and strengthen neuroscience," said Kurt Thoroughman, a program officer at the NSF. "Findings like these demonstrate NSF's commitment to the BRAIN Initiative and efforts to improve our understanding of the neural basis of human cognition. These findings have implications across the brain sciences, including understanding normal brain functions and improving outcomes for the over 1 million Americans with diseases of the brain."

"This work is groundbreaking because it combines brain data with trackable, referenceable processing assets and powerful command line and visualization tools," said Vani Mandava, director of data science at Microsoft Research. "Supporting innovative solutions to technical challenges that support new insights and treatments on the brain -- like this interface that promotes open data and reproducible science -- is part of our data science research mission at Microsoft."

Credit: 
Indiana University

The brains of birds synchronize when they sing duets

image: This is a pair of P. mahali sitting in a tree below their nest. The male bird on the right was equipped with a vocal transmitter on its back and a neuronal transmitter on its head.

Image: 
Susanne Hoffmann / MPI for Ornithology

When a male or female white-browed sparrow-weaver begins its song, its partner joins in at a certain time. They duet with each other by singing in turn and precisely in tune. A team led by researchers from the Max Planck Institute for Ornithology in Seewiesen used mobile transmitters to simultaneously record neural and acoustic signals from pairs of birds singing duets in their natural habitat. They found that the nerve cell activity in the brain of the singing bird changes and synchronizes with its partner when the partner begins to sing. The brains of both animals then essentially function as one, which leads to the perfect duet.

White-browed sparrow-weavers (Plocepasser mahali) live together in small groups in trees in southern and eastern Africa. Each bird has a roosting nest with an entrance and an exit. The dominant pair will have a breeding nest which is easily recognisable by the fact that one passage is closed to prevent eggs from falling out. In addition to the dominant pair, there are up to eight other birds in the group that help build nests and raise the young. All group members defend their territory against rival groups through duets of the dominant pair and choruses together with the helpers.

White-browed sparrow-weavers are one of the few bird species that sing in duet. It was assumed that some cognitive coordination between individuals was required to synchronise the syllables in the duet; however, the underlying neuronal mechanisms of such coordination were unknown.

Miniature transmitters enable recording under natural conditions

"White-browed sparrow-weavers cannot develop their complex social structure in the laboratory. We were therefore only able to investigate the mechanisms of the duet singing in the natural habitat of the birds", says Cornelia Voigt, one of the three lead authors of the study. Because of this, researchers and technicians at the Max Planck Institute for Ornithology in Seewiesen developed mobile microphone transmitters to record the singing in the wild. These weigh only 0.6 g and were attached to the bird like a backpack.

With another newly developed transmitter, weighing only 1 g, the scientists could also make a synchronous record of the brain activity in the birds while they were singing in their natural environment. An antenna placed near the birds' tree recorded up to eight of these signals in parallel. With the help of an external sound card and a laptop, the singing and the brain signals were synchronously recorded with millisecond precision. "The technology we have developed must withstand the extreme conditions of the Kalahari Savannah in northern South Africa", says Susanne Hoffmann, a scientist in the Department of Behavioural Neurobiology. "The electronics for recording the signals were stored in a car. During the day, it got so hot that the laptop almost began to glow. But the recordings all worked well, even when the birds and their transmitters were caught in one of the few downpours".

Brain activity of the duetting birds synchronizes

Lisa Trost, also a scientist in the department, says: "Fortunately, the procedure for fixing the implants for neuronal measurements on the heads of the birds did not take long. After complete recovery, the respective bird was quickly returned to the group and did not lose its social status. All birds sang in the tree immediately after their return". The researchers recorded almost 650 duets. In many cases, the males began with the song and the partner joined in after some introductory syllables. The syllables between the duetting pair followed each other without delay and in perfect coordination. The coordination was so precise that analysis showed only a 0.25s delay between the duetting partners' singing bouts.

The singing of songbirds is controlled by a network of brain nuclei, the vocal control system. In one of these nuclei, the HVC, the call of the partner bird triggers a change in neuronal activity in the bird that began singing. This, in turn, affects its own singing. The result is a precise synchronization of the brain activity of both birds. "The rhythmic duet of the individuals is achieved through sensory information that comes from the partner", says Manfred Gahr, who led the study. The brains of the partners form a network that functions like an extended circuit to organize the temporal pattern for the duet. The researchers suspect that similar mechanisms are also responsible for coordinating movement during social interactions in humans (e.g. dancing with a partner).

"Until now, this kind of study has only been performed in the laboratory. Measuring the activity of nerve cells in the field using wireless transmitters is much less stressful for the birds," says Susanne Hoffmann. "We hope this study has laid the foundation for the further development of neuroethology".

Credit: 
Max-Planck-Gesellschaft

Researchers determine ideal areas and timing for biological control of invasive stink bug

CORVALLIS, Ore. - Biological control of the brown marmorated stink bug, an invasive pest that devastates gardens and crops, would be more effective in natural areas bordering crops or at times when certain insecticides aren't being applied, according to a new Oregon State University study.

The study, published in the Journal of Economic Entomology, advances the understanding of using the samurai wasp for biological control of the brown marmorated stink bug, and has significant implications for growers of orchard fruits and nuts, said David Lowenstein, a postdoctoral research associate in Oregon State's College of Agricultural Sciences and lead author on the study.

Biological control is the use of beneficial insects to manage other insects, which means using fewer pesticides. The brown marmorated stink bug, which is native to east Asia, has a taste for more than 100 types of crops, including blueberries, wine grapes, cherries and hazelnuts. During the 1990s, it invaded the United States and is now found in 44 states, causing millions of dollars in crop damage.

With funding from the United States Department of Agriculture's Specialty Crop Research Initiative, more than 50 researchers across the U.S., including those at Oregon State, are studying the stink bug to find management solutions.

The stink bug was first detected in Oregon in 2004. Since then, OSU researchers have found the pest in 24 of Oregon's 36 counties, and in all of the state's major fruit-producing regions. The bug also causes nuisance issues when it aggregates on the sides of homes and sheds in the fall.

The samurai wasp, which is smaller than a pinhead, is native to the same region in east Asia as the brown marmorated stink bug. It lays its eggs inside stink bug eggs, preventing the stink bugs from hatching. Although it is unknown how it arrived in the U.S., surveys conducted in 2014-15 detected the wasp in several locations. It was discovered in Oregon's Willamette Valley in 2016.

The Oregon State study investigated the impacts of different insecticides, commonly used in orchard crops, on samurai wasp survival and reproduction. Some insecticides were highly lethal to the wasp and a few were compatible with using the wasp for biological control.

Lowenstein said the research was necessary for growers, especially those in Oregon's hazelnut industry, who use insecticides but who also want to use the samurai wasp to control stink bugs. They want to know which insecticides will be less harmful to the wasp and when to spray, Lowenstein said.

"Since the discovery of the samurai wasp in Oregon, our research group at OSU has proposed biocontrol for managing the brown marmorated stink bug," Lowenstein said. "We needed to validate the compatibility of this wasp in a commercial environment where insecticides are being used."

The researchers studied samurai wasp compatibility with nine conventional and organic insecticides commonly used in integrated pest management in perennial crops, both in the laboratory and in three hazelnut orchards in the Willamette Valley.

They found that the active ingredients in two classes of insecticides - neonicotinoids and pyrethroids - killed more samurai wasps than the others, both in the field and in the lab. Both classes are "broad-spectrum" insecticides, which are designed to kill or manage a variety of insects.

However, more than 50% of wasps survived contact with insecticides that are better targeted to control chewing insects, such as filbertworm larvae, in hazelnut.

"For someone who wants biological control for samurai wasp, it's best to time it when you aren't applying chemicals unless you are using those more targeted compounds," Lowenstein said.

Orchards may also benefit from biological control if samurai wasps are released in unsprayed areas adjacent to agriculture and in urban areas, where the samurai wasp is thriving. In the last 2½ years OSU has released the wasp at about 60 sites in the state. At about 40% of those sites the wasp has survived into the following season.

The wasp isn't commercially available and OSU rears it primarily for research, but the insects can be distributed to an Oregon location upon request, Lowenstein said, adding that the samurai wasp isn't harmful to people.

"There's no way you are going to confuse this with a yellowjacket. They aren't interested in stinging people," he said. "If you have a samurai wasp on your property you won't even know it's there unless you are seeing its effect, which is less stink bugs."

Oregon State hosts a website with the latest news about the brown marmorated stink bug, photos to help identify it, and instructions on how to report sightings. A fact sheet about the samurai wasp is available through the OSU Extension Service.

Credit: 
Oregon State University

A matter of fine balance

image: Despite a more than million-fold difference in light intensity, our brains enable us to see the same scene in broad daylight and on a dim night by the process of normalization. This article shows how brains perform normalization by precisely balancing two equal and opposite forces - excitation and inhibition.

Image: 
Hrishikesh Nambisan

Balance is the key.

It's not exactly neuroscience; except that it is.

Since balance is key for almost everything we do--walking, cycling, and a million other things in life--it really is no surprise that electrical balance holds the key to understanding how our brains function.

In neural networks, excitation/inhibition (E/I) balance occurs when the average levels of excitatory signals match those of inhibitory signals received from all connections. Even individual neurons are known to maintain E/I balance, and a loss of this balance is linked to serious problems such as epilepsy, schizophrenia, and autism.

Although E/I balance is known to operate from the whole brain level to individual neurons, this has always been measured by recording electrical activity straight from animal brains, either when the brain is flooded with information about the world, or is chattering by itself. It has only recently been shown that single neurons in small circuits--composed of just 2 or 3 kinds of neurons--insulated from the rest of the brain are also E/I balanced.

The study, which shows this, comes from Upinder Bhalla's group at the National Centre for Biological Sciences (NCBS), Bangalore, and has been published in the journal eLife.

But what good does it do to the brain to maintain such precise balance between neurons? The work demonstrates that E/I balance, and another phenomenon known as E/I delay, together form the biophysical roots of 'normalization'; 'normalization' being the process by which our brains make sense of the world despite huge variations in the information they receive.

"Our first objective was to investigate if E/I balance was also maintained within a sub-circuit," say Aanchal Bhatia and Sahil Moza, who are two of the researchers behind this work.

Bhatia and Moza stimulated neurons in mouse brain slices with different patterns of light using a technique called optogenetics. To precisely control the stimulation they applied, the team designed a contraption of their own with a disembowelled DLP (digital light processing) projector; their final setup can now stimulate neurons (from tens to a few hundred) in random patterns. The team focused on a specific circuit in the hippocampal region of the brain called the CA3-CA1 circuit, known for its role in memory formation.

Thousands of measurements later, the researchers had their answer--sub-circuits also maintain a very precise E/I balance, even with totally random patterns of input!

"This was quite a surprise. These random patterns of input don't correspond to any real-world stimulus, and yet, the excitation and inhibition were precisely balanced.", exclaims Bhatia.

"Think of a neuron as a car. Excitation acts like the accelerator pedal, driving neurons to fire off a signal. Inhibition, however, acts like the brake pedal and pushes neurons away from firing," says Moza.

If a neuron is a car running at a constant speed, one understands that its driver must keep the accelerator and brakes in balance to maintain that speed. However, neurons usually do not have just a single driver--most neurons receive hundreds to thousands of inputs from other neurons--which, using the car analogy, translates to an equally large number of drivers. In E/I balance, all of these driver-neurons' inputs are weighted such that the neuron-car maintains its constant speed. However, the numbers of neuron-drivers for a neuron-car can and do keep changing. How then, does the neuron-car respond sensibly to inputs from such a large number and range of neuron-drivers?

"This is a clever trick that our brains use," says Moza. "As the numbers of neuron-drivers increase, each driver keeps shortening the delay between applying the accelerator and brakes, without ever changing the sum total of acceleration and brakes provided," he adds. What Moza describes with this analogy, is the phenomenon of E/I delay, which has also been demonstrated in their study. E/I delay is a distinctive relationship between the strength of excitation and the timing of inhibition--as excitatory inputs become stronger, the delays between excitation and inhibition become shorter.

"The icing on the cake, however, is our discovery that E/I balance and E/I delay create the biophysical mechanism for a new form of normalization called 'subthreshold divisive normalization'," says Bhatia.

To understand what 'normalization' is, imagine that you are gazing out of your bedroom window at the familiar outlines of nearby buildings at midday, then imagine the same scene at midnight. Whether in the full blast of the midday sun or drenched in the mellow hues of sodium vapour street lamps at midnight, you would still recognise the scene outside your window as the same.

How are you doing this?

By 'normalization'. The brain literally 'normalizes' its reactions by dividing the response of each neuron by a common factor that is usually the summed activity of a pool of neurons. This is how one picture, despite sending hugely different signals under different conditions, can still be recognised by our brains as being the same.

"Neurons calculate first, shoot later," says Moza, to emphasize that neurons 'normalize' all their responses to inputs before they fire off a signal.

"People have been trying to understand how neurons add up and process the inputs they receive. In other words, does 1+1 equal 2 for a neuron, or is it doing something else?" asks Bhatia.

It turns out that neurons are doing something much more complex than just adding up their inputs. They perform a more complex operation--subthreshold divisive normalization--in which the output does not increase in proportion to the input. In fact, the rate at which the output grows decreases as the input increases.

Bhatia neatly sums this up by explaining that for the neurons in this study, "1+1 is close to 2, 1+1+1 is a bit less than 3, 1+1+1+1+1 is further less than 5, and this trend continues such that 1+1+1+1+1+1+1+1+1 is way lesser than 9, and so on".
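
The arithmetic Bhatia describes is exactly what a divisive-normalization curve produces. The sketch below assumes a generic textbook form of the operation, scaled so that a single unit input yields an output of 1; the constant k and the functional form are illustrative choices, not parameters fitted in the study.

```python
# Toy divisive normalization (illustrative, not the study's fitted model):
# the summed input S is divided by a factor that grows with S, so
# responses add sublinearly, as in the "1+1 is close to 2" pattern.
def normalized_response(S, k=0.05):
    """Output for total input S; k sets the normalization strength."""
    return S * (1 + k) / (1 + k * S)  # scaled so one unit input gives 1.0

for n in (1, 2, 3, 5, 9):
    print(n, "inputs ->", round(normalized_response(n), 2))
# 1 -> 1.0, 2 -> 1.91, 3 -> 2.74, 5 -> 4.2, 9 -> 6.52
```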

Upinder Bhalla, the third author on the paper, says that this work has broad implications for how the brain computes. "Imagine that you're a cell in a crowd, with a thousand voices coming your way. Having a close balance means that you can ignore most of the thousand inputs, and selectively pay attention to those few signals that are particularly important, by tweaking the balance." He adds, "The study was also an excellent example of E/I (Experiment/IT) balance, where Aanchal did these delicate experiments, and Sahil provided the Information Theory and analysis."

Credit: 
National Centre for Biological Sciences

Not silent at all

The so-called "silent" or "synonymous" genetic alterations do not result in altered proteins. But they can nevertheless influence numerous functions of the cell and thus also disease processes. Scientists from the German Cancer Consortium, German Cancer Research Center, and the University of Freiburg have now created a comprehensive database of all synonymous mutations ever found in cancer. This is a "reference book" that provides cancer researchers with all available information on each of these supposedly "silent" mutations at a glance. Using the example of an important oncogene, the researchers show how synonymous mutations can influence the function of this cancer driver.

Cancer diseases are caused by changes in the genetic material. As a rule, genes that drive cancer growth (oncogenes) or counteract the development of cancer (tumor suppressor genes) are affected. In countless studies over the past decades, cancer researchers have analyzed which mutation plays which role in which type of cancer. However, they have largely focused on mutations that result in an altered amino acid sequence of proteins.

"A large proportion of the genetic alterations do not affect the amino acid sequence at all," explains Sven Diederichs, whose department is affiliated to the German Cancer Research Center (DKFZ), the University of Freiburg and the German Cancer Consortium (DKTK). This is because the genetic code for most amino acids has several "words" that differ in their last, third DNA building block. If a mutation affects this building block, nothing changes in the amino acid sequence and hence in the resulting protein. In the past, this was referred to as "silent" mutations, but now it is more likely to be "synonymous" mutations.

"Today we know that synonymous mutations play a role in many diseases and can, for example, affect the response of cancer to therapy. Nevertheless, their importance for the development of cancer is by no means as well understood as that of protein-changing mutations," said Diederichs. Synonymous mutations can intervene in many ways in important cellular processes: They affect the stability of RNA, its three-dimensional folded structure or how efficiently RNA is translated into proteins.

Diederichs and his colleagues from Heidelberg and Freiburg have now created an extensive database that contains all synonymous mutations discovered in the cancer genome and couples them with comprehensive additional information: What is the function of the affected gene? Which position within the gene is mutated? In which cancers has the mutation been discovered so far, and how often?

The SynMIC database contains a total of 659,194 entries concerning 88 different types of cancer. "Colleagues can use it as a reference book to obtain simple and comprehensive information about synonymous mutations that occur in the cancers they are dealing with," said Diederichs.

Using the important oncogene KRAS as an example, Diederichs and his team of researchers demonstrate in detail how synonymous mutations that have actually been discovered in cancer patients affect the RNA structure and protein production.

An estimated eight percent of all carcinogenic mutations affecting a single DNA building block are synonymous mutations. Previously, researchers had assumed that these supposedly "silent" mutations were not subject to selective pressure in cancer. But then they should be more or less randomly distributed over the genome - which is not the case, as the detailed decoding of cancer genomes in recent decades has shown.

Diederichs and colleagues have now found a further argument against the randomness hypothesis: particularly at the beginning of all protein-coding gene segments, they mapped considerably fewer mutations - protein-changing as well as synonymous - than in the further course of the genes. "This is an indication that any mutation in this area has a stronger effect on the cell. Selective pressure could then prevent mutated cells from being able to assert themselves," said Diederichs. "And this selective pressure would then obviously also exist against synonymous mutations."

Credit: 
German Cancer Research Center (Deutsches Krebsforschungszentrum, DKFZ)

Overdose, suicide among leading reasons for deaths of new moms

EAST LANSING, Mich. - Overdoses and suicides were among the most common reasons for mothers dying within a year of giving birth in California, according to a new study from Michigan State University and the University of California, Merced.

Lead author Sidra Goldman-Mellor, a psychiatric epidemiologist at UC Merced, and co-author Claire Margerison, a perinatal epidemiologist at MSU, studied more than 1 million California hospital records from 2010 to 2012 to investigate the most common causes of postpartum death.

The study is published in the American Journal of Obstetrics and Gynecology and was funded by the National Institutes of Health.

While maternal death rates during and after pregnancy are on the rise in the United States, California is below the national average. Nevertheless, drug overdose was the second leading cause of death among California mothers within the first year after giving birth, and suicide ranked seventh.

Together, the two causes made up nearly 20% of all California postpartum deaths in those years. Risk of postpartum mortality due to drug overdose and suicide was higher among non-Hispanic white and low-income women.

"These deaths are rare but devastating for families," Margerison said. "We need to place more emphasis on prevention."

While both researchers agree that two years of data is not enough to identify trends, the study could be a jumping-off point for future work into drug- and suicide-related issues surrounding maternal health. The research could also be indicative of what's happening across the U.S.

"Reducing maternal mortality is a priority in the U.S. and worldwide," Goldman-Mellor said. "Drug-related deaths and suicide may account for a substantial and growing portion of maternal deaths, yet information on the incidence of and sociodemographic variation in these deaths is scarce."

Mortality rates have declined in recent years in California because of concerted efforts to improve quality of care, but the researchers pointed out that their data shows mental health and substance use issues are still affecting a large number of new mothers.

"Most of these deaths occur in the second half of the year after birth," Goldman-Mellor said.

It's because of this, Margerison added, that the later postpartum period is an important time to continue providing women with substance use and mental health resources.

One factor contributing to these deaths might be the stigma and potential legal repercussions that still surround admitting to and seeking help for substance abuse or psychiatric problems, especially among new mothers, Goldman-Mellor said.

The researchers, who studied epidemiology together as graduate students at UC Berkeley, said they plan to work together to learn more about this and other topics surrounding maternal health outcomes.

Goldman-Mellor said further studies could deepen researchers' understanding of why certain women faced higher and/or lower risk for postpartum death due to drug overdose or suicide, including potentially important factors related to their socioeconomic status, health care access and cultural sources of support.

And, because about 75% of the women who died in the first year had accessed hospital emergency rooms at least once after giving birth, potential intervention points could be identified, as well.

"These deaths are likely just the tip of the iceberg in terms of substance use and mental distress," Margerison said. "We need to take the next steps to understand how to help women who experience these problems during and after pregnancy."

Credit: 
Michigan State University

Hybrid nanostructure steps up light-harvesting efficiency

image: As depicted in the illustration above, the hybrid nanostructure contains molybdenum diselenide (MoSe2) as the base, core-shell cadmium selenide (CdSe)-zinc sulfide (ZnS) quantum dots (QDs) on the outer side, and the allophycocyanin (APC) protein sandwiched between the QDs and MoSe2. When the system is excited with light (blue lightning strike symbol), energy is transferred in a stepwise manner through the different components, as indicated by the gray arrows. A top view of the APC protein structure is shown on the right.

Image: 
ACS Photonics

UPTON, NY--To absorb incoming sunlight, plants and certain kinds of bacteria rely on a light-harvesting protein complex containing molecules called chromophores. This complex funnels solar energy to the photosynthetic reaction center, where it is converted into chemical energy for metabolic processes.

Inspired by this found-in-nature architecture, scientists from the U.S. Department of Energy's (DOE) Brookhaven National Laboratory and Stony Brook University (SBU) have assembled a nanohybrid structure that contains both biologically derived (biotic) and inorganic (abiotic) materials. They combined a light-harvesting protein from a cyanobacterium, semiconducting nanocrystals (quantum dots), and a two-dimensional (2-D) semiconducting transition metal dichalcogenide only one atomic layer thick. Described in a paper published on April 29 in ACS Photonics--a journal of the American Chemical Society (ACS)--this nanostructure could be used to improve the efficiency with which solar cells harvest energy from the sun.

"Today's best solar panels can convert nearly 23 percent of the sunlight they absorb into electricity, but on average, their efficiency ranges between 15 and 18 percent," said corresponding author Mircea Cotlet, a materials scientist in the Soft and Bio Nanomaterials Group at Brookhaven Lab's Center for Functional Nanomaterials (CFN)--a DOE Office of Science User Facility. "If this efficiency can be boosted, more electricity can be generated. The assembled biotic-abiotic nanohybrid shows enhanced harvesting of light and generation of electrical charge carriers compared to the 2-D semiconductor-only structure. These properties increase the nanohybrid's response to light when the structure is incorporated into a field-effect transistor (FET), a kind of optoelectronic device."

In designing the nanohybrid, the scientists chose atomically thin 2-D molybdenum diselenide (MoSe2) as the platform for bottom-up assembly. Molybdenum diselenide is a semiconductor, or a material whose electrical conductivity is in between that of a regular conductor (little resistance to the flow of electrical current) and insulator (high resistance). They combined MoSe2 with two strong light-harvesting nanomaterials: quantum dots (QDs) and the allophycocyanin (APC) protein from cyanobacteria.

The scientists chose the components based on their light-harvesting properties and engineered the components' band gaps (minimum energy required to excite an electron to participate in conduction) such that a concerted stepwise energy transfer can be promoted through the nanohybrid in a directional manner. In the hybrid, energy flows from light-excited QDs to the APC protein and then to MoSe2. This energy transfer mimics natural light-harvesting systems where surface chromophores (in this case, QDs) absorb light and direct the harvested energy to intermediate chromophores (here, APC) and finally to the reaction center (here, MoSe2).

To combine the different components, the scientists applied electrostatic self-assembly, a technique based on the interactions between electrically charged particles (opposite charges attract; like charges repel). They then used a specialized optical microscope to probe the transfer of energy through the nanohybrids. These measurements revealed that the addition of the APC protein layer increases the energy transfer efficiency of the nanohybrid with single-layer MoSe2 by 30 percent. They also measured the photoresponse of the nanohybrid incorporated into a fabricated FET and found that it showed the highest responsivity relative to FETs containing only one of the components, producing more than double the amount of photocurrent in response to incoming light.

"More light is transferred to MoSe2 in the biotic-abiotic hybrid," said first author and research associate Mingxing Li, who is working with Cotlet in the CFN Soft and Bio Nanomaterials Group. "Increased light transfer combined with the high charge carrier mobilities in MoSe2 means more carriers will be collected by the electrodes in a solar cell device. This combination is promising for boosting device efficiency."

The scientists proposed that adding APC in between QDs and MoSe2 creates a "funnel-like" energy-transfer effect due to the way that APC preferentially orients itself relative to MoSe2.

"We believe this study represents one of the first demonstrations of a cascaded biotic-abiotic nanohybrid involving a 2-D transition-metal semiconductor," said Li. "In a follow-on study, we will work with theoreticians to more deeply understand the mechanism underlying this enhanced energy transfer and identify its applications in energy harvesting and bioelectronics."

Credit: 
DOE/Brookhaven National Laboratory

Scientists develop a chemocatalytic approach for one-pot reaction of cellulosic ethanol

image: One-pot production of cellulosic ethanol via tandem catalysis over multifunctional Mo/Pt/WOx catalyst

Image: 
WANG Aiqin

Scientists at the Dalian Institute of Chemical Physics (DICP) of the Chinese Academy of Sciences have developed a chemocatalytic approach to convert cellulose into ethanol in a one-pot process by using a multifunctional Mo/Pt/WOx catalyst. This approach opens up an alternative avenue for biofuel production. The findings were published in Joule.

Cellulosic ethanol is one of the most important biofuels, yet commercial production is hindered by the low efficiency and high cost of the bioconversion process.

Prof. WANG Aiqin, leader of the research group, and her colleagues developed a chemocatalytic process in which two separate reactions, cellulose conversion to ethylene glycol and ethylene glycol conversion to ethanol, were coupled in a one-pot reaction by using a multifunctional Mo/Pt/WOx catalyst, thus achieving an ethanol yield higher than 40%.

While noting that the new process can still be made more efficient, WANG said that "in principle" the new process can "overcome the intrinsic limitations on ethanol concentration imposed by the bioconversion process."

"With further improvement in catalyst efficiency and robustness, this one-pot chemocatalytic approach shows great potential in the practical production of cellulosic ethanol in the future," WANG said.

Credit: 
Chinese Academy of Sciences Headquarters

Gut microbes respond differently to foods with similar nutrition labels

Foods that look the same on nutrition labels can have vastly different effects on our microbiomes, report researchers in a paper publishing June 12 in the journal Cell Host & Microbe. The researchers' observations of participants' diets and stool samples over the course of 17 days suggested that the correlation between what we eat and what's happening with our gut microbes might not be as straightforward as we thought. This adds an increased level of complexity to research focused on improving health by manipulating the microbiome.

"Nutrition labels are human-centric," says senior author Dan Knights (@KnightsDan), of the Department of Computer Science and Engineering and the BioTechnology Institute at the University of Minnesota. "They don't provide much information about how the microbiome is going to change from day to day or person to person."

In the study, the investigators enrolled 34 participants to record everything they ate for 17 days. Stool samples were collected daily, and shotgun metagenomic sequencing was performed. This allowed the researchers to see at very high resolution how different people's microbiomes, as well as the enzymes and metabolic functions that they influence, were changing from day to day in response to what they ate. It provided a resource for analyzing the relationships between dietary changes and how the microbiome changes over time.

"We expected that by doing this dense sampling--where you could see what people were eating every single day and what's happening to their microbiome--we would be able to correlate dietary nutrients with specific strains of microbes, as well as account for the differences in microbiomes between people," Knights says. "But what we found were not the strong associations we expected. We had to scratch our heads and come up with a new approach for measuring and comparing the different foods."

What the researchers observed was a much closer correspondence between changes in the diet and the microbiome when they considered how foods were related to each other rather than only their nutritional content. For example, two different types of leafy greens like spinach and kale may have a similar influence on the microbiome, whereas another type of vegetable like carrots or tomatoes may have a very different impact, even if the conventional nutrient profiles are similar. The researchers developed a tree structure to relate foods to each other and share statistical information across closely related foods.
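
As a rough illustration of that idea, the sketch below encodes a tiny, entirely hypothetical food taxonomy and measures how far apart two foods sit in the tree; foods separated by fewer edges would be allowed to share more statistical information. The study's actual tree and statistical model are far richer, and every name here is invented for demonstration.

```python
# Hypothetical food taxonomy: child -> parent (invented for illustration).
food_parent = {
    "spinach": "leafy greens", "kale": "leafy greens",
    "carrots": "root vegetables", "tomatoes": "fruit vegetables",
    "leafy greens": "vegetables", "root vegetables": "vegetables",
    "fruit vegetables": "vegetables", "vegetables": "food",
}

def path_to_root(food):
    """List the nodes from a food up to the root of the taxonomy."""
    path = [food]
    while path[-1] in food_parent:
        path.append(food_parent[path[-1]])
    return path

def tree_distance(a, b):
    """Count the edges separating two foods via their lowest common ancestor."""
    pa, pb = path_to_root(a), path_to_root(b)
    shared = set(pa) & set(pb)
    return min(pa.index(x) + pb.index(x) for x in shared)

print(tree_distance("spinach", "kale"))     # 2: siblings under "leafy greens"
print(tree_distance("spinach", "carrots"))  # 4: related only via "vegetables"
```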

Two people in the study consumed nothing but Soylent, a meal replacement drink that is popular with people who work in technology. Although it was a very small sample, data from these participants showed variation in the microbiome from day to day, suggesting that a monotonous diet doesn't necessarily lead to a stable microbiome.

"The microbiome has been linked to a broad range of human conditions, including metabolic disorders, autoimmune diseases, and infections, so there is strong motivation to manipulate the microbiome with diet as a way to influence health," Knights concludes. "This study suggests that it's more complicated than just looking at dietary components like fiber and sugar. Much more research is needed before we can understand how the full range of nutrients in food affects how the microbiome responds to what we eat."

Credit: 
Cell Press

Binary solvent mixture boosts efficiency of polymer solar cells

image: The chemical structures of electron donor PBDB-T and fluorinated NFA INPIC-4F; the J-V characteristics of PBDB-T:INPIC-4F solar cells cast from different solvents; the AFM images of PBDB-T:INPIC-4F surfaces cast from CB and CB:CF.

Image: 
©Science China Press

Tremendous progress in organic solar cells (OSCs) has been exemplified by the use of non-fullerene electron acceptors (NFAs) in the past few years. Compared with fullerene derivative acceptors, NFAs show a multitude of advantages including tunable energy levels, broad absorption spectrum and strong light absorption ability, as well as high carrier mobility.

To further improve the efficiency of non-fullerene OSCs, fluorine (F) or chlorine (Cl) atoms have been introduced into the chemical structure of NFAs as an effective approach to modulate the HOMO and LUMO levels. With its small van der Waals radius and large electronegativity, the F atom improves the molecular planarity and aggregation tendency of NFAs, as well as increasing their crystallization ability.

However, the tendency of fluorinated NFAs to self-organize into crystals usually leads to excessive phase separation. This increases the film surface roughness, which enlarges charge recombination at the electrode interface, and, more importantly, reduces the bulk heterojunction interfaces within the photoactive layer; both effects lead to reduced power conversion efficiency.

Very recently, Professor Tao Wang's group at Wuhan University of Technology demonstrated an effective approach to tune the molecular organization of a fluorinated NFA (INPIC-4F), and its phase separation with the donor PBDB-T, by varying the casting solvent (CB, CF and their mixtures). When the high boiling-point solvent CB was employed as the casting solvent, INPIC-4F formed lamellar crystals that further grew into micron-scale spherulites, resulting in a low power conversion efficiency (PCE) of only 8.1%. When the low boiling-point solvent CF was used, the crystallization of INPIC-4F was suppressed, and the lower structural order led to a moderate PCE of 11.4%. By using a binary solvent mixture (CB:CF = 1.5:1, v/v), the efficiency of PBDB-T:INPIC-4F non-fullerene OSCs was improved to 13.1%. These results show the great promise of the binary solvent strategy for controlling molecular order and nanoscale morphology in high-efficiency non-fullerene solar cells.

Credit: 
Science China Press

First blood-brain barrier chip using stem cells developed by Ben-Gurion University researchers

image: In the new Ben-Gurion University of the Negev study, the researchers genetically manipulated blood cells collected from an individual into stem cells (known as induced pluripotent stem cells), which can produce any type of cell. The cells are placed on a microfluidic organ-chip developed by Emulate (above) approximately the size of an AA battery, which contains tiny hollow channels lined with tens of thousands of living human cells and tissues.

Image: 
Dr. Gad Vatine/BGU

BEER-SHEVA, Israel...June 12 - Researchers at Ben-Gurion University of the Negev (BGU) and Cedars-Sinai Medical Center in Los Angeles have, for the first time, duplicated a patient's blood-brain barrier (BBB), creating a human BBB chip with stem cells, which can be used to develop personalized medicine and new techniques to research brain disorders.

The new research, published in the journal Cell Stem Cell, is a collaboration between Dr. Gad Vatine of BGU's Regenerative Medicine and Stem Cell Research Center and Department of Physiology and Cell Biology and Dr. Clive N. Svendsen, of Cedars-Sinai Medical Center in Los Angeles.

The blood-brain barrier blocks toxins and other foreign substances in the bloodstream from entering brain tissue and causing damage. But it also can prevent therapeutic drugs from reaching the brain. Neurological disorders such as multiple sclerosis, epilepsy, Alzheimer's disease, and Huntington's disease, which collectively affect millions worldwide, have been linked to a defective blood-brain barrier.

In the study, the researchers genetically manipulated blood cells collected from an individual into stem cells (known as induced pluripotent stem cells), which can produce any type of cell. These are used to create the various cells that comprise the blood-brain barrier. The cells are placed on a microfluidic BBB organ-chip approximately the size of an AA battery, which contains tiny hollow channels lined with tens of thousands of living human cells and tissues. This living, micro-engineered environment recreates the natural physiology and mechanical forces that cells experience within the human body, including the BBB.

The living cells recreate a functioning BBB, including blocking entry of certain drugs. Significantly, when this blood-brain barrier was derived from cells of patients with Allan-Herndon-Dudley syndrome, a rare congenital neurological disorder, and Huntington's disease patients, the barrier malfunctioned in the same way that it does in patients with these diseases.

"By combining patient-specific stem cells and organ-on-chip technology, we generated a personalized model of the human BBB," says Dr. Vatine. "BBB-on-chips generated from several individuals allows the prediction of the best suited brain drug in a personalized manner. The study's findings create dramatic new possibilities for precision medicine."

This is of particular importance for neurological diseases like epilepsy or schizophrenia, for which several FDA-approved drugs are available, but current treatment selections are largely based on trial and error.

"By combining organ-chip technology and human iPSC-derived tissue, we have created a neurovascular unit that recapitulates complex BBB functions, provides a platform for modeling inheritable neurological disorders, and advances drug screening, as well as personalized medicine," Dr. Vatine says.

Credit: 
American Associates, Ben-Gurion University of the Negev

'Interdisciplinary research takes time'

Interdisciplinarity is becoming increasingly important in research. Yet there are structures in place that make careers in science more difficult for interdisciplinary researchers, according to Ruth Müller, Professor of Science and Technology Policy at the Technical University of Munich (TUM). In this interview, she talks about her study on a research center in Sweden and about how existing hurdles could be overcome and interdisciplinary research could be promoted in more sustainable ways.

It seems that new scientific institutions and research projects are all about "interdisciplinarity". Is it all hype?

It is not all hype, not at all. We are increasingly encountering issues that cannot be resolved using the methods of any one discipline. As a matter of fact, interdisciplinarity was already enabling major leaps forward even before it was intentionally promoted: After the Second World War, several physicists transferred to biology in the wake of the atomic bomb shock. This influx significantly contributed to the birth of molecular biology, as they applied their physics-based perspectives to biological research questions.

You studied an interdisciplinary research center in Sweden and used interviews to identify which obstacles researchers face when conducting interdisciplinary work. Has something gone fundamentally wrong at this center?

Not at all. It's a great research center with dedicated colleagues who do superb interdisciplinary work. But the study clearly demonstrates the complexity of interdisciplinary research and the specific challenges arising from it.

What exactly did you observe?

Well, for instance, after a while the institute's management came to the conclusion that - despite the institute's important contributions to addressing global challenges - its influence within the scientific community was not significant enough. The most important benchmark of successful research to date is often the number of publications in reputable journals. So this resulted in pressure to publish more articles in such journals. Since the most prestigious journals are often geared towards a traditionally disciplinary audience, this forced researchers to "discipline" their work to a certain extent in order to get published - not least because the number of such high-profile publications significantly influences researchers' success in attaining funding for new projects. Such pressures to become more disciplinary significantly affected the social and intellectual dynamics between the researchers at the center.

Are these fundamental problems that interdisciplinary research centers face?

There is little research into these issues so far. However, some studies indicate that researchers perceive the cost of working interdisciplinarily to be potentially very high - that it poses challenges to their career development, for instance. I have observed this, too: At the Swedish institute, I was told several times about an interdisciplinary PhD researcher whose research was highly valuable in terms of its contribution to addressing global challenges, but who found that at his thesis defense, his research was being assessed by an external examiner based on narrow "disciplinary" perspectives. For him and his supervisors, this raised the question as to how young interdisciplinary researchers can be prepared for an academic world that often still works along highly discipline-specific lines.

What do you think needs to change?

To date, evaluation systems are often based on a single criterion - and this is the number of high-profile publications. However, particularly when it comes to evaluating interdisciplinary research, it would be important to consider a range of evaluation criteria. Alongside publications, these might include research findings that lead to successful applications in society, or that result in actionable knowledge that empowers communities or society at large to tackle social and environmental challenges. To this end, we need well-trained reviewers, who are able to see the big picture and look beyond disciplinary confines. They should have a clear idea which mission an interdisciplinary project aims to accomplish and be able to evaluate its success using a variety of indicators. More reflective engagement with evaluation processes and specific trainings for the reviewers would be key to achieving these goals.

Apart from review processes, what else could be done to promote interdisciplinary research?

Pace is a very important factor: Interdisciplinary research takes time. If you want to develop something together, you first have to find a common language; immerse yourself in each other's way of thinking. In practical terms, one approach would be to allow more time for interdisciplinary theses from the start, for instance by funding interdisciplinary doctoral positions for four years instead of the usual three.

Credit: 
Technical University of Munich (TUM)