Tech

A safer, less expensive and faster-charging aqueous battery

image: An electric fan (top left) is powered by the proposed zinc battery; typical charge/discharge profiles of ZIBs at 0.5C (top right); in-situ microscope setup to image the zinc deposition dynamics (bottom left); and the morphology change caused by the zinc deposition (bottom right).

Image: 
University of Houston

Lithium-ion batteries are critical for modern life, from powering our laptops and cell phones to those new holiday toys. But there is a safety risk - the batteries can catch fire.

Zinc-based aqueous batteries avoid the fire hazard by using a water-based electrolyte instead of the conventional chemical solvent. However, uncontrolled dendrite growth limits their ability to provide the high performance and long life needed for practical applications.

Now researchers have reported in Nature Communications that a new 3D zinc-manganese nano-alloy anode has overcome the limitations, resulting in a stable, high-performance, dendrite-free aqueous battery using seawater as the electrolyte.

Xiaonan Shan, co-corresponding author for the work and an assistant professor of electrical and computer engineering at the University of Houston, said the discovery offers promise for energy storage and other applications, including electric vehicles.

"It provides a low-cost, high energy density, stable battery," he said. "It should be of use for reliable, rechargeable batteries."

Shan and UH PhD student Guangxia Feng also developed an in situ optical visualization technique, allowing them to directly observe the reaction dynamics on the anode in real time. "This platform provides us with the capability to directly image the electrode reaction dynamics in situ," Shan said. "This important information provides direct evidence and visualization of the reaction kinetics and helps us to understand phenomena that could not be easily accessed previously."

Testing determined that the novel 3D zinc-manganese nano-alloy anode remained stable without degrading throughout 1,000 hours of charge/discharge cycling under high current density (80 mA/cm²).

The anode is the electrode which releases current from a battery, while electrolytes are the medium through which the ionic charge flows between the cathode and anode. Using seawater as the electrolyte rather than highly purified water offers another avenue for lowering battery cost.

Traditional anode materials used in aqueous batteries have been prone to dendrites, tiny growths that can cause the battery to lose power. Shan and his colleagues proposed and demonstrated a strategy to efficiently minimize and suppress dendrite formation in aqueous systems by controlling surface reaction thermodynamics with a zinc alloy and reaction kinetics by a three-dimensional structure.

Shan said researchers at UH and University of Central Florida are currently investigating other metal alloys, in addition to the zinc-manganese alloy.

Credit: 
University of Houston

Researchers acquire 3D images with LED room lighting and a smartphone

image: Researchers developed a way to use overhead LED lighting and a smartphone to create 3D images of a small figurine.

Image: 
Emma Le Francois, University of Strathclyde

WASHINGTON -- As LEDs replace traditional lighting systems, they bring more smart capabilities to everyday lighting. While you might use your smartphone to dim LED lighting at home, researchers have taken this further by tapping into dynamically controlled LEDs to create a simple illumination system for 3D imaging.

"Current video surveillance systems such as the ones used for public transport rely on cameras that provide only 2D information," said Emma Le Francois, a doctoral student in the research group led by Martin Dawson, Johannes Herrnsdorf and Michael Strain at the University of Strathclyde in the UK. "Our new approach could be used to illuminate different indoor areas to allow better surveillance with 3D images, create a smart work area in a factory, or to give robots a more complete sense of their environment."

In The Optical Society (OSA) journal Optics Express, the researchers demonstrate that 3D optical imaging can be performed with a cell phone and LEDs without requiring any complex manual processes to synchronize the camera with the lighting.

"Deploying a smart-illumination system in an indoor area allows any camera in the room to use the light and retrieve the 3D information from the surrounding environment," said Le Francois. "LEDs are being explored for a variety of different applications, such as optical communication, visible light positioning and imaging. One day the LED smart-lighting system used for lighting an indoor area might be used for all of these applications at the same time."

Illuminating from above

Human vision relies on the brain to reconstruct depth information when we view a scene from two slightly different directions with our two eyes. Depth information can also be acquired using a method called photometric stereo imaging in which one detector, or camera, is combined with illumination that comes from multiple directions. This lighting setup allows images to be recorded with different shadowing, which can then be used to reconstruct a 3D image.

Photometric stereo imaging traditionally requires four light sources, such as LEDs, which are deployed symmetrically around the viewing axis of a camera. In the new work, the researchers show that 3D images can also be reconstructed when objects are illuminated from the top down but imaged from the side. This setup allows overhead room lighting to be used for illumination.
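For readers who want a concrete sense of the underlying math, the sketch below shows classical Lambertian photometric stereo, in which per-pixel surface normals are recovered from an image stack by least squares. It is a generic textbook formulation, not the Strathclyde group's pipeline; the lighting directions, image sizes and function names are placeholders.

```python
import numpy as np

# Minimal Lambertian photometric stereo: recover surface normals from
# images taken under known, distinct lighting directions.
# L: (k, 3) unit lighting directions; I: (k, h, w) image stack.
def photometric_stereo(L, I):
    k, h, w = I.shape
    obs = I.reshape(k, -1)                  # one column per pixel
    # Solve L @ G = obs in the least-squares sense; G = albedo * normal.
    G, *_ = np.linalg.lstsq(L, obs, rcond=None)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)  # unit normals per pixel
    return albedo.reshape(h, w), normals.reshape(3, h, w)

# Toy usage with four synthetic lights and random stand-in frames.
if __name__ == "__main__":
    L = np.array([[0, 0, 1], [1, 0, 1], [-1, 0, 1], [0, 1, 1]], dtype=float)
    L /= np.linalg.norm(L, axis=1, keepdims=True)
    I = np.random.rand(4, 64, 64)           # placeholder for captured frames
    albedo, normals = photometric_stereo(L, I)
```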

In work supported under the UK's EPSRC 'Quantic' research program, the researchers developed algorithms that modulate each LED in a unique way. This acts like a fingerprint that allows the camera to determine which LED generated which image to facilitate the 3D reconstruction. The new modulation approach also carries its own clock signal so that the image acquisition can be self-synchronized with the LEDs by simply using the camera to passively detect the LED clock signal.

"We wanted to make photometric stereo imaging more easily deployable by removing the link between the light sources and the camera," said Le Francois. "To our knowledge, we are the first to demonstrate a top-down illumination system with a side image acquisition where the modulation of the light is self-synchronized with the camera."

3D imaging with a smartphone

To demonstrate this new approach, the researchers used their modulation scheme with a photometric stereo setup based on commercially available LEDs. A simple Arduino board provided the electronic control for the LEDs. Images were captured using the high-speed video mode of a smartphone. They imaged a 48-millimeter-tall figurine that they 3D printed with a matte material to avoid any shiny surfaces that might complicate imaging.

After identifying the best position for the LEDs and the smartphone, the researchers achieved a reconstruction error of just 2.6 millimeters for the figurine when imaged from 42 centimeters away. This level of error shows that the quality of the reconstruction was comparable to that of other photometric stereo imaging approaches. They were also able to reconstruct images of a moving object and showed that the method is not affected by ambient light.

In the current system, the image reconstruction takes a few minutes on a laptop. To make the system practical, the researchers are working to decrease the computational time to just a few seconds by incorporating a deep-learning neural network that would learn to reconstruct the shape of the object from the raw image data.

Credit: 
Optica

Laser harmony

image: The laser constructed by the team of Dr. Stepanenko can be tuned in a similar way to tuning a radio to catch your favorite station. Only with femtosecond precision. PhD student Cássia Corso Silva from the Institute of Physical Chemistry of the Polish Academy of Sciences posed for the photo; photo: Grzegorz Krzyżewski.

Image: 
IPC PAS/Grzegorz Krzyzewski

Would you like to capture a chemical transformation inside a cell live? Or maybe revolutionize microchips' production by printing paths in a layer that has a thickness of just 100 nanometers? These and many other goals can now be achieved with the latest femtosecond laser created by a team of scientists led by Dr. Yuriy Stepanenko.

These days, there is a multitude of laser light sources. Each has its own characteristics and different applications, such as observing stars, treating illnesses, and surface micro-machining. "Our goal is to develop new ones," says Yuriy Stepanenko, head of the team of Ultrafast Laser Techniques at the Institute of Physical Chemistry of the Polish Academy of Sciences. "We deal with sources that produce ultrashort pulses of light. Really very, very short - femtosecond pulses (one femtosecond is a quadrillionth of a second). This is the scale on which, for example, intracellular chemical reactions take place. To see them, we have to 'take a photo' in this very short time. And thanks to the new laser, we can do just that."

"We can also use our source for the very precise removal of materials from various surfaces without destroying them," says the scientist. "We could, for example, clean the Mona Lisa using this method without damaging the layers of paint. We would only remove dust and dirt, a layer about 10 nanometers thick," explains Dr. Stepanenko, one of the authors of a study recently published in the Journal of Lightwave Technology.

"But for this sort of job, our laser is even rather too precise," notes Dr. Bernard Piechal, co-author of the publication. "For this, you only need nanosecond pulses, i.e. pulses lasting a thousand times longer. The latter, however, would not be able to, for instance, draw paths of precisely planned depths in ultra-thin materials, e.g. removing gold sprayed on microchips with a precise adjustment of the thickness of the layer being removed. But our laser can do this! It can also make holes in tempered glass or ultra-thin silicon plates. In these conditions, a nanosecond laser would either melt the silicon or "smash" the glass because it produces too much heat. Too much energy is concentrated locally in a very small area. Ours works firmly but gently," grins Dr. Stepanenko.

How was this effect achieved?

"We wanted our source to meet two conditions: it was to be susceptible to mechanical disturbance to the least possible extent, and it was to be mobile," explains Dr. Piechal. "We did not want to create a huge, stationary structure."

Fibre-optic lasers came to the rescue of the team. "This sort of laser is basically an optical fibre enclosed in a ring. The laser pulse runs inside it without being exposed to mechanical disturbances. The optical fibre can be touched, moved, even shaken without compromising the stability of the pulse. Of course, if the light only ran round in a circle like this, it would be useless, so part of the pulse is directed outside the loop at one point in the form of useful flashes," explains Dr. Stepanenko, with a smile.

Here we come to another important parameter of this sort of pulsed laser: the frequency with which the pulses appear at the output. In conventional designs, this frequency depends on the length of the fibre-optic loop in which the pulse travels. Its practical length is several dozen metres. Which is quite a lot, isn't it? What if we wanted flashes of light to appear as often as possible? This can be done by reducing the circumference of the ring through which the pulse travels, but that approach has its limits. "In our lasers, the smallest loop gives pulses every 60 nanoseconds, which is still too slow for our needs," explains the researcher. How can this frequency be increased? This is where the new invention of the team from the IPC PAS comes in: a system that allows the basic frequency to be multiplied, much as harmonics are created on the fundamental of a guitar string.

"We use so-called Harmonic Mode Locking," explains Dr. Stepanenko. "What is innovative in our design is that we are able to switch this repetition rate in a controlled way and select only one of the possible harmonics, the particular one we need. You could say that we are like a guitarist: on an open string, i.e. our loop of the fibre, we obtain a specific frequency resulting from its length. When we put a finger exactly in the middle of the string, we get the so-called second harmonic; the pitch increases by an octave and the vibration frequency doubles. If we put the finger at one third of the length of the string, we get a frequency three times higher than on the open string. In our case, we increase the frequency of the pulses by turning a knob. We can only do it in steps, each time getting another harmonic, just as the harmonics on a guitar change in steps, but the range is quite large: we can change our light harmonics from 2 up to 19 times the basic frequency, i.e. reach a pulse frequency of just over 300 MHz."

It is extremely important that the obtained frequencies are stable and can be precisely distinguished. If we choose a harmonic, all the others will be so damped that their "volume" will be about 10 million times lower than that of the chosen one. You could say that we are generating a pure sound and eliminating all the background noise. In addition, the higher the frequency, the better it is defined. "We are the first to have managed to do this so well," says the researcher proudly.
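The quoted figures can be sanity-checked with a quick back-of-envelope calculation (taking the factor of ten million as a power ratio):

```latex
f_1 = \frac{1}{T_\text{loop}} = \frac{1}{60\ \text{ns}} \approx 16.7\ \text{MHz},
\qquad
f_n = n\, f_1 \;\Rightarrow\; f_{19} \approx 19 \times 16.7\ \text{MHz} \approx 317\ \text{MHz},
\qquad
10\log_{10}\!\left(10^{7}\right) = 70\ \text{dB of suppression}.
```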

Now it remains for the invention to make its way into industrial applications. Perhaps it will mean even thinner and lighter laptops, or a better understanding of what is happening inside the human body.

Credit: 
Institute of Physical Chemistry of the Polish Academy of Sciences

BAME parliamentary candidates not picked to fight 'winnable seats' in areas with less tolerance towards minorities

The study found a "systematic and quantifiable pattern" of political party officers opting against fielding minority candidates where they perceive that their non-white appearance might prevent a win. This includes constituencies already held by the party, and those within reach, requiring just a small swing in the vote to change hands.

Dr Patrick English, from the University of Exeter, who carried out the research, said: "This combination of public opinion and party strategy is one of the most significant blockages to electing parliaments which fully reflect the ethnic diversity of their populations, and works in tandem with and drives other exclusionary forces.

"Though much research has focused on potential voter discrimination, focusing on the behaviour of voters alone misses any discrimination which might occur prior to elections."

There would need to be around 95 non-white MPs to 'reflect' the size of the minority-ethnic British population - estimated at around 14.5 per cent in England and Wales, according to the most recent Annual Population Survey results. As it stands, only 65 MPs currently fit this profile.
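The figure of roughly 95 follows from applying that population share to the 650 seats in the House of Commons:

```latex
0.145 \times 650 \approx 94
```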

Traditionally, Labour and Conservative candidate selection for British General Elections is handled by local branches and organisations of political parties, with applicants approved and vetted through national processes.

Dr English used regression analysis to show the relationship between election candidates and public opinion since 1997, using a specially created database. Candidate ethnic minority status was determined using online visual information, including social media, candidate pages and news articles. A seat was considered winnable at a given election if the party already held it or if less than a 5 per cent swing was required for it to change hands.

Local public opinion on diversity was measured by combining questions about race and immigration from five different sources: the European Social Survey, the British Election Study, the British Social Attitudes survey, and the European and World Values studies.

There was a rise in the total number of ethnic minority candidates fielded in British General Elections over the study period, with a particularly sizeable jump between 2010 and 2015. The Labour Party provided more opportunities than their Conservative counterparts in four out of seven of the elections in the study, including in 2017 and 2019.

London was far ahead of all other regions in terms of the percentage of seats there 'opened up' for minority representation. Opportunities were also relatively high in the South East and the West Midlands. There were very few "winnable seat" opportunities given to minority candidates in Wales, the North East, the South West, or the East Midlands.

Attitudes in London and Scotland towards immigration were more positive than other regions, while - with the exception of recent years - attitudes in regions such as Wales and the North East are much less positive on average. Some regions did not see such a dramatic change in recent years as others, with Yorkshire and the Humber, the South West, and the North West remaining fairly low on the scale compared to others such as the South East and the West Midlands in later years.

Dr English said: "Sadly this analysis suggests opportunities for BAME candidates wanting to stand for parliament are not equal, and processes for selecting candidates do not treat everyone fairly. This means electoral success is too often biased away from too many.

"Political parties are charged with being 'gatekeepers' to representation, and while they ultimately provide the vast majority of representational opportunities, they can also create punitive pressures on prospective candidates from 'non-traditional' backgrounds seeking to become representatives."

Credit: 
University of Exeter

Rice model offers help for new hips

image: Rice University engineers have designed a computational model that will ultimately serve as the engine to predict how long a hip implant could last for a specific patient. It incorporates fluid dynamics and the physics of implant wear and aims to streamline trial-and-error in the design of future implants.

Image: 
Wikipedia

HOUSTON - (Jan. 11, 2021) - Rice University engineers hope to make life better for those with replacement joints by modeling how artificial hips are likely to rub them the wrong way.

The computational study by the Brown School of Engineering lab of mechanical engineer Fred Higgs simulates and tracks how hips evolve, uniquely incorporating fluid dynamics and roughness of the joint surfaces as well as factors clinicians typically use to predict how well implants will stand up over their expected 15-year lifetime.

The team's immediate goal is to advance the design of more robust prostheses.

Ultimately, they say the model could help clinicians personalize hip joints for patients depending on gender, weight, age and gait variations.

Higgs and co-lead authors Nia Christian, a Rice graduate student, and Gagan Srivastava, a mechanical engineering lecturer at Rice and now a research scientist at Dow Chemical, reported their results in Biotribology.

The researchers saw a need to look beyond the limitations of earlier mechanical studies and standard clinical practices that use simple walking as a baseline to evaluate artificial hips without incorporating higher-impact activities.

"When we talk to surgeons, they tell us a lot of their decisions are based on their wealth of experience," Christian said. "But some have expressed a desire for better diagnostic tools to predict how long an implant is going to last.

"Fifteen years sounds like a long time but if you need to put an artificial hip into someone who's young and active, you want it to last longer so they don't have multiple surgeries," she said.

Higgs' Particle Flow and Tribology Lab was invited by Rice mechanical engineer and bioengineer B.J. Fregly to collaborate on his work to model human motion to improve life for patients with neurologic and orthopedic impairments.

"He wanted to know if we could predict how long their best candidate hip joints would last," said Higgs, Rice's John and Ann Doerr Professor in Mechanical Engineering and a joint professor of Bioengineering, whose own father's knee replacement partially inspired the study. "So our model uses walking motion of real patients."

Physical simulators need to run millions of cycles to predict wear and failure points, and can take months to get results. Higgs' model seeks to speed up and simplify the process by analyzing real motion capture data like that produced by the Fregly lab along with data from "instrumented" hip implants studied by Georg Bergmann at the Free University of Berlin.

The new study incorporates the four distinct modes of physics -- contact mechanics, fluid dynamics, wear and particle dynamics -- at play in hip motion. No previous studies considered all four simultaneously, according to the researchers.
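The release does not spell out the governing equations, but as a rough illustration of the wear ingredient alone, a classic Archard-type relation (a textbook starting point, not necessarily the formulation used in the Rice model) ties the volume of material removed to load and sliding distance:

```latex
V = k \,\frac{F_N\, s}{H}
```

where V is the worn volume, k a dimensionless wear coefficient, F_N the load pressing the surfaces together, s the sliding distance accumulated over gait cycles, and H the hardness of the softer surface (here the polyethylene cup).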

One issue others didn't consider was the changing makeup of the lubricant between bones. Natural joints contain synovial fluid, an extracellular liquid with a consistency similar to egg whites and secreted by the synovial membrane, connective tissue that lines the joint. When a hip is replaced, the membrane is preserved and continues to express the fluid.

"In healthy natural joints, the fluid generates enough pressure so that you don't have contact, so we all walk without pain," Higgs said. "But an artificial hip joint generally undergoes partial contact, which increasingly wears and deteriorates your implanted joint over time. We call this kind of rubbing mixed lubrication."

That rubbing can lead to increased generation of wear debris, especially from the plastic material -- an ultrahigh molecular weight polyethylene -- commonly used as the socket (the acetabular cup) in artificial joints. These particles, estimated at up to 5 microns in size, mix with the synovial fluid and can sometimes escape the joint.

"Eventually, they can loosen the implant or cause the surrounding tissue to break down," Christian said. "And they often get carried to other parts of the body, where they can cause osteolysis. There's a lot of debate over where they end up but you want to avoid having them irritate the rest of your body."

She noted the use of metal sockets rather than plastic is a topic of interest. "There's been a strong push toward metal-on-metal hips because metal is durable," Christian said. "But some of these cause metal shavings to break off. As they build up over time, they seem to be much more damaging than polyethylene particles."

Further inspiration for the new study came from two previous works by Higgs and colleagues that had nothing to do with bioengineering. The first looked at chemical mechanical polishing of semiconductor wafers used in integrated circuit manufacturing. The second pushed their predictive modeling from micro-scale to full wafer-scale interfaces.

The researchers noted future iterations of the model will incorporate more novel materials being used in joint replacement.

Credit: 
Rice University

UVA-led team expands power grid planning to improve system resilience

In most animal species, if a major artery is cut off from the heart, the animal will struggle to survive. The same can be said for many of our critical infrastructure systems, such as electric power, water and communications. They are networked systems with vulnerable connections.

This vulnerability was on display in September 2017 when Hurricane Maria wrecked Puerto Rico's electric power grid, leaving almost all of the island's 3.3 million people without electricity. The months-long blackout that followed was the worst in U.S. history.

Claire Trevisan, then a civil and environmental engineering undergraduate student in the Department of Engineering Systems and Environment at the University of Virginia, took note of Puerto Rico's plight. She asked her fourth-year capstone advisor, professor and associate director of the UVA Environmental Resilience Institute Andres Clarens, if they could use her project to study the problem.

Trevisan's capstone became the impetus behind a critical improvement to the energy system optimization models engineers use to plan infrastructure: Integrating impacts of future hurricanes into decisions about how grids are designed. An interdisciplinary team led by UVA Ph.D. student Jeffrey Bennett and including collaborators from North Carolina State University and the University of Puerto Rico at Mayagüez just published the research in the journal Nature Energy.

Their research demonstrates that modernizing power grids by using more renewable energy sources distributed across the landscape will cost less than repairing hurricane damage to a centralized grid.

Optimization models analyze data to find the cheapest way to deliver power under a set of constraints. Established models already account for costs related to construction, fuels, emissions and resilience -- meaning the system's ability to recover if something disrupts operation -- but the costs of predictable damage from events such as hurricanes, wildfires and floods are not built into existing models.

"In the past, people didn't know how often hurricanes would hit and what kind of damage they would do," Clarens said. "Now we do know those things, and we have the computing power to actually run simulations to say, 'Okay, if we build it this way, how much more is it going to cost your electricity customers?'"

How the grid is configured, or its topology, is integral to the team's research. When the United States was electrified a century ago, the most efficient way to deliver power to customers involved centralized generation plants feeding electricity over a huge network. That was true even with the risk of widespread outages when a main power station or transmission artery was disrupted.

The researchers wanted to examine what happens as grids are gradually redesigned to support more renewable energy sources, which is already underway in much of the country with solar and wind generation. The model can identify the combinations of generation sources that make the most economical sense when you anticipate the cost of hurricane repairs.

"The ability to do this is important because the frequency and severity of storms are increasing as a result of climate change," Clarens said.

Puerto Rico is a good case study to apply the model. The island has been in the path of 13 named storms over the past 25 years. The existing grid architecture remains outdated today, and the system relies on imported fossil fuels, making power expensive. On the other hand, Puerto Rico has abundant solar and wind resources.

One problem in planning, Bennett said, is that government policies such as emissions controls and market conditions -- including decreasing costs of wind and solar energy production and storage -- can create "stranded assets," expensive, built-to-last power plants that end up being retired early because they're no longer economical to run.

Given the large number of policy- and weather-related combinations that could happen in the future, the team needed to run the model on UVA's supercomputer, Rivanna.

"In our study, we simulate the likelihood and intensity of a storm hitting the grid in each five-year time step. The hurricane intensity is used to predict the wind speed and project damage to the electric grid infrastructure," Bennett said.

"The system then builds new infrastructure to be able to meet electricity demand. By considering combinations of hurricane intensities and probabilities, we are then able to project average electricity costs and examine how infrastructure investments vary. Our results show that hurricanes increase electricity costs by 32% based on historical hurricane trends, and more if you consider that storms are increasing in frequency and severity as a result of climate change. Transitioning to renewables and natural gas reduces costs and emissions regardless of hurricane frequency."

Although the research addresses wind damage to the electric power grid brought on by hurricanes, the team's approach can be applied to other weather- and climate-related disasters, Clarens said.

"With this approach to grid decision-making, you can also look at cases like wildfires in the American West and floods in the Midwest," he said. "There are changes to the climate that are impacting our engineered systems. We're trying to develop new tools and new insights that can help us to say, 'Look, the past is not a good model for the future anymore. We need new ways to simulate the future so that we can make the best decisions possible.'"

Credit: 
University of Virginia School of Engineering and Applied Science

Carbon monoxide reduced to valuable liquid fuels

image: An electron microscope image shows copper nanocubes used by Rice University engineers to catalyze the transformation of carbon monoxide into acetic acid.

Image: 
Wang Group/Senftle Group/Rice University

HOUSTON - (Jan. 11, 2021) - A sweet new process is making sour more practical.

Rice University engineers are turning carbon monoxide directly into acetic acid -- the widely used chemical agent that gives vinegar its tang -- with a continuous catalytic reactor that can use renewable electricity efficiently to turn out a highly purified product.

The electrochemical process by the labs of chemical and biomolecular engineers Haotian Wang and Thomas Senftle of Rice's Brown School of Engineering resolves issues with previous attempts to reduce carbon monoxide (CO) into acetic acid. Those processes required additional steps to purify the product.

The environmentally friendly reactor uses nanoscale cubes of copper as the primary catalyst along with a unique solid-state electrolyte.

In 150 hours of continuous lab operation, the device produced a solution that was up to 2% acetic acid in water. The acid component was up to 98% pure, far better than that produced through earlier attempts to catalyze CO into liquid fuel.

Details appear in the Proceedings of the National Academy of Sciences.

Along with vinegar and other foods, acetic acid is used as an antiseptic in medical applications; as a solvent for ink, paint and coatings; and in the production of vinyl acetate, a precursor to common white glue.

The Rice process builds upon the Wang lab's reactor to produce formic acid from carbon dioxide (CO2). That research established an important foundation for Wang, recently named a Packard Fellow, to win a $2 million National Science Foundation (NSF) grant to continue exploring the conversion of greenhouse gases into liquid fuels.

"We're upgrading the product from a one-carbon chemical, the formic acid, to two-carbon, which is more challenging," Wang said. "People traditionally produce acetic acid in liquid electrolytes, but they still have the issue of low performance as well as separating the product from the electrolyte."

"Acetic acid is typically not synthesized, of course, from CO or CO2," Senftle added. "That's the key here: We're taking waste gases we want to mitigate and turning them into a useful product."

It took a careful coupling between the copper catalyst and solid electrolyte, the latter carried over from the formic acid reactor. "Sometimes copper will produce chemicals along two different pathways," Wang said. "It can reduce CO into acetic acid and alcohols. We engineered copper cubes dominated by one facet that can help this carbon-carbon coupling, with edges that direct the carbon-carbon coupling towards acetic acid instead of other products."

Computational models by Senftle and his team helped refine the cubes' form factor. "We were able to show there are types of edge on the cube, basically more corrugated surfaces, that facilitate breaking certain C-O bonds that steer the products one way or the other," he said. "Having more edge sites favors breaking the right bonds at the right time."

Senftle said the project was a great demonstration of how theory and experiment should mesh. "It's a nice example of engineering on many levels, from integration of the components in a reactor all the way down to the mechanism at the atomistic level," he said. "It fits with the themes of molecular nanotechnology, showing how we can scale it up to real-world devices."

The next step in development of a scalable system is to improve upon the system's stability and further reduce the amount of energy the process requires, Wang said.

Rice graduate students Peng Zhu and Chun-Yen Liu and Chuan Xia, the J. Evans Attwell-Welch Postdoctoral Fellow, are co-lead authors of the paper. Co-authors are Rice research scientist Guanhui Gao, postdoctoral researcher Xiao Zhang and graduate student Yang Xia; former Rice postdoctoral researcher Kun Jiang of Shanghai Jiao Tong University, China; and graduate student Yongjiu Lei and Husam Alshareef, a professor of material science and engineering, at King Abdullah University of Science and Technology, Saudi Arabia. Wang and Senftle are assistant professors of chemical and biomolecular engineering.

The NSF and the CIFAR Azrieli Global Scholars Program supported the research.

Credit: 
Rice University

Link between driver of ovarian cancer and metabolism opens up new therapeutic strategies

image: Ovarian cancer cells

Image: 
The Wistar Institute

PHILADELPHIA -- (Jan. 11, 2021) -- Mutations that inactivate the ARID1A gene in ovarian cancer increase utilization of the amino acid glutamine, making cancer cells dependent on glutamine metabolism, according to a study by The Wistar Institute published online in Nature Cancer. Researchers also showed that pharmacologic inhibition of glutamine metabolism may represent an effective therapeutic strategy for ARID1A-mutant ovarian cancer.

Up to 60% of ovarian clear cell carcinomas (OCCC) have inactivating mutations in the ARID1A tumor suppressor gene. These mutations are known genetic drivers of this type of cancer, which typically does not respond to chemotherapy and carries the worst prognosis among all subtypes of ovarian cancer.

The laboratory of Rugang Zhang, Ph.D., deputy director of The Wistar Institute Cancer Center, professor and leader of the Immunology, Microenvironment & Metastasis Program, studies the effects of ARID1A inactivation to devise new mechanism-guided therapeutic strategies and combination approaches to enhance immunotherapy for ovarian cancer.

"Metabolic reprogramming is a hallmark of many cancers, including OCCC, so in this study we assessed whether ARID1A plays a role in regulation of metabolism," said Zhang, corresponding author on the paper. "We found that its inactivation in cancer cells creates a specific metabolic requirement for glutamine and exposed this as a vulnerability that could be exploited for therapeutic purposes."

The authors inactivated ARID1A in wild type ovarian cancer cells and observed increased glutamine consumption. Glutamine is normally required for cancer cells to grow, but Zhang and colleagues unveiled a stronger dependence of ARID1A-mutant cells on this amino acid, which significantly enhanced the growth suppression induced by glutamine deprivation.

ARID1A is part of a protein complex called SWI/SNF that modulates gene expression. The authors investigated the transcriptional effect of ARID1A inactivation and found that GLS1, which encodes for the glutaminase enzyme, was the top upregulated gene among those controlling glutamine metabolism. Accordingly, GLS1 was expressed at significantly higher levels in tumor samples from patients with other cancer types that also carry mutations in the SWI/SNF complex.

The team evaluated the therapeutic potential of inhibiting glutamine metabolism by blocking the glutaminase enzyme with the inhibitor CB-839, a molecule under investigation in clinical trials that has been reported to be well tolerated as a single agent and in combination with other anticancer therapies.

When tested in vivo on OCCC mouse models, CB-839 significantly reduced tumor burden and prolonged survival. These studies were expanded to mice carrying patient-derived tumor transplants, confirming that CB-839 impaired the growth of ARID1A-mutant but not ARID1A-wildtype tumors.

Researchers also combined CB-839 with anti-PDL1 treatment and revealed a synergy between glutaminase inhibitors and immune checkpoint blockade in suppressing the growth of ARID1A-mutant OCCC tumors.

"Our findings suggest that glutaminase inhibitors warrant further studies as a standalone or combinatorial therapeutic intervention for OCCC, for which effective options are very limited," said Shuai Wu, Ph.D., first author of the study and a staff scientist in the Zhang Lab.

Glutaminase inhibitors could become a new strategy to precisely target a specific vulnerability of OCCC cells associated with loss of ARID1A function.

Credit: 
The Wistar Institute

Study shows tweaking one layer of atoms on a catalyst's surface can make it work better

image: An illustration combines two possible types of surface layers for a catalyst that performs the water-splitting reaction, the first step in making hydrogen fuel. The gray surface, top, is lanthanum oxide. The colorful surface is nickel oxide. A study led by researchers at SLAC National Accelerator Laboratory and Stanford University discovered that a rearrangement of atoms in the nickel oxide surface while carrying out the reaction made it twice as efficient, a phenomenon they hope to harness to design better catalysts. Lanthanum atoms are depicted in green, nickel in blue and oxygen in red.

Image: 
CUBE3D Graphic

Scientists crafting a nickel-based catalyst used in making hydrogen fuel built it one atomic layer at a time to gain full control over its chemical properties. But the finished material didn't behave as they expected: As one version of the catalyst went about its work, the top-most layer of atoms rearranged to form a new pattern, as if the square tiles that cover a floor had suddenly changed to hexagons.

But that's ok, they reported today, because understanding and controlling this surprising transformation gives them a new way to turn catalytic activity on and off and make good catalysts even better.

The research team, led by scientists from Stanford University and the Department of Energy's SLAC National Accelerator Laboratory, described their study in Nature Materials today.

"Catalysts can change very quickly during the course of a reaction, and understanding how they transform from an inactive phase to an active one is crucial to designing more efficient catalysts," said Will Chueh, an investigator with the Stanford Institute for Materials and Energy Sciences (SIMES) at SLAC who led the study. "This transformation gives us the equivalent of a knob we can turn to fine-tune their behavior."

Splitting water to make hydrogen fuel

Catalysts help molecules react without being consumed in the reaction, so they can be used over and over. They're the backbone of many green-energy devices.

This particular catalyst, lanthanum nickel oxide or LNO, is used to split water into hydrogen and oxygen in a reaction powered by electricity. It's the first step in generating hydrogen fuel, which has enormous potential for storing renewable energy from sunlight and other sources in a liquid form that's energy-rich and easy to transport. In fact, several manufacturers have already produced electric cars powered by hydrogen fuel cells.

But this first step is also the most difficult one, said Michal Bajdich, a theorist at the SUNCAT Center for Interface Science and Catalysis at SLAC, and researchers have been searching for inexpensive materials that will carry it out more efficiently.

Since reactions take place on a catalyst's surface, researchers have been trying to precisely engineer those surfaces so they promote only one specific chemical reaction with high efficiency.

Building materials one atomic layer at a time

The LNO investigated in this study belongs to a class of promising catalytic materials known as perovskites, named after a natural mineral with a similar atomic structure.

Christoph Baeumer, who came to SLAC as a Marie Curie Fellow from Aachen University in Germany to carry out the study, prepared LNO in what's known as an epitaxial thin film - a film grown in atomically thin layers in a way that creates an extraordinarily precise arrangement of atoms.

Dividing his time between California and Germany, Baeumer made two versions of the film at different temperatures - one with a nickel-rich surface and another with a lanthanum-rich surface. Then the research team ran both versions through the water-splitting reaction to compare how well they performed.

"We were surprised to discover that the films with nickel-rich surfaces carried out the reaction twice as fast," Baeumer said.

Tuning a catalyst's surface for better performance

To find out why, the team took the films to DOE's Lawrence Berkeley National Laboratory, where a group led by Slavomir Nemsak looked at their atomic structure with X-rays at the Advanced Light Source.

"It was surprising that the difference between the 'good' and the 'bad' catalyst was only in the last atomic layer of the films," Nemsak said. Those investigations also revealed that in films with nickel-rich surface layers that were prepared at cooler temperatures, the top layer of atoms transformed at some point during the water-splitting reaction, and this new arrangement boosted the catalytic activity.

Meanwhile, Jiang Li, a postdoctoral researcher and theorist at SUNCAT, performed computational studies of this very complex system using Berkeley Lab's National Energy Research Scientific Computing Center (NERSC). His conclusions agreed with the experimental results, predicting that the version of the catalyst with the transformed surface - from a cubic pattern to a hexagonal one - would be the most active and stable one.

Bajdich said, "Is the transformation of the nickel-rich surface driven by the way the catalyst is prepared, or by changes it undergoes while it carries out the water-splitting reaction? That's very hard to answer. It looks like both have to occur."

Although this particular catalyst is not the best in the world for splitting water into hydrogen and oxygen, he said, discovering how a surface transformation boosts its activity is important and could potentially apply to other materials too.

"If we can unlock the secrets of this transformation so we can accurately tune it," he said, "then we can leverage this phenomenon to make much better catalysts in the future."

Credit: 
DOE/SLAC National Accelerator Laboratory

2D compound shows unique versatility

image: Rice University materials theorists show how a unique two-dimensional compound of antimony and indium selenide can have distinct properties on each side, depending on polarization by an external electric field, with possible applications in solar energy and quantum computing. The figure indicates that two states for nonvolatile memory devices can be flipped by the polarization of the ferroelectric layer.

Image: 
Illustration by Jun-Jie Zhang/Rice University

HOUSTON - (Jan. 11, 2021) - An atypical two-dimensional sandwich has the tasty part on the outside for scientists and engineers developing multifunctional nanodevices.

An atom-thin layer of the semiconductor antimony paired with ferroelectric indium selenide displays distinct properties on each side, depending on how it is polarized by an external electric field.

The field could be used to stabilize indium selenide's polarization, a long-sought property that tends to be wrecked by internal fields in materials like perovskites but would be highly useful for solar energy applications.

Calculations by Rice materials theorist Boris Yakobson, lead author and researcher Jun-Jie Zhang and graduate student Dongyang Zhu show that switching the material's polarization with an external electric field makes it either a simple insulator with a band gap suitable for visible light absorption or a topological insulator, a material that only conducts electrons along its surface.

Turning the field inward would make the material good for solar panels. Turning it outward could make it useful as a spintronic device for quantum computing.

The lab's study appears in the American Chemical Society journal Nano Letters.

"The ability to switch at will the material's electronic band structure is a very attractive knob," Yakobson said. "The strong coupling between ferroelectric state and topological order can help: the applied voltage switches the topology through the ferroelectric polarization, which serves as an intermediary. This provides a new paradigm for device engineering and control."

Weakly bound by the van der Waals force, the layers change their physical configuration when exposed to an electric field. That changes the compound's band gap, and the change is not trivial, Zhang said.

"The central selenium atoms shift along with switching ferroelectric polarization," he said. "This kind of switching in indium selenide has been observed in recent experiments."

Unlike other structures proposed and ultimately made by experimentalists -- boron buckyballs are a good example -- the switching material may be relatively simple to make, according to the researchers.

"As opposed to typical bulk solids, easy exfoliation of van der Waals crystals along the low surface energy plane realistically allows their reassembly into heterobilayers, opening new possibilities like the one we discovered here," Zhang said.

Credit: 
Rice University

University at Buffalo researchers report quantum-limit-approaching chemical sensing chip

image: The chip may also have uses in food safety monitoring, anti-counterfeiting and other fields where trace chemicals are analyzed.

Image: 
Huaxiu Chen, University at Buffalo.

BUFFALO, N.Y. -- University at Buffalo researchers are reporting an advancement of a chemical sensing chip that could lead to handheld devices that detect trace chemicals -- everything from illicit drugs to pollution -- as quickly as a breathalyzer identifies alcohol.

The chip, which also may have uses in food safety monitoring, anti-counterfeiting and other fields where trace chemicals are analyzed, is described in a study that appears on the cover of the Dec. 17 edition of the journal Advanced Optical Materials.

"There is a great need for portable and cost-effective chemical sensors in many areas, especially drug abuse," says the study's lead author Qiaoqiang Gan, PhD, professor of electrical engineering in the UB School of Engineering and Applied Sciences.

The work builds upon previous research Gan's lab led that involved creating a chip that traps light at the edges of gold and silver nanoparticles.

When biological or chemical molecules land on the chip's surface, some of the captured light interacts with the molecules and is "scattered" into light of new energies. This effect occurs in recognizable patterns that act as fingerprints of chemical or biological molecules, revealing information about what compounds are present.

Because all chemicals have unique light-scattering signatures, the technology could eventually be integrated into a handheld device for detecting drugs in blood, breath, urine and other biological samples. It could also be incorporated into other devices to identify chemicals in the air or from water, as well as other surfaces.

The sensing method is called surface-enhanced Raman spectroscopy (SERS).

While effective, the chip the Gan group previously created wasn't uniform in its design. Because the gold and silver were spaced unevenly, the light-scattering signatures of molecules could be difficult to identify, especially if the molecules landed on different locations of the chip.

Gan and a team of researchers -- featuring members of his lab at UB, and researchers from the University of Shanghai for Science and Technology in China, and King Abdullah University of Science and Technology in Saudi Arabia -- have been working to remedy this shortcoming.

The team used four molecules (BZT, 4-MBA, BPT, and TPT), each with different lengths, in the fabrication process to control the size of the gaps in between the gold and silver nanoparticles. The updated fabrication process is based upon two techniques, atomic layer deposition and self-assembled monolayers, as opposed to the more common and expensive method for SERS chips, electron-beam lithography.

The result is a SERS chip with unprecedented uniformity that is relatively inexpensive to produce. More importantly, it approaches quantum-limit sensing capabilities, says Gan, something that has been a challenge for conventional SERS chips.

"We think the chip will have many uses in addition to handheld drug detection devices," says the first author of this work, Nan Zhang, PhD, a postdoctoral researcher in Gan's lab. "For example, it could be used to assess air and water pollution or the safety of food. It could be useful in the security and defense sectors, and it has tremendous potential in health care."

Credit: 
University at Buffalo

Analytical measurements can predict organic solar cell stability

North Carolina State University-led researchers have developed an analytical measurement "framework" which could allow organic solar cell researchers and manufacturers to determine which materials will produce the most stable solar cells prior to manufacture.

Organic solar cells have increased in efficiency over the past decades, but researchers and manufacturers still struggle with determining which material combinations work best and why, as well as with achieving stable morphology and operation.

"There is still a lot of 'trial and error' guesswork involved in identifying promising materials for these solar cells," says Harald Ade, Goodnight Innovation Distinguished Professor of Physics at NC State and co-corresponding author of the research. "However, we found that if you understand two important parameters for the materials being used, you can predict how stable the active layer morphology will be, which in turn affects efficiency over time."

The parameters in question are the elastic modulus and glass transition - essentially how stiff the material is and at what temperature the material transitions from a rigid state to a rubbery or viscous fluid state.

"The most efficient solar cells are composed of a blend of materials that typically have poor miscibility," says Brendan O'Connor, associate professor of mechanical and aerospace engineering at NC State and co-corresponding author of the research. "Ideally, these blends need to be mixed during fabrication to an optimized composition, but over time they can separate or diffuse into domains that are too pure, which leads to device degradation.

"We wanted to understand what drives this instability in composition. We found that the molecular interactions that fundamentally drive diffusion behavior could be captured with the 'proxy-parameters' of elastic modulus and glass transition temperature."

The team, led by NC State postdoctoral researcher Masoud Ghasemi, used secondary ion mass spectrometry (SIMS) to measure the diffusion behavior of small molecules into a pure polymer layer. They also used differential scanning calorimetry (DSC) and a wrinkling metrology approach to measure the glass transition and elastic modulus of a number of materials that are commonly used in organic solar cells.

Overall, the team found that the most stable organic solar cells contained a small molecule with a high glass transition temperature and a polymer with a large elastic modulus; in other words, a highly rigid material.

"The more rigid materials also have the lowest inherent miscibility," Ghasemi says. "Interestingly, this means that the materials that do not like to mix have the lowest diffusion when forced to do so, resulting in the most stable solar cells."

"Our findings are fairly intuitive," Ade says, "but finding that there is a quantitative relationship between elastic modulus, glass transition and the molecular interactions inside these materials allows us to capture interaction forces at a local level, predicting stability in these systems without requiring trial and error."

Credit: 
North Carolina State University

Levels of stress hormone in saliva of newborn deer fawns may predict mortality

image: Only about half of fawns across the white-tailed deer's range, on average, live to see their first birthday, and predators such as coyotes, bears and bobcats have been blamed for that. But this research suggests that other factors such as disease and physiology may be more influential in very young fawn mortality than previously suspected.

Image: 
Tess Gingery/Penn State

The first-ever study of the levels of the stress hormone cortisol in the saliva of newborn white-tailed deer fawns yielded thought-provoking results that have Penn State researchers suggesting predation is not the only thing in the wild killing fawns.

"We think the hormone offers a way to evaluate factors in the environment that affect fawns, such as disease, but are difficult to evaluate when just looking at a carcass that has been picked over by predators," said researcher Duane Diefenbach, adjunct professor of wildlife ecology. "By then, it's impossible to be certain what truly caused the fawn's demise."

Diefenbach, leader of Penn State's Pennsylvania Cooperative Fish and Wildlife Research Unit in the College of Agricultural Sciences, has a unique perspective on fawn mortality and the impact of predators. In 2000 and again in 2015, he led studies of fawn mortality in Pennsylvania. Both of those research projects included an assessment of predator effects on fawns in the state.

In the cortisol study, led by Tess Gingery, a research technologist in the Department of Ecosystem Science and Management, salivary cortisol concentrations in 19 newborn fawns were evaluated in May and June 2017. Saliva was actually collected from 34 newborn fawns in two study areas, but some samples had to be discarded in the lab for insufficient quantity or poor quality.

To facilitate capture of those fawns, from January to April of that year, the Pennsylvania Game Commission captured pregnant adult female deer using rocket nets and single-gate traps and inserted vaginal implant transmitters. Those devices -- often referred to as VITs -- notified researchers when and where the does gave birth.

Several hours after getting a signal, a crew would scramble to find and process the fawn. The cortisol measurement was just part of the data collected before the fawn was released unharmed to its mother, which lingered nearby. It was important to collect the saliva quickly, however, Gingery explained.

"We know from research performed on adult deer at another university that it takes 20 minutes from the moment the stress hormone cortisol is released from the brain into the body that it shows up in their saliva," she said. "We needed to measure the fawn's 'baseline' cortisol levels, before the stress from capture and being handled kicked in. So, we sopped up saliva with swabs as soon as we could."

While still in the field, researchers separated the saliva from swabs in portable centrifuges and froze the samples for transport to Penn State for analysis.

There is a lot of evidence that suggests cortisol can be bad for an individual's survival, body condition, brain development and immune system, Gingery pointed out. Researchers hypothesized that high cortisol concentrations would be bad for newborn fawns' survival, and the data supported their prediction.

"Fawns that had higher cortisol concentrations in their saliva had lower survival, but we don't know if the high cortisol concentrations caused higher mortality or not," Gingery said. "It could be that elevated stress hormones reflect a fawn's experience with stressful situations rather than causing fawn mortality. For example, a fawn may be starving, which can elevate stress hormone levels, and the lack of food kills the fawn rather than the elevated stress hormone levels."

The research findings, recently published in Integrative Zoology, show that predators aren't the only things influencing fawns. It turns out, Diefenbach noted, that cortisol levels were higher in fawns tested in the central Pennsylvania study area -- where numbers of predators such as coyotes and bears, and predation rates, are known to be lower -- than in the northern Pennsylvania study area -- where predators are known to be more numerous and predation rates higher.

"This research could explain why Delaware -- which has zero predators -- found similar fawn-survival rates to Pennsylvania in a recent study," he said. "Factors other than predation, such as physiological responses like the cortisol we measured, may influence mortality more than predation. As such, the heavy focus on predation in fawn research may be misguided."

Credit: 
Penn State

Machine learning accelerates discovery of materials for use in industrial processes

image: Artificial intelligence enabled autonomous design of nanoporous materials.

Image: 
Courtesy of University of Toronto

TORONTO, ON - New research led by the University of Toronto (U of T) and Northwestern University employs machine learning to identify the best building blocks for assembling framework materials tailored to a targeted application.

The findings, published today in Nature Machine Intelligence, demonstrate that artificial intelligence (AI) approaches can propose novel materials for diverse applications, such as separating carbon dioxide from industrial combustion streams, and promise to accelerate the materials design cycle.

With the objective of improving the separation of chemicals in industrial processes, the team of researchers - including collaborators from Harvard University and the University of Ottawa - set out to identify the best reticular frameworks (e.g., metal-organic frameworks, covalent organic frameworks) for the task. Such frameworks, which can be thought of as tailored molecular "sponges," form via the self-assembly of molecular building blocks into different arrangements. They represent a new family of crystalline porous materials that has shown promise in addressing many technology challenges (e.g., clean energy, sensing and biomedicine).

"We built an automated materials discovery platform that generates the design of various molecular frameworks, significantly reducing the time required to identify the optimal materials for use in this particular process," says Zhenpeng Yao, a postdoctoral fellow in the Departments of Chemistry and Computer Science in the Faculty of Arts & Science at U of T, and lead author of the study. "In this demonstrated employment of the platform, we discovered frameworks that are strongly competitive against some of the best-performing materials used for CO2 separation known to date."

The perennial challenge in addressing CO2 separation and other problems like greenhouse gas reduction and vaccine development, however, is the unpredictable amount of time and the extensive trial and error required to find such new materials. The virtually limitless combinations of molecular building blocks available for constructing chemical compounds can consume enormous time and resources before a breakthrough is made.
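To get a sense of that scale, consider a back-of-the-envelope count. The numbers below are illustrative assumptions, not figures from the study: even a modest library of building blocks and topologies yields billions of candidate frameworks, far more than could ever be simulated exhaustively.

```python
from math import comb

# Hypothetical illustration only -- the library sizes are assumptions, not
# values from the study. Choosing 3 organic linkers out of a 500-linker
# library and pairing each choice with one of 20 metal nodes and 10
# topologies already gives billions of distinct candidate frameworks.
n_candidates = comb(500, 3) * 20 * 10
print(f"{n_candidates:,} candidate frameworks")  # ~4.1 billion
```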

"Designing reticular materials is particularly challenging, as they bring the hard aspects of modeling crystals together with those of modeling molecules in a single problem," says senior coauthor Alán Aspuru-Guzik, Canada 150 Research Chair in Theoretical Chemistry in the Departments of Chemistry and Computer Science at U of T and Canada CIFAR AI Chair at the Vector Institute. "This approach to reticular chemistry exemplifies our emerging focus at U of T of accelerating materials development by means of artificial intelligence. By using an AI model that can 'dream' or suggest novel materials, we can go beyond the traditional library-based screening approach."

The researchers focused on the development of metal-organic frameworks (MOFs), now considered among the most promising adsorbent materials for removing CO2 from flue gas and other combustion streams.

"We began with the construction of a large number of MOF structures on the computer, simulated their performance using molecular-level modeling, and built a training pool applicable to the chosen application of CO2 separation," said study co-author Randall Snurr, the John G. Searle Professor and chair of the Department of Chemical & Biological Engineering in the McCormick School of Engineering at Northwestern University. "In the past, we would have screened through the pool of candidates computationally and reported the top candidates. What's new here is that the automated materials discovery platform developed in this collaborative effort is more efficient than such a "brute force" screening of every material in a database. Perhaps more importantly, the approach uses machine learning algorithms to learn from the data as it explores the space of materials and actually suggests new materials that were not originally imagined."

The researchers say the model shows strong prediction and optimization capability in designing novel reticular frameworks, particularly when combined with already-known candidates for specific functions, and that the platform is fully customizable for addressing many contemporary technology challenges.

Credit: 
University of Toronto

GridTape: An automated electron microscopy platform

How are networks of neurons connected to make functional circuits? This has been a long-standing question in neuroscience. To answer it, researchers from Boston Children's Hospital and Harvard Medical School developed a new way to study these circuits and, in the process, learn more about the connections between them.

"Neural networks are extensive, but the connections between them are really small," says Wei-Chung Allen Lee, PhD, of the F.M. Kirby Neurobiology Center at Boston Children's and Harvard Medical School. "So, we have had to develop techniques to see them in extremely high-resolution over really large areas and volumes." To do so, his team developed an improved process for large-scale electron microscopy (EM) -- a technique first developed in the 1950s using accelerated electrons beams to visualize extremely small structures.

"But the problem with electron microscopy is that because it provides such high image resolution, it has been difficult to study whole neural circuits," says Lee, whose lab is interested in learning how neural circuits underlie function and behavior. "To improve the technique, we developed an automated system to image at high-resolution, but at the scale to encompass neuronal circuits." A paper describing this work was published in Cell.

GridTape: Automated, Faster, Cheaper Electron Microscopy Technique

Traditional EM requires collecting thousands of tissue sections by hand onto grids. The tissue is sliced into 40-nanometer-thick sections, about a thousand times thinner than a human hair. GridTape automates that collection, assigning a barcode to each section and placing the sections on a tape that can be fed through an electron microscope like film through a movie projector. An advantage of the technique is that every neuron is visible in each tissue section.

"As the electrons pass through each section, we can image each neurons in fine detail," Lee explains. "And because all of the sections are labeled with a barcode we know exactly where each of these sections comes from so we can reconstruct the circuits."

"This new technique allows us to do electron microscopy faster and in an automated way, with high quality, yet at a reasonable price," says Lee.

In their paper, the team provides GridTape instrumentation designs and software to make large-scale EM accessible and affordable to the broader scientific community.

Fruit fly spinal cord: a case study

The team used their GridTape method to study the ventral nerve cord of the Drosophila melanogaster fruit fly, which is similar to the spinal cord. It contains all the circuits the fly uses to move its limbs. Their goal: create a comprehensive map of the neuronal circuits that control motor function.

"By applying this method to the entire nerve cord, we were able to reconstruct all of its motor neurons, as well as a large population of sensory neurons," says Lee.

In the process, they discovered a specific kind of sensory neuron in the fly thought to detect changes in load, like body weight. "These neurons are very large, relatively rare in number, and they make direct connections onto motor neurons of the same type on both sides of the body," says Lee. "We believe this may be a circuit that helps stabilize body position."

From this work, the team created a map of more than 1,000 motor and sensory neuron reconstructions available on an open registry. "It allows anyone in the world to access this data set and look at any neuron that they're interested in and ask who they're connected to," says Lee.
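The registry's actual interface is not described here, but conceptually such a resource supports simple connectivity queries. The toy example below, with made-up neuron names and synapse counts, shows the kind of "who is this neuron connected to?" question the data set is meant to answer.

```python
"""Toy connectivity query over a hypothetical table of reconstructed synapses."""
from collections import defaultdict

# Hypothetical synapse table: (presynaptic neuron, postsynaptic neuron, count).
synapses = [
    ("leg_sensory_01", "leg_motor_07", 12),
    ("leg_sensory_01", "leg_motor_08", 9),
    ("leg_sensory_02", "leg_motor_07", 3),
]

# Index connections by presynaptic neuron for fast lookup.
downstream = defaultdict(list)
for pre, post, count in synapses:
    downstream[pre].append((post, count))

def partners_of(neuron: str):
    """Return postsynaptic partners of a neuron, strongest connections first."""
    return sorted(downstream.get(neuron, []), key=lambda p: -p[1])

print(partners_of("leg_sensory_01"))
# [('leg_motor_07', 12), ('leg_motor_08', 9)]
```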

Future applications

Now with the ability to map ever larger neural circuits, Lee believes this technique could be useful for studying neuronal circuits in larger brains and testing predictions about neural function and behavior. His team is now applying the technique in mice, and other researchers in the UK and Japan are using it across multiple animal systems.

In addition, the technology has broader potential uses where large numbers of samples need to be imaged at a very high resolution.

"So, in principle, various forms of electron microscopy could be advanced by using this technique if people need to generate lots and lots of data," says Lee, including DNA sequencing using EM, or cryogenic EM to solve protein structures.

Credit: 
Boston Children's Hospital