Tech

Comparisons of organic and conventional agriculture need to be better, say researchers

image: The environmental effects of agriculture and food are hotly debated, but the most widely used method of analysis often overlooks vital factors such as biodiversity, soil quality and pesticide impacts.

Image: 
Yen Strandqvist/Chalmers

The environmental effects of agriculture and food are hotly debated. But the most widely used method of analysis often tends to overlook vital factors, such as biodiversity, soil quality, pesticide impacts and societal shifts, and these oversights can lead to wrong conclusions on the merits of intensive and organic agriculture. This is according to a trio of researchers writing in the journal Nature Sustainability.

The most common method for assessing the environmental impacts of agriculture and food is Life Cycle Assessment (LCA). Studies using this method sometimes claim that organic agriculture is actually worse for the climate, because its lower yields mean more land is needed to produce the same amount of food. A recent study in Nature Communications that made this claim, for example, was widely reported by publications including the BBC.

But according to three researchers from France, Denmark and Sweden, presenting an analysis of many LCA studies in the journal Nature Sustainability, this implementation of LCA is too simplistic, and misses the benefits of organic farming.

"We are worried that LCA gives too narrow a picture, and we risk making bad decisions politically and socially. When comparing organic and intensive farming, there are wider effects that the current approach does not adequately consider," says Hayo van der Werf of the French National Institute of Agricultural Research.

Biodiversity, for example, is of vital importance to the health and resilience of ecosystems. But globally, it is declining. Intensive agriculture has been shown to be one of the main drivers of negative trends such as insect and bird decline. Agriculture occupies more than one-third of global land area, so any links between biodiversity losses and agriculture are hugely important.

"But our analysis shows that current LCA studies rarely factor in biodiversity, and consequently, they usually miss that wider benefit of organic agriculture," says Marie Trydeman Knudsen from Aarhus University, Denmark. "Earlier studies have already shown that organic fields support biodiversity levels approximately 30% higher than conventional fields."

Pesticide use is another factor to consider. Between 1990 and 2015, worldwide pesticide use increased by 73%. Pesticide residues in soil, water and food can be harmful to human health and to terrestrial and aquatic ecosystems, and can cause biodiversity losses. Organic farming, meanwhile, precludes the use of synthetic pesticides. But few LCA studies account for these effects.

Land degradation and lower soil quality resulting from unsustainable land management are further issues - again, rarely measured in LCA studies. The benefits of organic farming practices such as varied crop rotation and the use of organic fertilisers are likewise often overlooked.

Crucially, LCA generally assesses environmental impacts per kilogram of product. This favours intensive systems that may have lower impacts per kilogram, while having higher impacts per hectare of land.
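
To see why the choice of functional unit matters, consider a toy calculation. The sketch below (in Python, with invented numbers that are not taken from any study) compares two hypothetical farms whose ranking flips depending on whether impacts are expressed per kilogram of product or per hectare of land.

```python
# Toy comparison of two hypothetical farms; all figures are invented for illustration.
farms = {
    "intensive": {"yield_kg_per_ha": 8000, "impact_per_ha": 4000},  # kg CO2e per hectare
    "organic":   {"yield_kg_per_ha": 5000, "impact_per_ha": 3000},
}

for name, f in farms.items():
    per_kg = f["impact_per_ha"] / f["yield_kg_per_ha"]
    print(f"{name}: {per_kg:.2f} kg CO2e per kg of product, "
          f"{f['impact_per_ha']} kg CO2e per hectare")

# intensive: 0.50 per kg but 4000 per ha; organic: 0.60 per kg but 3000 per ha.
# Which system looks "better" depends entirely on the denominator the LCA chooses.
```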

"LCA simply looks at the overall yields. Of course, from that perspective, it's true that intensive farming methods are indeed more effective. But this is not the whole story of the larger agroecosystem. A diverse landscape with smaller fields, hedgerows and a variety of crops gives other benefits - greater biodiversity, for example," says Christel Cederberg of Chalmers University of Technology, Sweden.

LCA's product-focused approach also fails to capture the subtleties of smaller, diverse systems which are more reliant on ecological processes, and adapted to local soil, climate and ecosystem characteristics. LCA needs a more fine-grained approach.

"We often look at the effects at the global food chain level, but we need to be much better at considering the environmental effects at the local level," says Marie Trydeman Knudsen.
The researchers note in their study that efforts are being made in this area, but much more progress is needed.

A further key weakness is the inclusion of hypothetical "indirect effects", such as the assumption that the lower yields of organic agriculture lead to increased carbon dioxide emissions because more land is needed. For example, another prominent study - from a researcher also based at Chalmers University of Technology - suggested that organic agriculture was worse for the climate, because the requirement for more land leads indirectly to less forest area. But accounting for these indirect effects is problematic.

"For example, consider the growing demand for organic meat. Traditional LCA studies might simply assume that overall consumption of meat will remain the same, and therefore more land will be required. But consumers who are motivated to buy organic meat for environmental and ethical reasons will probably also buy fewer animal-based products in the first place. But hardly any studies into this sort of consumer behaviour exist, so it is very difficult to account for these types of social shifts now," says Hayo van der Werf.

"Current LCA methodology and practice is simply not good enough to assess agroecological systems such as organic agriculture. It therefore needs to be improved and integrated with other environmental assessment tools to get a more balanced picture" says Christel Cederberg.

Credit: 
Chalmers University of Technology

Global warming influence on extreme weather events has been frequently underestimated

A new Stanford study reveals that a common scientific approach of predicting the likelihood of future extreme weather events by analyzing how frequently they occurred in the past can lead to significant underestimates - with potentially serious consequences for people's lives.

Stanford climate scientist Noah Diffenbaugh found that predictions that relied only on historical observations underestimated by about half the actual number of extremely hot days in Europe and East Asia, and the number of extremely wet days in the U.S., Europe and East Asia.

The paper, published March 18 in Science Advances, illustrates how even small increases in global warming can cause large upticks in the probability of extreme weather events, particularly heat waves and heavy rainfall. The new results analyzing climate change connections to unprecedented weather events could help to make global risk management more effective.

"We are seeing year after year how the rising incidence of extreme events is causing significant impacts on people and ecosystems," Diffenbaugh said. "One of the main challenges in becoming more resilient to these extremes is accurately predicting how the global warming that's already happened has changed the odds of events that fall outside of our historical experience."

A changing world

For decades, engineers, land-use planners and risk managers have used historical weather observations from thermometers, rain gauges and satellites to calculate the probability of extreme events. Those calculations - meant to inform projects ranging from housing developments to highways - have traditionally relied on the assumption that the risk of extremes could be assessed using only historical observations. However, a warming world has made many extreme weather events more frequent, intense and widespread, a trend that is likely to intensify, according to the U.S. government.

Scientists trying to isolate the influence of human-caused climate change on the probability and/or severity of individual weather events have faced two major obstacles. There are relatively few such events in the historical record, making verification difficult, and global warming is changing the atmosphere and ocean in ways that may have already affected the odds of extreme weather conditions.

Predicted versus observed extreme weather

In the new study, Diffenbaugh, the Kara J. Foundation professor at Stanford's School of Earth, Energy & Environmental Sciences, revisited previous extreme event papers he and his colleagues had published in recent years. Diffenbaugh wondered if he could use the frequency of record-setting weather events from 2006 to 2017 to evaluate the predictions his group had made using data from 1961 to 2005. He found in some cases the actual increase in extreme events was much larger than what had been predicted.
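
The comparison Diffenbaugh describes can be illustrated with a minimal sketch. The Python snippet below uses synthetic temperature data (not the study's observations or its actual statistical method) to contrast how many record-exceeding years a stationary climate would be expected to produce with how many occur once a warming trend is added.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic annual maximum temperatures (deg C) for one hypothetical location.
baseline = 30.0 + rng.normal(0, 1.5, size=45)   # stand-in for a 1961-2005 baseline
recent   = 31.0 + rng.normal(0, 1.5, size=12)   # stand-in for 2006-2017, with warming added

record = baseline.max()                          # the historical record to beat

# Under a stationary climate, any given recent year exceeds all 45 baseline
# years with probability 1/(45 + 1), so the expected count is simply:
expected = len(recent) / (len(baseline) + 1)
observed = int((recent > record).sum())

print(f"record-exceeding years expected if the climate were unchanged: {expected:.2f}")
print(f"record-exceeding years in the synthetic warming case: {observed}")
```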

"When I first looked at the results, I had this sinking feeling that our method for analyzing these extreme events could be all wrong," said Diffenbaugh, who is also the Kimmelman Family senior fellow in the Stanford Woods Institute for the Environment. "As it turned out, the method actually worked very well for the period that we had originally analyzed - it's just that global warming has had a really strong effect over the last decade."

Interestingly, Diffenbaugh also found that climate models were able to more accurately predict the future occurrence of record-setting events. While acknowledging that climate models still contain important uncertainties, Diffenbaugh says the study identifies the potential for new techniques that incorporate both historical observations and climate models to create more accurate, robust risk management tools.

"The good news," Diffenbaugh said, "is that these new results identify some real potential to help policymakers, engineers and others who manage risk to integrate the effects of global warming into their decisions."

Credit: 
Stanford University

Stanford engineers create shape-changing, free-roaming soft robot

image: Overhead view of the isoperimetric robot grasping and handling a basketball.

Image: 
Farrin Abbott/Stanford News Service

Advances in soft robotics could someday allow robots to work alongside humans, helping them lift heavy objects or carrying them out of danger. As a step toward that future, Stanford University researchers have developed a new kind of soft robot that, by borrowing features from traditional robotics, is safe while still retaining the ability to move and change shape.

"A significant limitation of most soft robots is that they have to be attached to a bulky air compressor or plugged into a wall, which prevents them from moving," said Nathan Usevitch, a graduate student in mechanical engineering at Stanford. "So, we wondered: What if we kept the same amount of air within the robot all the time?"

From that starting point, the researchers ended up with a human-scale soft robot that can change its shape, allowing it to grab and handle objects and roll in controllable directions. Their invention is described in a paper published March 18 in Science Robotics.

"The casual description of this robot that I give to people is Baymax from the movie Big Hero 6 mixed with Transformers. In other words, a soft, human-safe robot mixed with robots that can dramatically change their shape," said Usevitch.

A combination of many robots

The simplest version of this squishy robot is an inflated tube that runs through three small machines that pinch it into a triangle shape. One machine holds the two ends of the tube together; the other two drive along the tube, changing the overall shape of the robot by moving its corners. The researchers call it an "isoperimetric robot" because, although the shape changes dramatically, the total length of the edges - and the amount of air inside - remains the same.
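
The isoperimetric constraint is easy to state geometrically. The short sketch below (hypothetical numbers, not the robot's control code) redistributes a fixed total tube length among the three edges and places the resulting triangle in the plane; however the corners move, the perimeter - and hence the amount of air enclosed - stays the same.

```python
import math

PERIMETER = 3.0  # total tube length in metres (hypothetical)

def triangle_vertices(a, b, c):
    """Place a triangle with side lengths a, b, c in the plane.

    Vertex A sits at the origin and B on the x-axis; C follows from the
    law of cosines. Assumes a, b, c satisfy the triangle inequality.
    """
    cos_A = (b**2 + c**2 - a**2) / (2 * b * c)   # angle at A, between sides b and c
    return (0.0, 0.0), (c, 0.0), (b * cos_A, b * math.sqrt(1 - cos_A**2))

# Two shapes of the same robot: the rollers shift tube length between edges,
# but the total edge length never changes.
for sides in [(1.0, 1.0, 1.0), (0.7, 1.1, 1.2)]:
    assert abs(sum(sides) - PERIMETER) < 1e-9
    verts = triangle_vertices(*sides)
    print(sides, "->", [(round(x, 2), round(y, 2)) for x, y in verts])
```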

The isoperimetric robot is a descendant of three types of robots: soft robots, truss robots and collective robots. Soft robots are lightweight and compliant, truss robots have geometric forms that can change shape, and collective robots are small robots that work together, making them particularly robust to single-part failures.

"We're basically manipulating a soft structure with traditional motors," said Sean Follmer, assistant professor of mechanical engineering and co-senior author of the paper. "It makes for a really interesting class of robots that combines many of the benefits of soft robots with all of the knowledge we have about more classic robots."

To make a more complex version of the robot, the researchers simply attach several triangles together. By coordinating the movements of the different motors, they can cause the robot to perform different behaviors, such as picking up a ball by engulfing it on three sides or altering the robot's center of mass to make it roll.

"A key understanding we developed was that to create motion with a large, soft pneumatic robot, you don't actually need to pump air in and out," said Elliot Hawkes, assistant professor of mechanical engineering at the University of California, Santa Barbara and co-senior author of the paper. "You can use the air you already have and just move it around with these simple motors; this method is more efficient and lets our robot move much more quickly."

From outer space to your living room

The field of soft robotics is relatively young, which means people are still figuring out the best applications for these new creations. Their safe-but-sturdy softness may make them useful in homes and workplaces, where traditional robots could cause injury. Squishy robots are also appealing as tools for disaster response.

Other exciting possibilities for the isoperimetric robot could lie off-planet. "This robot could be really useful for space exploration - especially because it can be transported in a small package and then operates untethered after it inflates," said Zachary Hammond, a graduate student in mechanical engineering at Stanford and co-lead author of the paper, with Usevitch. "On another planet, it could use its shape-changing ability to traverse complicated environments, squeezing through tight spaces and spreading over obstacles."

For now, the researchers are experimenting with different shapes for their supple robot and considering plopping it in water to see if it can swim. They are also exploring even more new soft robot types, each with their own features and benefits.

"This research highlights the power of thinking about how to design and build robots in new ways," said Allison Okamura, professor of mechanical engineering and co-author of the paper. "The creativity of robot design is expanding with this type of system and that's something we'd really like to encourage in the robotics field."

Credit: 
Stanford University

Increasingly mobile sea ice risks polluting Arctic neighbors

image: Sea ice at the North Pole in 2015.

Image: 
Christopher Michel

The movement of sea ice between Arctic countries is expected to significantly increase this century, raising the risk of more widely transporting pollutants like microplastics and oil, according to new research from CU Boulder.

The study in the American Geophysical Union journal Earth's Future predicts that by mid-century, the average time it takes for sea ice to travel from one region to another will decrease by more than half, and the amount of sea ice exchanged between Arctic countries such as Russia, Norway, Canada and the United States will more than triple.

Increased interest in off-shore Arctic development, as well as shipping through the Central Arctic Ocean, may increase the amount of pollutants present in Arctic waters. And contaminants in frozen ice can travel much farther than those in open water moved by ocean currents.

"This means there is an increased potential for sea ice to quickly transport all kinds of materials with it, from algae to oil," said Patricia DeRepentigny, doctoral candidate in the Department for Atmospheric and Oceanic Sciences. "That's important to consider when putting together international laws to regulate what happens in the Arctic."

Historically, floating masses of Arctic sea ice could survive for up to 10 years: building up layers, lasting through each summer and not moving very far during any given year. As the climate warms, however, that pattern has been changing.

While the sea ice cover is thinning overall - and melting entirely across vast regions in the summer - the area of new ice formed during winter is actually increasing, particularly along the Russian coastline and soon in the Central Arctic Ocean. This thinner ice can move faster in the increasingly open waters of the Arctic, delivering the particles and pollutants it carries to the waters of neighboring states.

"Ice moves faster, but as the climate warms, it doesn't have as much time as before to travel before it melts," said DeRepentigny. "Because of that, we really see that it's the regions that are directly downstream of each country's waters that are going to be most affected."

Different emissions scenarios

In a previous study, DeRepentigny and her colleagues examined the movement of Arctic sea ice from the instrumental record starting in 1979, when the first continuous satellite observations began. That study was the first to document an increase in the amount of sea ice being transported from one region to another over the last four decades.

"That was really eye opening," said DeRepentigny. "The follow-up question then was: How is this going to play out in the future? It opened a really big box of new questions."

So the researchers used a global climate model, together with the Sea Ice Tracking Utility (SITU) - which DeRepentigny helped develop - to track sea ice from where it forms to where it ultimately melts during the 21st century.
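
Conceptually, this kind of tracking follows individual parcels of ice from formation to melt. The snippet below is a minimal Lagrangian sketch of that idea with invented drift and melt functions; it is not the SITU code itself.

```python
def track_parcel(start_xy, months, velocity, melted, dt=1.0):
    """Advect one ice parcel until it melts or the run ends.

    velocity(x, y, t) -> (u, v) drift in km/month; melted(x, y, t) -> bool.
    Both are stand-ins for climate-model output.
    """
    x, y = start_xy
    for t in range(months):
        if melted(x, y, t):
            return (x, y), t          # melt location and month
        u, v = velocity(x, y, t)
        x, y = x + u * dt, y + v * dt
    return (x, y), months

# Invented example: uniform drift toward the east-northeast, melt after 6 months.
velocity = lambda x, y, t: (15.0, 5.0)
melted = lambda x, y, t: t >= 6
end_xy, month = track_parcel((0.0, 0.0), 24, velocity, melted)
print(f"parcel formed at (0, 0) melted at {end_xy} after {month} months")
# Binning many such (formation, melt) pairs by exclusive economic zone gives
# the region-to-region ice exchange the study quantifies.
```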

The researchers considered two different emissions scenarios: the more extreme "business as usual" scenario, which predicts warming of 4 to 5 degrees Celsius by 2100, and a warming scenario limited to 2 degrees Celsius, inspired by the Paris Agreement. They then modeled how the sea ice will behave in both these scenarios at the middle and the end of the century.

In three of these four situations - including both mid-century predictions - the movement of sea ice between Arctic countries increased.

But in the high emissions scenario at the end of the century, they found countries could end up dealing more with their own ice and its contaminants than with ice from their neighbors. This is because with 4 degrees or more of warming in 2100, the majority of sea ice that freezes during winter will melt each spring in the same region where it was formed.

Russia and the Central Arctic

Russia's exclusive economic zone and the Central Arctic Ocean are two places the researchers expect more ice to form, becoming major "exporters" of ice to other regions in the Arctic.

An exclusive economic zone (EEZ) is an area extending 200 nautical miles from the coastline, over which a state has special rights regarding fishing, shipping, and industrial activities like offshore oil drilling. Five countries have exclusive economic zones in the Arctic Ocean: Canada, the United States, Russia, Norway and Denmark (Greenland).

DeRepentigny and her colleagues found that the amount of ice originating from Russia that then melts in another exclusive economic zone doubles by mid-century.

The Central Arctic, in the middle of the Arctic Ocean, is a place where no country has exclusive economic rights, however. As the Arctic Ocean becomes more ice-free in summer, this region will become an attractive shipping route - especially because ships don't need permission from another country to travel through it.

"That has several implications," said DeRepentigny. "Who's responsible for contaminants and materials that melt in the Central Arctic or get exported out of the Central Arctic into different countries? It's no longer just a national question."

Credit: 
University of Colorado at Boulder

Model simulator helps researchers map complex physics phenomena

To understand the behavior of quantum particles, imagine a pinball game - but rather than one metal ball, there are billions or more, all ricocheting off each other and their surroundings.

Physicists have long tried to study this interactive system of strongly correlated particles, which could help illuminate elusive physics phenomena like high-temperature superconductivity and magnetism.

One classic method is to create a simplified model that can capture the essence of these particle interactions. In 1963, physicists Martin Gutzwiller, Junjiro Kanamori and John Hubbard - working separately - proposed what came to be called the Hubbard model, which describes the essential physics of many interacting quantum particles. An exact solution to the model, however, exists only in one dimension. For decades, physicists have tried to realize the Hubbard model in two or three dimensions by creating quantum simulators that can mimic it.
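
For reference, the standard textbook form of the Hubbard Hamiltonian (written here in general terms, not in the specific parameters of the moiré system described below) balances a hopping energy t against an on-site repulsion U:

```latex
H = -t \sum_{\langle i,j \rangle,\sigma}
        \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right)
    + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
```

Here the first sum describes electrons of either spin hopping between neighboring lattice sites, and the second term adds an energy cost U whenever two electrons occupy the same site. The competition between t and U is what the moiré simulator is designed to reproduce.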

A Cornell-led collaboration has successfully created such a simulator using ultrathin monolayers that overlap to make a moiré pattern. The team then used this solid-state platform to map a longstanding conundrum in physics: the phase diagram of the triangular lattice Hubbard model.

Their paper, "Simulation of Hubbard Model Physics in WSe2/WS2 Moiré Superlattices," was published March 18 in Nature. The lead author is postdoctoral associate Yanhao Tang.

The project is led by Kin Fai Mak, associate professor of physics in the College of Arts and Sciences and the paper's co-senior author along with Jie Shan, professor of applied and engineering physics in the College of Engineering. Both researchers are members of the Kavli Institute at Cornell for Nanoscale Science, and they came to Cornell through the provost's Nanoscale Science and Molecular Engineering (NEXT Nano) initiative. Their shared lab specializes in the physics of atomically thin quantum materials.

Their lab partnered with co-author Allan MacDonald, a physics professor at the University of Texas at Austin, who in 2018 theorized a Hubbard model simulator would be possible by stacking two atomic monolayers of semiconductors, the sort of materials Mak and Shan have been studying for a decade.

"What we have done is take two different monolayers of this semiconductor, tungsten disulfide (WS2) and tungsten diselenide (WSe2), which have a lattice constant that is slightly different from each other. And when you put one on top of the other, you create a pattern called a moiré superlattice." Mak said.

The moiré superlattice looks like a series of interlocking hexagons, and in each juncture - or site - in the crosshatch pattern, the researchers place an electron. These electrons are usually trapped in place by the energy barrier between the sites. But the electrons have enough kinetic energy that, occasionally, they can hop over the barrier and interact with neighboring electrons.

"If you don't have this interaction, everything is actually well understood and sort of boring," said Mak. "But when the electrons hop around and interact, that's very interesting. That's how you can get magnetism and superconductivity."

Because electrons have a negative charge and repel each other, these ensuing interactions become increasingly complicated when there are so many of them in play - hence the need for a simplified system to understand their behavior.

"We can control the occupation of the electron at each site very precisely," Mak said. "We then measure the system and map out the phase diagram. What kind of magnetic phase is it? How do the magnetic phases depend on the electron density?"

So far, the researchers have used the simulator to make two significant discoveries: observing a Mott insulating state, and mapping the system's magnetic phase diagram. Mott insulators are materials that should behave like metals and conduct electricity, but instead function like insulators - behavior that physicists predicted the Hubbard model would demonstrate. The magnetic ground state of Mott insulators is also an important phenomenon the researchers are continuing to study.

While there are other quantum simulators, such as one that uses cold atom systems and an artificial lattice created by laser beams, Mak says his team's simulator has the distinct advantage of being a "true many-particle simulator" that can easily control - or tune - particle density. The system can also reach much lower effective temperatures and assess the thermodynamic ground states of the model. At the same time, the new simulator is not as successful at tuning the interactions between electrons when they share the same site.

"We want to invent new techniques so that we can also control the on-site repulsion of two electrons," Mak said. "If we can control that, we will have a highly tunable Hubbard model in our lab. We may then obtain the complete phase diagram of the Hubbard model."

Credit: 
Cornell University

New telescope design could capture distant celestial objects with unprecedented detail

image: A new multi-field hypertelescope design could image multiple stars at once with high resolution. Hypertelescopes use large arrays of mirrors with space between them. The multi-field design could be incorporated into the hypertelescope prototype being tested in the Alps (pictured).

Image: 
Antoine Labeyrie, Collège de France and Observatoire de la Cote d'Azur

WASHINGTON -- Researchers have designed a new camera that could allow hypertelescopes to image multiple stars at once. The enhanced telescope design holds the potential to obtain extremely high-resolution images of objects outside our solar system, such as planets, pulsars, globular clusters and distant galaxies.

"A multi-field hypertelescope could, in principle, capture a highly detailed image of a star, possibly also showing its planets and even the details of the planets' surfaces," said Antoine Labeyrie, emeritus professor at the Collège de France and Observatoire de la Cote d'Azur, who pioneered the hypertelescope design. "It could allow planets outside of our solar system to
be seen with enough detail that spectroscopy could be used to search for evidence of photosynthetic life."

In The Optical Society's (OSA) journal Optics Letters, Labeyrie and a multi-institutional group of researchers report optical modeling results that verify that their multi-field design can substantially extend the narrow field-of-view coverage of hypertelescopes developed to date.

Making the mirror larger

Large optical telescopes use a concave mirror to focus light from celestial sources. Although larger mirrors can produce more detailed pictures because of their reduced diffractive spreading of the light beam, there is a limit to how large these mirrors can be made. Hypertelescopes are designed to overcome this size limitation by using large arrays of mirrors, which can be spaced widely apart.
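
The underlying scaling is the familiar diffraction limit: angular resolution improves in proportion to aperture size. The Python sketch below applies the standard Rayleigh criterion (theta = 1.22 * lambda / D) to a few illustrative aperture sizes; the numbers are generic examples, not specifications of any hypertelescope design.

```python
import math

def rayleigh_limit_arcsec(wavelength_m, aperture_m):
    """Angular resolution from the Rayleigh criterion, theta = 1.22 * lambda / D, in arcseconds."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600.0

wavelength = 550e-9  # green visible light, in metres
for aperture in (2.4, 40.0, 1000.0):  # single large mirrors vs. a kilometre-scale sparse array
    print(f"D = {aperture:7.1f} m -> {rayleigh_limit_arcsec(wavelength, aperture):.2e} arcsec")
```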

Researchers have previously experimented with relatively small prototype hypertelescope designs, and a full-size version is currently under construction in the French Alps. In the new work, researchers used computer models to create a design that would give hypertelescopes a much larger field of view. This design could be implemented on Earth, in a crater of the moon or even on an extremely large scale in space.

Building a hypertelescope in space, for example, would require a large flotilla of small mirrors spaced out to form a very large concave mirror. The large mirror focuses light from a star or other celestial object onto a separate spaceship carrying a camera and other necessary optical components.

"The multi-field design is a rather modest addition to the optical system of a hypertelescope, but should greatly enhance its capabilities," said Labeyrie. "A final version deployed in space could have a diameter tens of times larger than the Earth and could be used to reveal details of extremely small objects such as the Crab pulsar, a neutron star believed to be only 20 kilometers in size."

Expanding the view

Hypertelescopes use what is known as pupil densification to concentrate light collection to form high-resolution images. This process, however, greatly limits the field of view for hypertelescopes, preventing the formation of images of diffuse or large objects such as a globular star cluster, exoplanetary system or galaxy.

The researchers developed a micro-optical system that can be used with the focal camera of the hypertelescope to simultaneously generate separate images of each field of interest. For star clusters, this makes it possible to obtain separate images of each of thousands of stars simultaneously.

The proposed multi-field design can be thought of as an instrument made of multiple independent hypertelescopes, each with a differently tilted optical axis that gives it a unique imaging field. These independent telescopes focus adjacent images onto a single camera sensor.

The researchers used optical simulation software to model different implementations of a multi-field hypertelescope. These all provided accurate results that confirmed the feasibility of multi-field observations.

Incorporating the multi-field addition into hypertelescope prototypes would require developing new components, including adaptive optics components to correct residual optical imperfections in the off-axis design. The researchers are also continuing to develop alignment techniques and control software so that the new camera can be used with the prototype in the Alps. They have also developed a similar design for a moon-based version.

Credit: 
Optica

Ball-and-chain inactivation of ion channels visualized by cryo-electron microscopy

image: Calcium-gated potassium channel MthK in closed, open and inactivated states, from left to right. Channel structure (blue), with one subunit removed for clarity; calcium ions (yellow); potassium ions (purple); membrane (grey); N-terminal inactivation peptide (red). The location of the peptide in the inactivated channel was identified in the structural analysis, whereas the location shown in closed and open channels is hypothetical.

Image: 
Image courtesy of Dr. Crina Nimigean.

Ion channels, which allow potassium and sodium ions to flow in and out of cells, are crucial in neuronal 'firing' in the central nervous system and for brain and heart function. These channels use a "ball-and-chain" mechanism to help regulate their ion flow, according to a new study led by Weill Cornell Medicine scientists.

The study, published March 18, 2020, in Nature, confirms a long-standing hypothesis about ion channels, and represents a key advance in the understanding of the basic biological processes at work in most cells.

The direct imaging of the ball-and-chain mechanism, using electron-microscopy techniques, can also provide a new angle to design drugs that target it to improve ion channel function. Ion channel abnormalities have been linked to a long list of disorders including epilepsies, heart arrhythmias, schizophrenia and diabetes.

"Scientists have been trying to get an atomic-scale picture of this mechanism since the 1970s, and now that we have it at last, it can become an important drug target," said senior author Dr. Crina Nimigean, an associate professor of physiology and biophysics in anesthesiology at Weill Cornell Medicine.

Many types of ion channels, including those necessary for neuronal signaling and the beating of the heart, will physically open, allowing a flow of ions in or out of the cell, when a certain stimulus is applied. However, in order to switch ion flow on and off with high enough frequencies to meet the demands of neurons, heart muscle cells and other cell types, some ion channels need an additional, on-the-fly mechanism to stop ion flow--even when the stimulus is still present and the channel structure is in principle in the "open" state.

Researchers in the field have suspected since 1973, based on biochemical experiments, that this on-the-fly mechanism resembles a bathtub plug on a chain, or "ball-and-chain" structure. But confirming this directly with atomic-scale imaging methods has been a formidable challenge. This is due chiefly to the complexity of these channels in mammals and the difficulty of reconstructing them, for imaging purposes, in a cell-membrane-like environment where they are normally connected to other cell membrane components.

"Nobody knew exactly how this process actually looks and works--does the "ball" block the opening of the channel, or actually go in and plug the pore, or alternatively, alter the conformation of the channel indirectly?" said Dr. Nimigean.

She and her colleagues were able to overcome this challenge by imaging a potassium ion channel from Methanobacterium thermoautotrophicum, a bacteria-like species found at deep-sea geothermal vents. Its "MthK" channel is known to be structurally similar to the mammalian "BK" potassium channel that is crucial for the proper function of neurons and many other cell types--yet MthK has key simplifications that make it easier to image.

With low-temperature electron microscopy (cryo-EM), which bounces electrons instead of light off objects to make atomic-resolution images of them, the scientists obtained pictures of the MthK channel when it was switched open by calcium and switched closed. The pictures revealed that even when the MthK channel is in the calcium-activated, "open" state, the pathway through which ions flow was plugged by a flexible element that sticks into the pore of the channel structure.

The scientists confirmed the function of this plug mechanism by showing that when the 'ball-and-chain' was deleted genetically, the flow of potassium ions through the calcium-activated MthK channel was no longer regulated.

Dr. Nimigean and her colleagues now are planning to explore how this mechanism might be targeted therapeutically.

"Different classes of potassium channels in human cells are very similar in their channel structures. So a drug that blocks a particular channel will tend to affect other potassium channels and thus could have many unwanted side effects," she said. "However, understanding and then targeting this ball-and-chain structure that we were able to image could allow us to therapeutically modulate potassium channels with much more specificity."

Credit: 
Weill Cornell Medicine

Novel system allows untethered high-quality multi-player VR

image: Purdue University researchers have created a new approach to VR that will allow multiple players to interact with the same VR app on smartphones and provide new opportunities for education, health care and entertainment.

Image: 
Purdue University/Y. Charlie Hu

WEST LAFAYETTE, Ind. - Virtual reality headsets and VR application programs are not gaining traction with users because of a chicken-and-egg dilemma: a lack of VR content and slow market penetration of custom-made VR units.

Now, Purdue University researchers have created a new approach to VR that allows multiple players to interact with the same VR game on smartphones and provides new opportunities for enterprise, education, health care and entertainment applications.

The Purdue VR system, called Coterie, uses a novel way to manage the challenging task of rendering high-resolution virtual scenes while satisfying VR's stringent quality-of-experience (QoE) requirements. Those include a high frame rate and low motion-to-photon latency - the delay between the movement of the user's head or game controller and the corresponding update of the VR device's display. The new approach enables 4K-resolution VR on commodity mobile devices and allows up to 10 players to interact in the same VR application at a time.

"We have worked to create VR technology that someone can use on a typical smartphone with a Wi-Fi connection," said Y. Charlie Hu, the Michael and Katherine Birck Professor of Electrical and Computer Engineering, who led the Purdue team. "Our solution not only allows multiple players to participate in a VR game at the same time, but also provides a better and more cost-effective option for single-player use."

The technology is detailed in a paper published in ASPLOS 2020, an international conference for interdisciplinary systems research, intersecting computer architecture, hardware and emerging technologies, programming languages and compilers, operating systems and networking.

One reason for the heavy computational workload of high-resolution VR apps is the constant need to render updates to both the foreground interactions with the players and the background environment in the virtual world.

"The heavy load simply cannot be handled by even high-end smartphones alone," Hu said.

VR apps using Coterie split this heavy rendering task between the smartphone and an edge server over Wi-Fi in a way that drastically reduces the load on the phone while allowing the subframes rendered on each to be merged into the final frame within 16 ms, satisfying the VR QoE.
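
The division of labor can be sketched in a few lines. The Python example below is only an illustration of the idea (it is not Coterie's implementation, and the timings are invented): the background subframe is requested from an edge server while the phone renders the foreground, and the two are merged inside a 16 ms frame budget.

```python
import time
from concurrent.futures import ThreadPoolExecutor

FRAME_BUDGET_MS = 16.0  # roughly 60 frames per second

def render_foreground(pose):
    """Stand-in for rendering player and controller interactions on the phone."""
    time.sleep(0.004)                       # pretend this takes ~4 ms
    return f"foreground@{pose}"

def fetch_background(pose):
    """Stand-in for requesting a background subframe from an edge server over Wi-Fi."""
    time.sleep(0.008)                       # pretend network + server take ~8 ms
    return f"background@{pose}"

with ThreadPoolExecutor(max_workers=1) as pool:
    pose = "head_pose_t0"
    start = time.perf_counter()
    bg_future = pool.submit(fetch_background, pose)   # runs while the phone renders locally
    fg = render_foreground(pose)
    frame = (fg, bg_future.result())                  # merge the two subframes
    elapsed_ms = (time.perf_counter() - start) * 1000
    status = "within" if elapsed_ms <= FRAME_BUDGET_MS else "over"
    print(f"frame composed in {elapsed_ms:.1f} ms ({status} the 16 ms budget)")
```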

Hu said this approach not only reduces the network requirement so multiple players can share the same Wi-Fi, but also reduces the power draw and computation demand on each mobile device and provides a better user experience.

"Our technology opens the door for enterprise applications such as employee training, collaboration and operations, health care applications such as surgical training, as well as education and military applications," Hu said. "You could have multiple doctors and health care professionals interacting in a VR operating room."

Credit: 
Purdue University

Late Cretaceous dinosaur-dominated ecosystem

image: This mural was originally made for a recent Royal Ontario Museum exhibit about a fossil ankylosaur named Zuul crurivastator. That fossil is found within a couple of meters stratigraphically/temporally of the site described in this paper. The last author on the study, David Evans, is the dinosaur curator at the Royal Ontario Museum and was also involved in the description of Zuul and design of that exhibit.

Image: 
Danielle Dufault, Royal Ontario Museum.

Boulder, Colo., USA: A topic of considerable interest to paleontologists is how dinosaur-dominated ecosystems were structured, how dinosaurs and co-occurring animals were distributed across the landscape, how they interacted with one another, and how these systems compared to ecosystems today. In the Late Cretaceous (~100-66 million years ago), North America was bisected into western and eastern landmasses by a shallow inland sea. The western landmass (Laramidia) contained a relatively thin stretch of land running north-south, which was bordered by that inland sea to the east and the rising Rocky Mountains to the west. Along this ancient landscape of warm and wet coastal plains comes an extremely rich fossil record of dinosaurs and other extinct animals.

Yet, from this record, an unexpected pattern has been identified: Most individual basins preserve an abundant and diverse assemblage of dinosaur species, often with multiple groups of co-occurring large (moose- to elephant-sized) herbivorous species, yet few individual species occur across multiple putatively contemporaneous geological formations (despite them often being less than a few hundred kilometers apart). This is in fairly stark contrast to the pattern seen in modern terrestrial mammal communities, where large-bodied species often have very extensive, often continent-spanning ranges. It has therefore been suggested that dinosaurs (and specifically large herbivorous dinosaurs) were particularly sensitive to environmental differences over relatively small geographic distances (particularly with respect to distance from sea level), and may have even segregated their use of the landscape between more coastal and inland sub-habitats within their local ranges.

In their new study published in Geology, Thomas Cullen and colleagues sought to test some of these hypotheses as part of their broader research reconstructing the paleoecology of Late Cretaceous systems.

One of the methods they're using to do that is stable isotope analysis. This process measures differences in the compositions of non-decaying (hence, "stable") isotopes of various common elements, as the degree of difference in these compositions in animal tissues and in the environment have known relationships to various factors such as diet, habitat use, water source, and temperature. So the team applied these methods to fossilized teeth and scales from a range of animals, including dinosaurs, crocodilians, mammals, bony fish, and rays, all preserved together from a relatively small region over a geologically short period of time in sites called vertebrate microfossil bonebeds.

By analyzing the stable carbon and oxygen isotope compositions of these fossils they were able to reconstruct their isotopic distributions in this ecosystem--a proxy for their diets and habitat use. They found evidence of expected predator-prey dietary relationships among the carnivorous and herbivorous dinosaurs and among aquatic reptiles like crocodilians and co-occurring fish species.
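
A simplified way to picture the habitat-overlap comparison discussed below is to quantify how much the isotope ranges of different groups coincide. The sketch uses invented placeholder values (not the study's measurements) purely to show how such an overlap would be computed.

```python
# Invented placeholder isotope ranges (per mil) for two herbivore groups;
# not data from the study, just an illustration of the overlap comparison.
groups = {
    "hadrosaurs":   {"d13C": (-9.0, -5.5), "d18O": (18.0, 23.0)},
    "ceratopsians": {"d13C": (-8.5, -5.0), "d18O": (18.5, 23.5)},
}

def overlap(r1, r2):
    """Length of the overlap between two (min, max) ranges; 0 if they are disjoint."""
    return max(0.0, min(r1[1], r2[1]) - max(r1[0], r2[0]))

for isotope in ("d13C", "d18O"):
    a, b = groups["hadrosaurs"][isotope], groups["ceratopsians"][isotope]
    print(f"{isotope}: overlap = {overlap(a, b):.1f} per mil")
# Broad overlap in both isotopes argues against strict habitat segregation;
# clearly separated ranges would suggest the groups used different parts of the landscape.
```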

Critically, says Cullen, "What we didn't see was evidence for large herbivorous dinosaurs segregating their habitats, as the hadrosaurs, ceratopsians, and ankylosaurs we sample all had strongly overlapping stable carbon and oxygen ranges. If some of those groups were making near-exclusive use of certain parts of the broader landscape, such as ceratopsians sticking to coastal environments and hadrosaurs sticking to more inland areas, then we should see them grouping distinctly from each other. Since we didn't see that, that suggests they weren't segregating their resource use in this manner. It's possible they were doing so in different ways though, such as by feeding height segregation, or shifting where in the landscape they go seasonally, and our ongoing research is investigating some of these possibilities."

Another important part of their study was comparing the fossil results to an environmentally similar modern environment in order to examine how similar they are ecologically. For a modern comparison, they examined the animal communities of the Atchafalaya River Basin of Louisiana, the largest contiguous wetland area in the continental U.S. The landscape of this area is very similar to their Cretaceous system, as are many elements of the plant and animal communities (not including the non-avian dinosaurs, of course).

From their comparisons, the team found that the Cretaceous system was similar to the Louisiana one in having a very large amount of resource interchange between the aquatic and terrestrial components of the ecosystem, suggesting that fairly diverse, mixed diets were common and that obtaining food from both terrestrial and aquatic sources was the norm. They also found that habitat use differences among the herbivorous mammals in the Louisiana system were more distinct than among the large herbivorous dinosaurs in the Cretaceous system, lending further support to the finding that the dinosaurs lacked strict habitat-use preferences.

Lastly, the team used modified oxygen stable isotope temperature equations to estimate mean annual temperature ranges for both systems (with the Louisiana one being a test of the accuracy of the method, as they could compare their results to directly measured water and air temperatures). The team found that in their Late Cretaceous ecosystem in Alberta, mean annual temperature was about 16-20 degrees C, a bit cooler than modern day Louisiana, but much warmer than Alberta today, reflecting the hotter greenhouse climate that existed globally about 76 million years ago.

Characterizing how these ecosystems were structured during this time, and how these systems changed across time and space, particularly with respect to how they responded to changes in environmental conditions, may be of great importance for understanding and predicting future ecosystem responses under global climate change. The team's research continues and should reveal much more about the food webs and ecology of the dinosaurs and other organisms that inhabited these ancient landscapes.

Credit: 
Geological Society of America

NASA analyzes tropical cyclone Herold's water vapor concentration

image: On Mar. 18 at 5:40 a.m. EDT (0940 UTC), NASA's Aqua satellite passed over Tropical Cyclone Herold, located in the Southern Indian Ocean. Aqua found highest concentrations of water vapor (brown) and coldest cloud top temperatures were south of the center.

Image: 
NASA/NRL

When NASA's Aqua satellite passed over the Southern Indian Ocean on Mar. 18, it gathered water vapor data that showed wind shear was adversely affecting Tropical Cyclone Herold.

In general, wind shear is a measure of how the speed and direction of winds change with altitude. Tropical cyclones are like rotating cylinders of winds. Each level needs to be stacked on top of the other vertically in order for the storm to maintain strength or intensify. Wind shear occurs when winds at different levels of the atmosphere push against the rotating cylinder of winds, weakening the rotation by pushing it apart at different levels. Strong wind shear from the northwest is battering Herold and pushing the strongest storms away from the center of circulation. Northwesterly winds affecting the storm are estimated between 25 and 30 knots (29 to 35 mph/46 to 56 kph).
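
For readers who want the quantity made concrete: vertical wind shear is usually expressed as the magnitude of the vector difference between winds at two levels. The short sketch below computes that number from invented example winds (not the actual analysis values for Herold).

```python
import math

def wind_shear_knots(u_low, v_low, u_high, v_high):
    """Magnitude of the vector wind difference between two levels (knots in, knots out)."""
    return math.hypot(u_high - u_low, v_high - v_low)

# Invented example components: light low-level flow, strong upper-level northwesterlies.
low = (5.0, -2.0)      # u, v near the surface
high = (25.0, -20.0)   # u, v in the upper troposphere
print(f"deep-layer shear: {wind_shear_knots(*low, *high):.0f} knots")
# Values in the 25-30 knot range, as estimated for Herold, are strong enough
# to displace thunderstorms away from a tropical cyclone's center.
```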

What Water Vapor Reveals

NASA's Aqua satellite passed over Tropical Cyclone Herold on Mar. 18 at 5:40 a.m. EDT (0940 UTC) and the Moderate Resolution Imaging Spectroradiometer or MODIS instrument gathered water vapor content and temperature information. The MODIS data showed highest concentrations of water vapor and coldest cloud top temperatures were pushed about 48 nautical miles southeast of the center of circulation.

MODIS data showed coldest cloud top temperatures were as cold as or colder than minus 70 degrees Fahrenheit (minus 56.6 degrees Celsius) in those storms. Storms with cloud top temperatures that cold have the capability to produce heavy rainfall.

Water vapor analysis of tropical cyclones tells forecasters how much potential a storm has to develop. Water vapor releases latent heat as it condenses into liquid. That liquid becomes the clouds and thunderstorms that make up a tropical cyclone. Temperature is important when trying to understand how strong storms can be. The higher the cloud tops, the colder and the stronger the storms.

Herold's Status

On Wednesday, March 18, 2020 at 5 a.m. EDT (0900 UTC), the Joint Typhoon Warning Center or JTWC noted that Herold's maximum sustained winds had dropped significantly over the previous 24 hours, and the storm had weakened from hurricane force to tropical-storm force. Maximum sustained winds were near 55 knots (63 mph/102 kph). Herold was centered near latitude 22.7 degrees south and longitude 66.1 degrees east, about 516 nautical miles east-southeast of Port Louis, Mauritius. Herold was moving to the southeast.

JTWC forecasters expect Herold to continue moving southeast and further away from land areas while continuing to weaken. Forecasters noted that the storm is becoming subtropical but could dissipate within a day or two before it completes that transition.

What is a Sub-tropical Storm?

According to the National Oceanic and Atmospheric Administration, a sub-tropical storm is a low-pressure system that is not associated with a frontal system and has characteristics of both tropical and extratropical cyclones. Like tropical cyclones, they are non-frontal, originate over tropical or subtropical waters, and have a closed surface wind circulation about a well-defined center.

Unlike tropical cyclones, subtropical cyclones derive a significant proportion of their energy from baroclinic sources (horizontal temperature contrasts in the atmosphere), and are generally cold-core in the upper troposphere, often being associated with an upper-level low-pressure area or an elongated trough of low pressure.

NASA's Aqua satellite is one in a fleet of NASA satellites that provide data for hurricane research.

Tropical cyclones/hurricanes are the most powerful weather events on Earth. NASA's expertise in space and scientific exploration contributes to essential services provided to the American people by other federal agencies, such as hurricane weather forecasting.

Credit: 
NASA/Goddard Space Flight Center

How people investigate -- or don't -- fake news on Twitter and Facebook

image: Participants had various reactions to encountering a fake post: Some outright ignored it, some took it at face value, some investigated whether it was true, and some were suspicious of it but then chose to ignore it.

Image: 
Franziska Roesner/University of Washington

Social media platforms, such as Facebook and Twitter, provide people with a lot of information, but it's getting harder and harder to tell what's real and what's not.

Researchers at the University of Washington wanted to know how people investigated potentially suspicious posts on their own feeds. The team watched 25 participants scroll through their Facebook or Twitter feeds while, unbeknownst to them, a Google Chrome extension randomly added debunked content on top of some of the real posts. Participants had various reactions to encountering a fake post: Some outright ignored it, some took it at face value, some investigated whether it was true, and some were suspicious of it but then chose to ignore it. These results have been accepted to the 2020 ACM CHI conference on Human Factors in Computing Systems.

"We wanted to understand what people do when they encounter fake news or misinformation in their feeds. Do they notice it? What do they do about it?" said senior author Franziska Roesner, a UW associate professor in the Paul G. Allen School of Computer Science & Engineering. "There are a lot of people who are trying to be good consumers of information and they're struggling. If we can understand what these people are doing, we might be able to design tools that can help them."

Previous research on how people interact with misinformation asked participants to examine content from a researcher-created account, not from someone they chose to follow.

"That might make people automatically suspicious," said lead author Christine Geeng, a UW doctoral student in the Allen School. "We made sure that all the posts looked like they came from people that our participants followed."

The researchers recruited participants ages 18 to 74 from across the Seattle area, explaining that the team was interested in seeing how people use social media. Participants used Twitter or Facebook at least once a week and often used the social media platforms on a laptop.

Then the team developed a Chrome extension that would randomly add fake posts or memes that had been debunked by the fact-checking website Snopes.com on top of real posts to make it temporarily appear they were being shared by people on participants' feeds. So instead of seeing a cousin's post about a recent vacation, a participant would see their cousin share one of the fake stories instead.

The researchers either installed the extension on the participant's laptop or the participant logged into their accounts on the researcher's laptop, which had the extension enabled. The team told the participants that the extension would modify their feeds -- the researchers did not say how -- and would track their likes and shares during the study -- though, in fact, it wasn't tracking anything. The extension was removed from participants' laptops at the end of the study.

"We'd have them scroll through their feeds with the extension active," Geeng said. "I told them to think aloud about what they were doing or what they would do if they were in a situation without me in the room. So then people would talk about 'Oh yeah, I would read this article,' or 'I would skip this.' Sometimes I would ask questions like, 'Why are you skipping this? Why would you like that?'"

Participants could not actually like or share the fake posts. On Twitter, a "retweet" would share the real content beneath the fake post. The one time a participant did retweet content under the fake post, the researchers helped them undo it after the study was over. On Facebook, the like and share buttons didn't work at all.

After the participants encountered all the fake posts -- nine for Facebook and seven for Twitter -- the researchers stopped the study and explained what was going on.

"It wasn't like we said, 'Hey, there were some fake posts in there.' We said, 'It's hard to spot misinformation. Here were all the fake posts you just saw. These were fake, and your friends did not really post them,'" Geeng said. "Our goal was not to trick participants or to make them feel exposed. We wanted to normalize the difficulty of determining what's fake and what's not."

The researchers concluded the interview by asking participants to share what types of strategies they use to detect misinformation.

In general, the researchers found that participants ignored many posts, especially those they deemed too long, overly political or not relevant to them.

But certain types of posts made participants skeptical. For example, people noticed when a post didn't match someone's usual content. Sometimes participants investigated suspicious posts -- by looking at who posted it, evaluating the content's source or reading the comments below the post -- and other times, people just scrolled past them.

"I am interested in the times that people are skeptical but then choose not to investigate. Do they still incorporate it into their worldviews somehow?" Roesner said. "At the time someone might say, 'That's an ad. I'm going to ignore it.' But then later do they remember something about the content, and forget that it was from an ad they skipped? That's something we're trying to study more now."

While this study was small, it does provide a framework for how people react to misinformation on social media, the team said. Now researchers can use this as a starting point to seek interventions to help people resist misinformation in their feeds.

"Participants had these strong models of what their feeds and the people in their social network were normally like. They noticed when it was weird. And that surprised me a little," Roesner said. "It's easy to say we need to build these social media platforms so that people don't get confused by fake posts. But I think there are opportunities for designers to incorporate people and their understanding of their own networks to design better social media platforms."

Credit: 
University of Washington

An advance in molecular moviemaking shows how molecules respond to two photons of light

image: A diffraction pattern made by X-rays scattering off an iodine molecule into a detector at SLAC National Accelerator Laboratory. Hundreds of these patterns from the lab's X-ray free-electron laser were strung together to create a "molecular movie" showing how the molecules responded in unexpected ways when hit with two photons of light at once. Scientists say this new approach should work with bigger and more complex molecules, too.

Image: 
Bucksbaum group/PULSE Institute

Over the past few years, scientists have developed amazing tools - "cameras" that use X-rays or electrons instead of ordinary light - to take rapid-fire snapshots of molecules in motion and string them into molecular movies.

Now scientists at the Department of Energy's SLAC National Accelerator Laboratory and Stanford University have added another twist: By tuning their lasers to hit iodine molecules with two photons of light at once instead of the usual single photon, they triggered totally unexpected phenomena that were captured in slow-motion movies just trillionths of a second long.

The first movie they made with this approach, described March 17 in Physical Review X, shows how the two atoms in an iodine molecule jiggle back and forth, as if connected by a spring, and sometimes fly apart when hit by intense laser light. The action was captured by the lab's Linac Coherent Light Source (LCLS) hard X-ray free-electron laser. Some of the molecules' responses were surprising and others had been seen before with other techniques, the researchers said, but never in such detail or so directly, without relying on advance knowledge of what they should look like.

Preliminary looks at bigger molecules that contain a variety of atoms suggest they can also be filmed this way, the researchers added, yielding new insights into molecular behavior and filling a gap where previous methods fall short.

"The picture we got this way was very rich," said Philip Bucksbaum, a professor at SLAC and Stanford and investigator with the Stanford PULSE Institute, who led the study with PULSE postdoctoral scientist Matthew Ware. "The molecules gave us enough information that you could actually see atoms move over distances of less than an angstrom - which is about the width of two hydrogen atoms - in less than a trillionth of a second. We need a very fast shutter speed and high resolution to see this level of detail, and right now those are only possible with a hard X-ray free-electron laser like LCLS."

Double-barreled photons

Iodine molecules are a favorite subject for this kind of investigation because they're simple - just two atoms connected by a springy chemical bond. Previous studies, for instance with SLAC's "electron camera," have probed their response to light. But until now those experiments have been set up to initiate motion in molecules using single photons, or particles of light.

In this study, researchers tuned the intensity and color of an ultrafast infrared laser so that about a tenth of the iodine molecules would interact with two photons of light - enough to set them vibrating, but not enough to strip off their electrons.

Each hit was immediately followed by an X-ray laser pulse from LCLS, which scattered off the iodine's atomic nuclei and into a detector to record how the molecule reacted. By varying the timing between the light and X-ray pulses, scientists created a series of snapshots that were combined into a stop-action movie of the molecule's response, with frames just 50 femtoseconds, or millionths of a billionth of a second, apart.
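
To make that stop-action scheme concrete, here is a minimal, purely illustrative Python sketch of a pump-probe delay scan: the X-ray "probe" is stepped out in 50-femtosecond increments after the infrared "pump", and one frame is recorded per delay. The observable, the damped-oscillation model of the bond length, and every numerical constant below are hypothetical stand-ins for illustration only, not the team's actual LCLS control or analysis code.

# Illustrative sketch only: a toy pump-probe delay scan, not the actual
# LCLS control or analysis software. All numbers and names are hypothetical
# stand-ins chosen to mirror the description in the text.
import math

FRAME_SPACING_FS = 50          # delay step between snapshots, as described
N_FRAMES = 40                  # number of snapshots in the toy "movie"

def toy_bond_length(delay_fs):
    """Pretend I-I bond length (in angstroms) at a given pump-probe delay.

    Models the vibration as a damped oscillation about the roughly 2.7 A
    equilibrium separation of molecular iodine; purely illustrative.
    """
    equilibrium = 2.7          # approximate I2 bond length, angstroms
    amplitude = 0.3            # assumed vibration amplitude, angstroms
    period_fs = 160.0          # roughly the I2 vibrational period
    decay_fs = 2000.0          # assumed damping time
    phase = 2 * math.pi * delay_fs / period_fs
    return equilibrium + amplitude * math.exp(-delay_fs / decay_fs) * math.cos(phase)

# Step the probe out in 50 fs increments after the pump and record one
# frame per delay, exactly as the stop-action scheme above describes
# (but with a fake observable instead of a diffraction pattern).
movie = [(d, toy_bond_length(d))
         for d in range(0, N_FRAMES * FRAME_SPACING_FS, FRAME_SPACING_FS)]

for delay_fs, bond_length in movie[:5]:
    print(f"delay = {delay_fs:4d} fs   bond length ≈ {bond_length:.2f} Å")

Strung together, the (delay, bond length) pairs play like the stop-action movie described above: one frame every 50 femtoseconds, spanning a couple of picoseconds of molecular motion.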

The researchers knew going in that hitting the iodine molecules with more than one photon at a time would provoke what's known as a nonlinear response, which can veer off in surprising directions. "We wanted to look at something more challenging, stuff we could see that might not be what we planned," as Bucksbaum put it. And that in fact is what they found.

Unexpected vibes

The results revealed that the light's energy did set off vibrations, as expected, with the two iodine atoms rapidly approaching and pulling away from each other. "It's a really big effect, and of course we saw it," Bucksbaum said.

But another, much weaker type of vibration also showed up in the data, "a process that's weak enough that we hadn't expected to see it," he said. "That confirms the discovery potential of this technique."

They were also able to see how far apart the atoms were and which way they headed at the very start of each vibration - either compressing or extending the bond between them - as well as how long each type of vibration lasted.

In just a few percent of the molecules, the light pulses sent the iodine atoms flying apart rather than vibrating, shooting off in opposite directions at either fast or slow speeds. As with the vibrations, the fast flyoffs were expected, but the slow ones were not.

Bucksbaum said he expects that chemists and materials scientists will be able to make good use of these techniques. Meanwhile, his team and others at the lab will continue to focus on developing tools to see more and more things going on in molecules and understand how they move. "That's the goal here," he said. "We're the cinematographers, not the writers, producers or actors. The value in what we do is to enable all those other things to happen, working in partnership with other scientists."

Credit: 
DOE/SLAC National Accelerator Laboratory

New COVID-19 info for gastroenterologists and patients

Bethesda, Maryland (March 18, 2020) -- A paper published today in Clinical Gastroenterology and Hepatology by clinicians at Icahn School of Medicine at Mount Sinai outlines key information gastroenterologists and patients with chronic digestive conditions need to know about COVID-19, the disease caused by the novel coronavirus.

Coronavirus is of particular concern for patients with inflammatory bowel disease (IBD) who may be taking immunosuppressive drugs. The paper, published in a journal of the American Gastroenterological Association, provides clear guidance:

Patients on immunosuppressive drugs for IBD should continue taking their medications. The risk of a disease flare far outweighs the chance of contracting coronavirus. These patients should also follow CDC guidelines for at-risk groups: avoid crowds and limit travel.

"This is a rapidly evolving area with new information emerging on a daily basis," says Ryan Ungaro, MD, MS, assistant professor of medicine at Icahn School of Medicine at Mount Sinai. "While COVID-19 is a significant global public health concern, it is important to keep its risks in perspective and stay up-to-date on current research and recommendations in order to provide our patients with the most accurate advice."

Credit: 
American Gastroenterological Association

To prevent tick encounters, where you dump your leaves matters

image: While many homeowners heed the advice to clear their lawns of fallen leaves in autumn to avoid creating tick-friendly habitat in high-use areas, a new study on tick abundance in leaf litter says raking or blowing leaves just out to the forest edge is not enough. In fact, dumping leaves where grass meets woods may inadvertently create an ideal habitat for blacklegged ticks (Ixodes scapularis, adult female shown here).

Image: 
Flickr user Lennart Tange, CC BY 2.0

Annapolis, MD; March 18, 2020--If you cleared fallen leaves from your lawn last fall, did you deposit them along the edge of your lawn, where grass meets woods? If you did, you might have unwittingly created an ideal habitat for blacklegged ticks.

In areas of the United States where ticks that carry Lyme disease-causing bacteria are prevalent, residential properties often intermingle with forested areas, and ticks thrive in the "edge habitats" where lawn and woods meet. While many homeowners heed the advice to clear their lawns of fallen leaves in autumn to avoid creating tick-friendly habitat in high-use areas, a new study on tick abundance in leaf litter says raking or blowing leaves just out to the forest edge is not enough.

"Our study showed that the common fall practice of blowing or raking leaves removed from lawns and landscaping to the immediate lawn/woodland edges can result in a three-fold increase in blacklegged tick numbers in these areas the following spring," says Robert Jordan, Ph.D., research scientist at the Monmouth County (New Jersey) Mosquito Control Division and co-author of the study published today in the Journal of Medical Entomology.

Instead, Jordan and co-author Terry Schulze, Ph.D., an independent medical entomologist, suggest homeowners either take advantage of municipal curbside leaf pickup (if available), compost their leaves, or remove leaves to a location further into the woods or further away from high-use areas on their property. "The thing homeowners need to keep in mind is that accumulations of leaves and other plant debris provide ideal host-seeking and survival conditions for immature blacklegged ticks," says Jordan.

In their new study, Jordan and Schulze set up test plots on three residential properties in Monmouth County, New Jersey, in the fall of 2017 and 2018. Each property had plots at both the forest edge and deeper within the wooded area. Some edge plots were allowed to accumulate leaves naturally, while others received additional leaves via periodic raking or leaf blowing. These "managed" edge plots ended up with leaf-litter depths two to three times those of the natural edge and forest plots.

The researchers then compared the presence of nymphal (juvenile) blacklegged ticks (Ixodes scapularis) and lone star ticks (Amblyomma americanum) in the test plots the following spring. In both years, the results for lone star tick nymphs were inconsistent, but the number of blacklegged tick nymphs in the managed edge plots was approximately three times that of the natural edge and forest plots.

"While we expected to see more ticks along lawn edges with deeper leaf-litter accumulation, we were surprised about the magnitude of the increase in ticks that resulted from leaf blowing or raking," Jordan says.

Fallen leaves provide blacklegged ticks with suitable habitat via higher humidity and lower temperatures within the leaf litter, as well as protection from exposure over winter. Previous research, meanwhile, has shown that people more commonly encounter ticks on their own properties than in parks or natural areas. And that, Jordan says, is a major reason why he and Schulze have been evaluating a variety of residential tick-prevention strategies in recent years. Landscape management is an important--and affordable--strategy to keep ticks at bay, he says.

"On properties with considerable leaf fall, the best option would be complete removal of leaves from areas most frequently used--such as lawns, outdoor seating areas, and in and around play sets," Jordan says. "If this is not possible or practical, leaf piles should be placed in areas least frequently used. Where neither of these options is possible, or where leaf fall is minimal, mulching in place may be a good option, since this encourages rapid decomposition of leaves, which may reduce habitat suitability for ticks."

Credit: 
Entomological Society of America

Shedding light on optimal materials for harvesting sunlight underwater

image: This image shows an organic solar cell. Organic solar cells are likely candidates for underwater applications because they can be made water resistant and perform well in low-light conditions.

Image: 
Allison Kalpakci

There may be many overlooked organic and inorganic materials that could be used to harness sunlight underwater and efficiently power autonomous submersible vehicles, report researchers at New York University. Their research, published March 18 in the journal Joule, develops guidelines for optimal band gap values at a range of water depths, demonstrating that various wide-band gap semiconductors, rather than the narrow-band gap semiconductors used in traditional silicon solar cells, are best equipped for underwater use.

"So far, the general trend has been to use traditional silicon cells, which we show are far from ideal once you go to a significant depth since silicon absorbs a large amount of red and infrared light, which is also absorbed by water--especially at large depths," says Jason A. Röhr, a postdoctoral research associate in Prof. André D. Taylor's Transformative Materials and Devices laboratory at the Tandon School of Engineering at New York University and an author on the study. "With our guidelines, more optimal materials can be developed."

Underwater vehicles, such as those used to explore the abyssal ocean, are currently limited by onshore power or inefficient on-board batteries, preventing travel over longer distances and periods of time. But while solar cell technology that has already taken off on land and in outer space could give these submersibles more freedom to roam, the watery world presents unique challenges. Water scatters and absorbs much of the visible light spectrum, soaking up red solar wavelengths even at shallow depths before silicon-based solar cells would have a chance to capture them.

Most previous attempts to develop underwater solar cells have been constructed from silicon or amorphous silicon, which each have narrow band gaps best suited for absorbing light on land. However, blue and yellow light manages to penetrate deep into the water column even as other wavelengths diminish, suggesting semiconductors with wider band gaps not found in traditional solar cells may provide a better fit for supplying energy underwater.
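
As a rough illustration of why depth favors wider band gaps, the short sketch below applies simple Beer-Lambert attenuation, I(z) = I0 * exp(-a * z), to three representative wavelengths. The absorption coefficients are approximate values for clear water, used here purely as assumptions, and scattering is ignored; this is our illustrative sketch, not a calculation from the study.

# Illustrative sketch only: how wavelength-dependent absorption in water
# (Beer-Lambert attenuation) leaves mostly blue-green light at depth.
# The coefficients below are rough values for clear water, assumed for
# illustration; scattering is ignored.
import math

# approximate absorption coefficients of clear water, in 1/m
ABSORPTION_PER_M = {
    "blue (450 nm)":  0.01,
    "green (550 nm)": 0.06,
    "red (700 nm)":   0.6,
}

def transmitted_fraction(absorption_per_m, depth_m):
    """Fraction of surface light remaining at a given depth (absorption only)."""
    return math.exp(-absorption_per_m * depth_m)

for depth in (2, 10, 50):
    parts = ", ".join(
        f"{name}: {transmitted_fraction(a, depth) * 100:.1f}%"
        for name, a in ABSORPTION_PER_M.items()
    )
    print(f"depth {depth:>2} m -> {parts}")

Even in this simplified picture, red light is essentially gone within a few meters while blue-green light persists to tens of meters, which is why absorbers tuned to those shorter wavelengths, and hence wider band gaps, come out ahead underwater.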

To better understand the potential of underwater solar cells, Röhr and colleagues assessed bodies of water ranging from the clearest regions of the Atlantic and Pacific oceans to a turbid Finnish lake, using a detailed-balance model to calculate the efficiency limits for solar cells at each location. Solar cells were shown to harvest energy from the sun down to depths of 50 meters in Earth's clearest bodies of water, with chilly waters further boosting the cells' efficiency.

The researchers' calculations revealed that solar cell absorbers would function best with an optimum band gap of about 1.8 electronvolts at a depth of two meters and about 2.4 electronvolts at a depth of 50 meters. These values remained consistent across all water sources studied, suggesting the solar cells could be tailored to specific operating depths rather than water locations.
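
A quick back-of-the-envelope check, ours rather than the paper's, connects those optimal band gaps to the light that actually survives at depth, using the standard photon-energy relation E [eV] ≈ 1240 / lambda [nm]:

# Back-of-the-envelope check (not from the paper): convert the reported
# optimal band gaps into the longest wavelength such an absorber can use,
# via the photon-energy relation E [eV] ≈ 1240 / lambda [nm].
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def cutoff_wavelength_nm(band_gap_ev):
    """Longest photon wavelength a semiconductor with this band gap absorbs."""
    return HC_EV_NM / band_gap_ev

for depth_m, gap_ev in ((2, 1.8), (50, 2.4)):
    print(f"{gap_ev} eV (optimal at ~{depth_m} m) -> cutoff ≈ "
          f"{cutoff_wavelength_nm(gap_ev):.0f} nm")

# 1.8 eV gives a cutoff near 690 nm (the red edge) and 2.4 eV near 517 nm
# (green), consistent with only blue-green light surviving at depth.
# Silicon's ~1.1 eV gap corresponds to a cutoff around 1100 nm, so much of
# the red and near-infrared light it relies on is exactly what water
# strips out first.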

Röhr notes that cheaply produced solar cells made from organic materials, which are known to perform well under low-light conditions, as well as alloys made with elements from groups three and five on the periodic table could be ideal in deep waters. And while the substance of the semiconductors would differ from solar cells used on land, the overall design would remain relatively similar.

"While the sun-harvesting materials would have to change, the general design would not necessarily have to change all that much," says Röhr. "Traditional silicon solar panels, like the ones you can find on your roof, are encapsulated to prohibit damage from the environment. Studies have shown that these panels can be immersed and operated in water for months without sustaining significant damage to the panels. Similar encapsulation methods could be employed for new solar panels made from optimal materials."

Now that they have uncovered what makes effective underwater solar cells tick, the researchers plan to begin developing optimal materials.

"This is where the fun begins!" says Röhr. "We have already investigated unencapsulated organic solar cells which are highly stable in water, but we still need to show that these cells can be made more efficient than traditional cells. Given how capable our colleagues around the world are, we are sure that we will see these new and exciting solar cells on the market in the near future."

Credit: 
Cell Press