Tech

Chip-based device opens new doors for augmented reality and quantum computing

video: The researchers showed that the chip-based optical phased array can steer blue light over a 50-degree field of view.

Image: 
Min Chul Shin and Aseema Mohanty, Columbia University

WASHINGTON -- Researchers have designed a new chip-based device that can shape and steer blue light with no moving parts. The device could greatly reduce the size of light projection components used for augmented reality and a variety of other applications.

"Our blue phased array platform can rapidly and precisely reconfigure visible light for many emerging applications, spanning holographic displays, quantum information processing and biological sensing and stimulation," said research team leader Michal Lipson from Columbia University. "It paves the way for chip-scale light projection across the entire visible range with a large field of view and can miniaturize the current bulky optical systems."

Lipson and colleagues describe the new device in The Optical Society (OSA) journal Optics Letters. It is the first chip-scale optical phased array (OPA) operating at blue wavelengths on a silicon nitride platform. OPAs act like reconfigurable lenses, shaping and steering light into arbitrary 3D patterns.

The new OPA was developed as part of a DARPA-funded project that aims to create a lightweight, low-power head-mounted display that projects visible information onto the retina with extremely high resolution and a large field of view. This type of augmented reality display isn't possible today because the light projection components used to shape and steer light are bulky and have a limited field of view.

Operating in the visible

OPAs offer an alternative to bulky light projection devices but are typically made using silicon, which can only be used with near-infrared wavelengths. Blue wavelengths require OPAs made from a semiconductor material such as silicon nitride that operates at visible wavelengths. However, fabrication and material challenges have made a practical blue OPA difficult to achieve.

The researchers recently optimized silicon nitride fabrication processes to overcome this challenge. In the new work, they applied this new platform to create a chip-based OPA.

"Smaller wavelengths scatter more, resulting in higher light loss if the device fabrication is not perfect," said Min Chul Shin, co-first author of the paper. "Therefore, demonstrating an OPA that operates at blue wavelengths means we can achieve this across the entire visible range."

Using the new blue light OPAs, the researchers demonstrated beam steering over a 50-degree field of view. They also showed the potential benefits of this type of platform for image projection by generating 2D images of letters.
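For readers unfamiliar with how a phased array steers a beam, the basic relation is the standard one for any uniform emitter array (given here as general background, not as a formula taken from the Optics Letters paper): a fixed phase increment between neighbouring emitters tilts the outgoing wavefront.

```latex
% Standard phased-array steering relation (general background, not from the paper):
% for emitter pitch d, wavelength \lambda and a phase step \Delta\phi between
% neighbouring emitters, the main beam leaves at angle \theta with
\[
  \sin\theta \;=\; \frac{\lambda\,\Delta\phi}{2\pi d}.
\]
% Reprogramming the per-emitter phases therefore steers the beam with no moving
% parts; shorter (blue) wavelengths demand finer emitter spacing and phase
% control to reach a wide field of view.
```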

"All of the chips we have tested worked well," said Aseema Mohanty, co-first author of the paper. "Large-scale integration of this system can be accomplished using today's lithography techniques. Thus, this new platform introduces a platform for fully reconfigurable chip-scale 3D volumetric light projection across the entire visible range."

Applications from computing to biology

The new blue OPA could be useful for trapped ion quantum computers, which require lasers in the visible spectral range for micron-scale optical stimulation. Trapped ion quantum computers are among the most promising practical designs for quantum computing, an emerging technology expected to be significantly faster than traditional computing.

The new chip-based devices could also be used for optogenetics, which uses visible light to control neurons and other cells in living tissue. For example, the devices could be used to make an implantable device to stimulate light-sensitive tags on neurons in animal models of disease.

The researchers plan to further optimize the OPA's electrical power consumption because low-power operation is crucial for lightweight head-mounted augmented reality displays and optogenetic applications.

Credit: 
Optica

'Domiciled' feeding studies will lead to new discoveries in human nutrition science

The public's confusion around what constitutes a healthy diet is related in part to nutrition studies that do not standardize the foods consumed by research participants or participants' adherence to their diet programs, argues Kevin Hall in this Perspective. According to Hall, domiciled feeding studies - which involve subjects being housed and fed in comfortable yet controlled facilities - could greatly improve the understanding of dietary influences on health. While consensus exists regarding fundamental aspects of what constitutes a healthy diet, that consensus quickly erodes when more detailed questions of optimal human nutrition are discussed. Recently, some have questioned whether modern nutrition science is up to the task of answering such questions. Exacerbating the issue, says Hall, is the lack of carefully controlled studies to explore it. "...imagine trying to develop a new drug without being confident that investigators could administer quantities of the drug to subjects or objectively measure its dose-response," he writes. Many randomized diet studies - which often hinge on subjective, participant-reported metrics and rarely verify or directly monitor the consumption of on- or off-study foods - do not reveal the effects of consuming different diets, but rather the effects of people following differing diet advice. The results of such studies "conflate diet adherence with the effects of the diet," he says. Hall argues that we need to facilitate more human nutrition studies in which subjects can comfortably reside continuously at a research facility, allowing investigators to control and objectively measure their food intake. Well-designed domiciled feeding studies could help elucidate the fundamental mechanisms by which known diet changes affect us all, providing a bevy of information on the complex interactions between diet changes, the microbiota and host physiology, for example, as well as providing standardized diet biomarkers and test technology that can be used in longitudinal diet studies to assess diet-disease associations. Hall also notes the limitations of such studies, acknowledging that "long-term nutrition studies in free-living people will always be required."

Credit: 
American Association for the Advancement of Science (AAAS)

Hayabusa2's big 'impact' on understanding asteroid Ryugu's age and surface cohesion

After an explosive device on the Hayabusa2 spacecraft fired a copper cannonball a bit larger than a tennis ball into the near-Earth asteroid Ryugu, creating an artificial impact crater on it, researchers understand more about the asteroid's age and composition, they say. The results may inform efforts to make surface age estimations of other rubble-pile asteroids. When Hayabusa2 launched an orbital bombardment upon the tiny world of Ryugu - a rocky, nearby asteroid orbiting between Earth and Mars - it was to expose the pristine subsurface material for remote sensing and sampling. The impact produced by Hayabusa2's Small Carry-on Impactor (SCI) blasted a nearly 10-meter-wide hole into Ryugu's rubble-pile surface. The result was an artificial crater and accompanying plume of ejected material, which was captured in detail by the spacecraft's cameras. The number and size of craters that pepper asteroids like Ryugu can be used in further studies to determine the age and properties of asteroid surfaces. However, these studies require a general understanding of the nature of how such craters form - the crater scaling laws - which are often derived from laboratory experiments or numerical simulations. Here, Masahiko Arakawa and colleagues use observations from the SCI impact experiment to test crater scaling laws and characterize the surface of Ryugu. Arakawa et al. describe the artificial impact crater as semicircular with an elevated rim, a central pit, and an asymmetric pattern of ejecta, perhaps due to a large boulder buried near the impact location. The results suggest that the crater grew in the gravity-dominated regime, where crater size is limited by the local gravitational field, a finding that has implications for surface age estimations of rubble-pile asteroids, the authors say. Based on existing models, two estimates of the surface age of Ryugu have been obtained, and the authors' conclusions of how the SCI crater was formed support the younger age estimate, they say. What's more, Arakawa et al. suggest that the surface of Ryugu is composed of a cohesionless material similar to loose sand.
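As general context for the "crater scaling laws" mentioned above, impact studies commonly use point-source scaling of the Housen-Holsapple type. The form below is a textbook illustration of the gravity-dominated regime, not the specific relation fitted by Arakawa et al., and the exponent range is an assumption typical of loose, sand-like targets.

```latex
% Illustrative gravity-regime crater scaling (textbook form, not the authors' fit):
% a projectile of radius a striking at speed U a target with surface gravity g
% opens a crater of radius R roughly following
\[
  \frac{R}{a} \;\propto\; \left(\frac{g\,a}{U^{2}}\right)^{-\beta},
  \qquad \beta \approx 0.17\text{--}0.22 ,
\]
% so, for a given impactor, weak gravity combined with negligible material
% strength (a cohesionless, sand-like surface) yields a larger crater -- the
% regime the SCI experiment indicates for Ryugu.
```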

Credit: 
American Association for the Advancement of Science (AAAS)

Nurse practitioner clinical settings key to delivery of patient-centered care

PHILADELPHIA (March 18, 2020) - It's long been understood that care that respects and integrates the wants, needs, and preferences of patients results in higher ratings of satisfaction and improved health outcomes. Yet, several barriers still often impede the delivery of patient-centered care. A new study from the University of Pennsylvania School of Nursing (Penn Nursing) shows that organizational supports for nurse practitioners (NPs) can enhance their ability to deliver patient-centered care.

By surveying more than 1,700 NPs in four states, the investigators learned that NPs who work in good practice environments were significantly more likely to integrate patient preferences into care compared to those working in a mixed or poor environment. Good practice environments included adequate resources and better relationships with healthcare team members and administration. In these environments, NPs worked more autonomously and were enabled to guide care that was tailored and relevant to the individual, family, or community, the research shows.

"The delivery of patient-centered care is dependent on each healthcare team member practicing to the top of their license," says Penn Nursing's J. Margo Brooks Carthon, PhD, RN, FAAN, Associate Professor and lead investigator of the study. "Organizations that constrain NP practice fall counter to the innovative models of care delivery that emphasize maximizing the value of team members across all levels of the delivery system."

Addressing organizational culture represents an actionable strategy that can help facilitate NP delivery of patient-centered care. "We emphasize the organizational supports offered in NP clinical settings because, unlike the size or location of practices, clinical environments represent modifiable aspects of healthcare organizations," says Brooks Carthon.

Credit: 
University of Pennsylvania School of Nursing

Nature-inspired green energy technology clears important development hurdle

image: A sample of the solar fuel tile material, made by atomic layer deposition at Berkeley Lab's Molecular Foundry.

Image: 
Marilyn Sargent/Berkeley Lab

Scientist Heinz Frei has spent decades working toward building an artificial version of one of nature's most elegant and effective machines: the leaf.

Frei, and many other researchers around the world, seek to use photosynthesis - the sunlight-driven chemical reaction that green plants and algae use to convert carbon dioxide (CO2) into cellular fuel - to generate the kinds of fuel that can power our homes and vehicles. If the necessary technology can be refined past theoretical models and lab-scale prototypes, this moonshot idea, known as artificial photosynthesis, has the potential to generate completely renewable energy on a large scale using the surplus CO2 in our atmosphere.

With their latest advance, Frei and his team at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) are now closing in on this goal. The scientists have developed an artificial photosynthesis system, made of nanosized tubes, that appears capable of performing all the key steps of the fuel-generating reaction.

Their latest paper, published in Advanced Functional Materials, demonstrates that their design allows for the rapid flow of protons from the interior space of the tube, where they are generated from splitting water molecules, to the outside, where they combine with CO2 and electrons to form the fuel. That fuel is currently carbon monoxide, but the team is working toward making methanol. Fast proton flow, which is essential for efficiently harnessing sunlight energy to form a fuel, has been a thorn in the side of past artificial photosynthesis systems.

Now that the team has showcased how the tubes can perform all the photosynthetic tasks individually, they are ready to begin testing the complete system. The individual unit of the system will be small square "solar fuel tiles" (several inches on a side) containing billions of the nanoscale tubes sandwiched between a floor and ceiling of thin, slightly flexible silicate, with the tube openings piercing through these covers. Frei is hopeful that his group's tiles could be the first to address the major hurdles still facing this type of technology.

"There are two challenges that have not yet been met," said Frei, who is a senior scientist in Berkeley Lab's Biosciences Area. "One of them is scalability. If we want to keep fossil fuels in the ground, we need to be able to make energy in terawatts - an enormous amount of fuel. And, you need to make a liquid hydrocarbon fuel so that we can actually use it with the trillions of dollars' worth of existing infrastructure and technology."

He noted that once a model meeting these requirements is made, building a solar fuel farm out of many individual tiles could proceed quickly. "We, as basic scientists, need to deliver a tile that works, with all questions about its performance settled. And engineers in industry know how to connect these tiles. When we've figured out square inches, they'll be able to make square miles."

How it works

Each tiny (about 0.5 micrometer wide), hollow tube inside the tile is made of three layers: an inner layer of cobalt oxide, a middle layer of silica, and an outer layer of titanium dioxide. In the inner layer of the tube, energy from sunlight delivered to the cobalt oxide splits water (in the form of moist air that flows through the inside of each tube), producing free protons and oxygen.

"These protons easily flow through to the outer layer, where they combine with carbon dioxide to form carbon monoxide now - and methanol in a future step - in a process enabled by a catalyst supported by the titanium dioxide layer," said Won Jun Jo, a postdoctoral fellow and first author of the paper. "The fuel gathers in the space between tubes, and can be easily drained out for collection."

Importantly, the middle layer of the tube wall keeps the oxygen produced from water oxidation in the interior of the tube, and blocks the carbon dioxide and the evolving fuel molecules on the outside from permeating into the interior, thereby separating the two very incompatible chemical reaction zones.
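In textbook terms, the two incompatible reactions being kept apart are the standard water-oxidation and CO2-to-CO reduction half-reactions (shown here as the generic equations, not as a reaction scheme reproduced from the paper):

```latex
% Generic half-reactions illustrating the two separated reaction zones:
\begin{align*}
  2\,\mathrm{H_2O} &\;\rightarrow\; \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^-
      && \text{(inside the tube, at the cobalt oxide layer)}\\
  \mathrm{CO_2} + 2\,\mathrm{H^+} + 2\,e^- &\;\rightarrow\; \mathrm{CO} + \mathrm{H_2O}
      && \text{(outside the tube, at the catalyst on the titanium dioxide layer)}
\end{align*}
% The silica middle layer lets the protons pass outward while blocking O2, CO2
% and fuel molecules from crossing between the two zones.
```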

This design mimics actual living photosynthetic cells, which separate oxidation and reduction reactions with organic membrane compartments inside the chloroplast. Similarly in line with nature's original blueprint, the team's membrane tubes allow the photosynthetic reaction to occur over a very short distance, minimizing the energy loss that occurs as ions travel and preventing unintended chemical reactions that would also lower the system's efficiency.

"This work is part of Berkeley Lab's commitment to contribute solutions to the urgent energy challenges posed by climate change," said Frei. "The interdisciplinary nature of the task requires the breadth of expertise and major facilities unique to Berkeley Lab. In particular, the nanofabrication and imaging capabilities of the Molecular Foundry are essential for synthesizing and characterizing the ultrathin layers and making square-inch-sized arrays of hollow nanotubes."

Credit: 
DOE/Lawrence Berkeley National Laboratory

BU astrophysicist and collaborators reveal a new model of our heliosphere

image: Is this what the heliosphere looks like? New research suggests so. The size and shape of the magnetic "force field" that protects our solar system from deadly cosmic rays has long been debated by astrophysicists.

Image: 
Image courtesy of Opher et al.

The heliosphere is a vast region, extending more than twice as far as Pluto. It casts a magnetic “force field” around all the planets, deflecting charged particles that would otherwise muscle into the solar system and even tear through DNA. However, the heliosphere, despite its name, is not actually a sphere. Space physicists have long compared its shape to a comet, with a round “nose” on one side and a long tail extending in the opposite direction.

In 2015, using a new computer model and data from the Voyager 1 spacecraft, Merav Opher, professor of astronomy and researcher at Boston University's Center for Space Physics, and her coauthor James Drake of the University of Maryland came to a different conclusion: they proposed that the heliosphere is actually shaped like a crescent--not unlike a freshly baked croissant, in fact. In this "croissant" model, two jets extend downstream from the nose rather than a single fade-away tail. "That started the conversation about the global structure of the heliosphere," says Opher.

Then, two years after the "croissant" debate began, readings from the Cassini spacecraft, which orbited Saturn from 2004 until 2017, suggested yet another vision of the heliosphere. By timing particles echoing off the boundary of the heliosphere and correlating them with ions measured by the twin Voyager spacecraft, Cassini scientists concluded that the heliosphere is actually very nearly round and symmetrical: neither a comet nor a croissant, but more like a beach ball. Their result was just as controversial as the croissant. "You don't accept that kind of change easily," says Tom Krimigis, who led experiments on both Cassini and Voyager. "The whole scientific community that works in this area had assumed for over 55 years that the heliosphere had a comet tail."

Now, Opher, Drake, and colleagues Avi Loeb of Harvard University and Gabor Toth of the University of Michigan have devised a new three-dimensional model of the heliosphere that could reconcile the "croissant" with the beach ball. Their work was published in Nature Astronomy.

Unlike most previous models, which assumed that charged particles within the solar system all hover around the same average temperature, the new model breaks the particles down into two groups. First are charged particles coming directly from the solar wind. Second are what space physicists call "pickup" ions. These are particles that drifted into the solar system in an electrically neutral form--because they aren't deflected by magnetic fields, neutral particles can "just walk right in," says Opher--but then had their electrons knocked off.

The New Horizons spacecraft, which is now exploring space beyond Pluto, has revealed that these particles become hundreds or thousands of times hotter than ordinary solar wind ions as they are carried along by the solar wind and sped up by its electric field. But it was only by modeling the temperature, density and speed of the two groups of particles separately that the researchers discovered their outsized influence on the shape of the heliosphere.

That shape, according to the new model, actually splits the difference between a croissant and a sphere. While the new model looks very different from the classic comet model, the two may actually be more similar than they appear, says Opher, depending on exactly how you define the edge of the heliosphere. Think of transforming a grayscale photo to black and white: The final image depends a lot on exactly which shade of gray you pick as the dividing line between black and white.

So why worry about the shape of the heliosphere, anyway? Researchers studying exoplanets--planets around other stars--are keenly interested in comparing our heliosphere with those around other stars. Could the solar wind and the heliosphere be key ingredients in the recipe for life? "If we want to understand our environment we'd better understand all the way through this heliosphere," says Loeb, Opher's collaborator from Harvard.

And then there's the matter of those DNA-shredding interstellar particles. Researchers are still working on what, exactly, they mean for life on Earth and on other planets. Some think that they actually could have helped drive the genetic mutations that led to life like us, says Loeb. "At the right amount, they introduce changes, mutations that allow an organism to evolve and become more complex," he says. But the dose makes the poison, as the saying goes. "There is always a delicate balance when dealing with life as we know it. Too much of a good thing is a bad thing," says Loeb.

When it comes to data, though, there's rarely too much of a good thing. And while the models seem to be converging, they are still limited by a dearth of data from the solar system's outer reaches. That is why researchers like Opher are hoping to stir NASA to launch a next-generation interstellar probe that will cut a path through the heliosphere and directly detect pickup ions near the heliosphere's periphery. So far, only the Voyager 1 and Voyager 2 spacecraft have passed that boundary, and they launched more than 40 years ago, carrying instruments of an older era that were designed to do a different job. Mission advocates based at Johns Hopkins University Applied Physics Laboratory say that a new probe could launch some time in the 2030s and start exploring the edge of the heliosphere 10 or 15 years after that.

"With the Interstellar Probe we hope to solve at least some of the innumerous mysteries that Voyagers started uncovering," says Opher. And that, she thinks, is worth the wait.

Credit: 
Boston University

Increasingly mobile sea ice risks polluting Arctic neighbors

image: This image shows sediment-rich sea ice in the Transpolar Drift Stream. A crane lowers two researchers from the decks of the icebreaker RV Polarstern to the surface of the ice to collect samples.

Image: 
Photo R. Stein, Alfred Wegener Institute

The movement of sea ice between Arctic countries is expected to significantly increase this century, raising the risk of more widely transporting pollutants like microplastics and oil between neighbouring coastal states, according to new research from McGill University in collaboration with University of Colorado Boulder, Columbia University, and Arizona State University.

The study in the American Geophysical Union journal Earth's Future predicts that by mid-century, the average time it takes for sea ice to travel from one region to another will decrease by more than half, and the amount of sea ice exchanged between Arctic countries such as Russia, Norway, Canada, and the United States will more than triple.

Increased interest in off-shore Arctic development, as well as shipping through the Central Arctic Ocean, may increase the amount of pollutants present in Arctic waters. And contaminants in ice can travel much faster than those in open water moved by ocean currents.

"This means there is an increased potential for sea ice to quickly transport all kinds of materials with it, from algae to oil," says researcher Patricia DeRepentigny from University of Boulder Colorado. "That's important to consider when putting together international laws to regulate what happens in the Arctic."

Historically, floating masses of Arctic sea ice could survive for up to 10 years: building up layers, lasting through each summer and mostly melting locally with a small fraction being transported to other regions. As the climate warms, however, that pattern has been changing.

While overall, the sea ice cover is thinning - and melting entirely across vast regions in the summer - the area of new ice formed during winter is actually increasing, particularly along the Russian coastline and soon in the Central Arctic Ocean. This thinner ice can move faster in the increasingly open waters of the Arctic, delivering the particles and pollutants it carries to waters of neighbouring states.

"In a warmer climate, the faster moving ice can travel longer distances and melt in Arctic peripheral seas belonging to other neighbouring states or even survive the following summer and be carried in regions across the Arctic Ocean where it melts", says Bruno Tremblay, Associate Professor in the Department of Atmospheric and Oceanic Sciences at McGill University.

Different emissions scenarios

In a previous study, the authors examined the movement of Arctic sea ice over the observational record starting in 1979, when the first continuous satellite observations began. That study was the first to document an increase in the amount of sea ice being transported from one region to another over the last four decades. The work was led by Patricia DeRepentigny, a former graduate student of Tremblay and lead author of the current study.

"The trend was particularly clear looking at the pre-2000 and post-2000 observational records when a clear acceleration of the sea ice decline was apparent," said Tremblay. "The natural extension of this work, given that global climate models reproduce this observed trend, is to look at the future of ice exchange between neighbouring states of the Arctic." For instance, Svalbard (an archipelago part of the kingdom of Norway) will see an increasingly large amount of sea ice formed along the Russian coastline and melting in its coastal waters.

The researchers used a global climate model, together with the Sea Ice Tracking Utility (SITU), which was developed by the team, to track sea ice from where it forms to where it ultimately melts during the 21st century.
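As a rough illustration of what such tracking involves (a minimal sketch, not the team's Sea Ice Tracking Utility; the drift field and melt test below are stand-ins for real climate-model output), a parcel of ice can be followed from where it forms to where it melts by stepping it through modelled drift velocities:

```python
# Minimal sketch of Lagrangian sea-ice tracking -- NOT the team's Sea Ice
# Tracking Utility (SITU).  It only illustrates the idea of following ice from
# where it forms to where it melts; the drift field and melt criterion are
# stand-ins for real climate-model output.

def ice_drift(x, y, t):
    """Stand-in ice drift velocity in km/day; a real study reads model fields."""
    return 5.0 + 0.01 * y, -2.0 + 0.01 * x

def ice_survives(x, y, t):
    """Stand-in melt criterion; a real study checks modelled ice concentration."""
    return t < 200.0  # pretend the parcel melts out after 200 days

def track_parcel(x0, y0, dt=1.0, max_days=365.0):
    """Advance a parcel of ice until it melts; return final position and time."""
    x, y, t = x0, y0, 0.0
    while t < max_days and ice_survives(x, y, t):
        u, v = ice_drift(x, y, t)
        x, y, t = x + dt * u, y + dt * v, t + dt
    return x, y, t

# Example: a parcel formed at the origin drifts for 200 days before melting.
print(track_parcel(0.0, 0.0))
```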

The researchers considered two different emissions scenarios: the more extreme "business as usual" scenario, which predicts warming of 4 to 5 degrees Celsius by 2100, and a warming scenario limited to 2 degrees Celsius, inspired by the Paris Agreement. They then modeled how the sea ice will behave in both these scenarios at the middle and the end of the century.

In three of these four situations - including both mid-century predictions - the movement of sea ice between Arctic countries increased.

But in the high emissions scenario at the end of the century, they found countries could end up dealing more with their own ice and its contaminants than with ice from their neighbours. This is because with 4 degrees or more of warming by 2100, most sea ice that freezes during winter will melt each spring in the same region where it was formed.

Russia and the Central Arctic

Russia's exclusive economic zone and the Central Arctic Ocean are two places the researchers expect more ice to form, becoming major "exporters" of ice to other regions in the Arctic.

An exclusive economic zone (EEZ) is an area extending 200 nautical miles from the coastline, over which a state has special rights regarding fishing, shipping, and industrial activities like offshore oil drilling. Five countries have exclusive economic zones in the Arctic Ocean: Canada, the United States, Russia, Norway and Denmark (Greenland).

The researchers found that the amount of ice originating from Russia that then melts in another exclusive economic zone doubles by mid-century.

However, the Central Arctic, in the middle of the Arctic Ocean, is a place where no country has exclusive economic rights. As the Arctic Ocean becomes more ice-free in summer, this region will become an attractive shipping route - especially because ships don't need permission from another country to travel through it.

"The implications of this study are clear. Faster moving sea ice brings countries closer together, and local coastal pollution or pollution transported by rivers from far inland locations can have an impact on the coastal environment of even distant countries. The protection of one's environment in the far north therefore must rely on the protection of the environment of all Arctic states," says Tremblay.

Credit: 
McGill University

Maize, not metal, key to native settlements' history in NY

ITHACA, N.Y. - New Cornell University research is producing a more accurate historical timeline for the occupation of Native American sites in upstate New York, based on radiocarbon dating of organic materials and statistical modeling.

The results from the study of a dozen sites in the Mohawk Valley were recently published in the online journal PLoS ONE by Sturt Manning, professor of classical archaeology, and John Hart, curator in the research and collections division of the New York State Museum in Albany.

The findings, Manning said, are helping to refine our understanding of the social, political and economic history of the Mohawk Valley region at the time of early European intervention.

The work is part of the Dating Iroquoia Project, involving researchers from Cornell, the University of Georgia and the New York State Museum, and supported by the National Science Foundation.

The new paper continues and expands upon research on four Iroquoian (Wendat) sites in southern Ontario, published by the project team in 2018. Using similar radiocarbon dating and statistical analysis methods, the 2018 findings also impacted timelines of Iroquoian history and European contact.

"The Mohawk case was chosen because it is an iconic series of indigenous sites and was subject to one of the first big dating efforts in the 1990s," said Manning. "We have now examined a southern Iroquois (Haudenosaunee) case as well as a northern Iroquois (Wendat) case, and we again find that the previous dating scheme is flawed and needs revision."

The Mohawk and Hudson river valleys were key inland routes for Europeans entering the region from the coast in the 16th and early 17th centuries. Colonization of the New World enriched Europe - Manning has described this period as "the beginning of the globalized world" - but brought disease and genocide to indigenous peoples, and their history during this time is often viewed in terms of trade and migration.

The standard timeline created for historical narratives of indigenous settlement, Manning noted, has largely been based on the presence or absence of types of European trade goods - e.g., metal items or glass beads. Belying this Eurocentric colonial lens, trade practices differed from one native community to another, and not all of them accepted contact with, or goods from, European settlers.

To clarify the origins of metal goods found in the upstate New York settlements, the team used portable X-ray fluorescence (pXRF) analysis to determine whether copper artifacts were of native or European origin. They then also re-assessed the dates of the sites using radiocarbon dating coupled with Bayesian statistical analysis.

Bayesian analysis, Manning explained, is "a statistical method that integrates prior knowledge in order to better define the probability parameters around a question or unknown. In this case, archaeological and ethno-historic information was combined with data from a large set of radiocarbon dates in order to estimate occupation dates for a set of Mohawk villages across the 13th to early 17th centuries."
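In outline (a generic statement of the approach, not the project's specific model), Bayesian calibration combines each radiocarbon measurement with prior archaeological information:

```latex
% Generic form of the Bayesian date estimate (illustrative, not the study's model):
\[
  p(\theta \mid y) \;\propto\; p(y \mid \theta)\; p(\theta),
\]
% where \theta is the calendar date of a sample (e.g., a maize kernel),
% y its measured radiocarbon age, p(y | \theta) the likelihood defined by the
% radiocarbon calibration curve and measurement error, and p(\theta) a prior
% encoding archaeological constraints such as the relative order of village
% occupations.  Combining many dated samples per site narrows the posterior
% occupation-date ranges.
```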

The focus was on the period from the late 15th to the early 17th century, he said, or "the long 16th century of change in the northeast."

The results "add to a growing appreciation of the interregional variations in the circulation and adoption patterns of European goods in northeastern North America in the 16th to earlier 17th centuries," Manning said.

In previous indigenous site studies, where artifacts indicated trade interactions, researchers might assume "that trade goods were equally available, and wanted, all over the region," and that different indigenous groups shared common trade practices, he said.

Direct radiocarbon dating of organic matter, such as maize kernels, tests those assumptions and removes the colonial lens, allowing an independent timeframe for historical narratives, Manning said.

At several major Iroquois sites lacking close European connections, independent radiocarbon studies indicate substantially different date ranges from the previous estimates based on trade goods.

"The re-dating of a number of Iroquoian sites also raises questions about the social, political and economic history of indigenous communities from the 14th to the 17th centuries," Manning said. "For example ... a shift to larger and fortified communities, and evidence of increased conflict," was previously thought to have occurred around the mid-15th century.

But the radiocarbon findings from some larger sites in Ontario and their cultivated maize fields - 2,000 acres or more in some instances - date the sites from the mid-16th to the start of the 17th century, he said.

"However, as this New York state study shows, other areas had their own and differing trajectories. Thus with direct dating we start to see real, lived, histories of communities, and not some imposed generic assessment," Manning said. "The emerging new and independent timeframe for northeast North America will now form the basis of a wider indigenous history," Manning said, "free from a Eurocentric bias, with several past assumptions open for an overdue rethink."

Credit: 
Cornell University

We're getting better at wildlife conservation, AI study of scientific abstracts suggests

image: This image shows a sea otter raft in Elkhorn Slough.

Image: 
© Monterey Bay Aquarium, photo by Jessica Fujii

Researchers are using a kind of machine learning known as sentiment analysis to assess the successes and failures of wildlife conservation over time. In their study, appearing March 19 in Patterns--a new open access data science journal from Cell Press--the researchers assessed the abstracts of more than 4,000 studies of species reintroduction across four decades and found that, generally speaking, we're getting better and better at reintroducing species to the wild. They say that machine learning could be used in this field and others to identify the best techniques and solutions from among the ever-growing volume of scientific research.

"We wanted to learn some lessons from the vast body of conservation biology literature on reintroduction programs that we could use here in California as we try to put sea otters back into places they haven't roamed for decades," says senior author Kyle Van Houtan (@kylevanhoutan), chief scientist at Monterey Bay Aquarium. "But what sat in front of us was millions of words and thousands of manuscripts. We wondered how we could extract data from them that we could actually analyze, and so we turned to natural language processing."

Natural language processing is a kind of machine learning that analyzes strings of human language to extract useable information, essentially allowing a computer to read documents like a human. Sentiment analysis, which the researchers used in this paper, looks more specifically at a trained set of words that have been assigned a positive or negative emotional value in order to assess the positivity or negativity of the text overall.

The researchers used the database Web of Science to identify 4,313 species reintroduction studies published from 1987 to 2016 with searchable abstracts. Then they used several "off-the-shelf" sentiment analysis lexicons--meaning that the words in them had already been assigned a sentiment score based on things like movie and restaurant reviews--to build a model that could give each abstract an overall score. "We didn't have to train the models, so after running them for a few hours we all of a sudden had all these results at our disposal," says Van Houtan. "The scores gave us a trend over time, and we could query the results to see what the sentiment was associated with studies on pandas or on California condors or coral reefs."
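As a rough illustration of the lexicon-based scoring described above (a minimal sketch with a made-up mini-lexicon; the study used established off-the-shelf lexicons and thousands of abstracts), each abstract can be scored by averaging the sentiment values of the words it contains:

```python
# Minimal sketch of lexicon-based sentiment scoring for study abstracts.
# The tiny lexicon below is illustrative only; published lexicons assign
# scores to thousands of words.
import re

LEXICON = {
    "success": 2, "protect": 2, "growth": 1, "support": 1, "help": 1, "benefit": 1,
    "threaten": -2, "threat": -2, "loss": -1, "risk": -1, "problem": -1, "kill": -3,
}

def sentiment_score(text):
    """Average sentiment of the lexicon words appearing in `text` (0 if none)."""
    words = re.findall(r"[a-z]+", text.lower())
    hits = [LEXICON[w] for w in words if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

# Hypothetical abstracts keyed by publication year; scoring them year by year
# yields the kind of time trend the authors examined across 4,313 real abstracts.
abstracts = {
    1990: "Reintroduction faced a major problem: high risk of loss and threat of predation.",
    2015: "The program was a success; population growth will help protect the species.",
}
for year in sorted(abstracts):
    print(year, round(sentiment_score(abstracts[year]), 2))
```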

The trends they saw suggested greater conservation success. "Over time, there's a lot less uncertainty in the assessment of sentiment in the studies, and we see reintroduction projects become more successful--and that's a big takeaway," he says. "Looking at thousands of studies, it seems like we're getting better at it, and that's encouraging."

"If we are going to maximize our conservation dollars, then we need to be able to quickly assess what works and what doesn't," says study co-author Lucas Joppa, Chief Environmental Officer at Microsoft. "Machine learning, and natural language processing in particular, has the ability to sift through results and shine a light on success stories that others can learn from."

To ensure their results were accurate, the researchers looked at the most common indicators of positive sentiment (and therefore conservation success) in their results and found words like "success," "protect," "growth," "support," "help," and "benefit"; words that indicated negative sentiment were ones like "threaten," "loss," "risk," "threat," "problem," and "kill." These words aligned with what they, as long-time conservation biologists, would typically use to indicate success and failure in their own studies. They also found that trends described by the sentiment analysis for specific reintroduction programs known to be successes or failures (like the reintroduction of the California condor) matched the known outcomes.

The researchers say that off-the-shelf sentiment analysis worked surprisingly well for them, likely because many words used in conservation biology are part of our everyday lexicons and were therefore accurately coded with the appropriate sentiment. In other fields, they think more work would need to be done to develop and train a model that could accurately code the sentiment of more technical, field-specific language and syntax. Another constraint, they say, is that only a limited number of the papers they sought to analyze were open access, which meant they had to assess abstracts rather than full papers. "We're really just scratching the surface here, but this is definitely a step in the right direction," says Van Houtan.

Still, they do think this is a technique that can and should be applied more broadly in both conservation biology and other fields to make sense of the vast amount of research now being conducted and published. "So much local conservation work goes unnoticed by the global conservation community, and this paper shows how machine learning can help close that information gap," says Joppa.

"Many of these techniques have been in use for over a decade in commercial settings, but we're hoping to translate them into settings like ours to combat climate change or plastic pollution or to promote endangered species conservation," Van Houtan says. "There's a plethora of data that's right at our fingertips, but it's this sleeping giant because it isn't properly curated or organized, which makes it challenging to analyze. We want to connect people with ideas, capacity, and technical solutions they might not otherwise encounter so we can bring some progress to these seemingly intractable problems."

Credit: 
Cell Press

Seductive details inhibit learning

PULLMAN, Wash. - When teachers use a funny joke, a cat video or even background music in their lessons, it can keep students from understanding the main content.

These so-called "seductive details," information that is interesting but irrelevant, can be detrimental to learning, according to a meta-analysis by Washington State University researchers recently published in the journal Educational Psychology Review. The analysis of 58 studies involving more than 7,500 students found that those who learned with seductive details performed worse on learning outcome measures than those who learned without the extraneous information.

"If you have an irrelevant piece of information, and it is something that is interesting, students tend to perform worse," said Kripa Sundar, the lead author on the paper that is based on her dissertation from WSU's College of Education. "There are multiple hypotheses on why that happens, but the simplest is that students' attention is now diverted toward that irrelevant information, and they're spending too much time trying to understand what that seductive detail is instead of the content matter."

Sundar and her co-author Olusola Adesope, WSU professor of educational psychology, found that the effect was worse when the seductive detail was placed next to informative and relevant diagrams, or when it was constant, such as a static joke or image on a screen. Including seductive details was also more detrimental on paper than in digital formats, and more prominent in certain subjects, such as social studies and natural sciences.

The analysis supports the coherence principle in multimedia learning, which recommends that all relevant information be presented together and that unnecessary information be excluded.

Good details that help engage students are still important, Sundar said. It's just important that those details are pertinent to the topic.

"This does not mean that learning shouldn't be fun," she said. "We just might need to exert a little more effort into thinking how we can make the learning activity itself a lot more engaging and interesting in a way that contributes toward the educational objective."

Humans tend to connect details to big concepts, so well-chosen details can help students recall a particular idea. But a detail that is not useful yet very alluring can trigger a different line of thought entirely.

For example, if, during a science lesson about how lightning forms, the teacher talks about how a freak lightning strike killed 16 people at a church in Rwanda in 2018, the students can easily be derailed by that very specific, dramatic story.

The study also calls for further research into this phenomenon. Even though the analysis was broad, Sundar noted that most studies used short learning sessions of only six to 12 minutes when a typical class is 55 minutes long.

She pointed to two other aspects for further investigation: the role of prior knowledge of a subject, which may allow a learner to better distinguish relevant from irrelevant information, and the potential positive effect seductive details may have on students' emotions. For instance, something distracting like a joke or music might lessen the anxiety many people feel about learning math.

"There may be some trade-offs between the potential emotional benefit and the detrimental effects of seductive details that we're seeing on learning," said Sundar. "Understanding that would enable us to make strong recommendations for practice because teachers are teaching children, and they're human."

Adesope pointed out that the findings only reflect cognitive outcomes and further research should look into other aspects that may balance these effects.

"We are currently investigating the degree to which the hypothesized emotional benefits may compensate for the cognitive disadvantages of learning with seductive details," said Adesope.

Credit: 
Washington State University

The power of light for internet of underwater things

image: This 1.5-meter-long experimental setup was used to test charging and transmitting instructions to a solar panel on a submerged temperature sensor.

Image: 
© 2020 Filho et al.

A system that can simultaneously transmit energy and information to underwater devices using light is under development at KAUST. Self-powered internet of underwater things (IoUT) devices that harvest energy and decode information transferred by light beams can enhance sensing and communication in the seas and oceans. KAUST researchers are now solving some of the many challenges to this technology being employed in such harsh and dynamic environments.

"Underwater acoustic and radio wave communications are already in use, but both have huge drawbacks. Acoustic communication can be used over large distances but lacks stealth (making it detectable by a third party) and can only access a small bandwidth," explains master's student Jose Filho. "Furthermore, radio waves lose their energy in seawater, which limits their use in shallow depths. They also require bulky equipment and lots of energy to run," he explains.

"Underwater optical communication provides an enormous bandwidth and is useful for reliably transmitting information over several meters," says co-first author Abderrahmen Trichili. "KAUST has conducted some of the first tests of high-bit-rate underwater communication, setting records2 on the distance and capacity of underwater transmission in 2015."

Led by Khaled Salama, Filho, Trichili and team are investigating the use of simultaneous lightwave information and power transfer (SLIPT) configurations for transmitting energy and data to underwater electronic devices.

"SLIPT can help charge devices in inaccessible locations where continuous powering is costly or not possible," explains Filho.

In one experiment, the KAUST team was able to charge and transmit instructions across a 1.5-meter-long water tank to a solar panel on a submerged temperature sensor. The sensor recorded temperature data and saved it on a memory card, later transmitting it to a receiver when information in the light beam instructed it to do so.

In another experiment, the battery of a camera submerged at the bottom of a tank supplied with Red Sea water was charged via its solar panel within an hour and a half by a partially submerged, externally powered laser source. The fully charged camera was able to stream one-minute-long videos back to the laser transmitter.

"These demonstrations were the first stand-alone devices to harvest energy, decode information and perform a particular function--in this case temperature sensing and video streaming," says Salama.

The KAUST team is now working on the deployment of underwater SLIPT configurations. They are finding ways to overcome the effects of turbulence on underwater reception and looking into the use of ultraviolet light for transmissions that face underwater obstructions. They are also developing smart underwater optical positioning algorithms that could help locate relay devices set up to extend the communication ranges of IoUT devices.

Their and others' research in the field could ultimately lead to the deployment of self-powered underwater sensors for tracking climate change effects on coral reefs, detecting seismic activity and monitoring oil pipelines. It could also lead to the development of small autonomous robots for more accurate and extensive underwater search and rescue operations.

Credit: 
King Abdullah University of Science & Technology (KAUST)

Bone analyses tell of kitchen utensils in the Middle Ages

Clay pots? Wooden spoons? Copper pots? Silver forks? What materials has man used for making kitchen utensils throughout history? A new study now sheds light on the use of kitchen utensils made of copper.

At first thought, you would not expect centuries-old bones from a medieval cemetery to be able to tell you very much - let alone anything about what kinds of kitchen utensils were used to prepare food.

But when you put such a bone in the hands of Professor Kaare Lund Rasmussen, University of Southern Denmark, the bone begins to talk about the past.

A warehouse full of bones

"For the first time, we have succeeded in tracing the use of copper cookware in bones. Not in isolated cases, but in many bones over many years, and thus we can identify trends in the historical use of copper in the household," he explains.

The research team has analyzed bones from 553 skeletons that are between 1200 and 200 years old. They all come from nine now-abandoned cemeteries in Jutland, Denmark, and northern Germany. The skeletons are today kept at Schloss Gottorf in Schleswig, Germany, and at the University of Southern Denmark.

Some of the bones examined are from Danish cities such as Ribe and Haderslev, while others are from small rural communities, such as Tirup and Nybøl.

Your body needs copper

The element copper can be traced in bones if ingested. Copper is needed for the body to function; it is, among other things, involved in a number of metabolic processes, such as the function of the immune system - so without copper, the individual would not be able to live.

The need for copper is usually met through the food we eat and most of us probably never think about this.

It is different with the high concentrations of copper now revealed to have been ingested by our predecessors in the Viking Age and the Middle Ages. Much of this copper must have come from the kitchen utensils with which the daily meals were prepared, the researchers believe.

How did the copper get into the body?

One possibility is that the copper pots were scraped by metal knives, releasing copper particles, and that these particles were ingested with the food.

Or maybe copper was dissolved and mixed with food, if the pot was used for storing or cooking acidic foods.

"The bones show us that people consumed tiny amounts of copper every day throughout their lives. We can also see that entire cities did this for hundreds of years. In Ribe, the inhabitants did so for 1000 years," says Kaare Lund Rasmussen.

Who ate the copper?

Apparently, the copper intake was at no time so great that it became toxic. But the researchers can't say for sure.

However, they can say with certainty that some people never ingested enough copper for it to be traceable in their bones. Instead, they ate food prepared in pots made of other materials.

These people lived in the countryside. The bones reveal that inhabitants in the small villages of Tirup and Nybøl did not prepare their food in copper pots.

Rely less on written sources

But how do these findings square with historical accounts and pictures of copper cookware used in country kitchens?

"A copper pot in a country kitchen may have been so unusual that the owner would tell everybody about it and maybe even write it down. However, such an account should not lead to the conclusion that copper cookware was commonly used in the countryside. Our analyses show the opposite," says Kaare Lund Rasmussen.

In contrast, the use of copper pots was evident in the towns of Ribe, Horsens, Haderslev and Schleswig.

1000 years of constant copper ingestion

"The cities were dynamic communities and home to rich people who could acquire copper items. Wealthy people probably also lived in the countryside, but they did not spend their money on copperware," concludes Kaare Lund Rasmussen.

208 of the skeletons originate from a cemetery in Ribe, covering a period of 1000 years from AD 800 to AD 1800, spanning from the Viking Age over the Middle Ages to recent times.

"These skeletons show us that there was continuous exposure to copper throughout the period. Thus, for 1000 years, the inhabitants consumed copper via their daily diet," says Kaare Lund Rasmussen.

Mercury in Tycho Brahe's beard

Professor Kaare Lund Rasmussen has performed several chemical analyses of historical and archaeological artifacts.

Among other things, he has analyzed a hair from the Danish Renaissance astronomer Tycho Brahe's beard and found that he did not die from mercury poisoning, as persistent rumors would have it.

The analyses did show, however, that Tycho Brahe was exposed to large amounts of gold until two months before his death - perhaps as a result of his work as an alchemist, perhaps because he ate and drank from gold-plated tableware.

Credit: 
University of Southern Denmark

Pembrolizumab shows promise for some advanced, hard-to-treat rare cancers

image: This is Aung Naing, M.D.

Image: 
The University of Texas MD Anderson Cancer Center

HOUSTON -- A study conducted by researchers at The University of Texas MD Anderson Cancer Center demonstrated that the immunotherapy pembrolizumab had acceptable toxicity and anti-tumor activity in patients with four types of advanced, hard-to-treat rare cancers. Study findings were published in the March 17 online issue of the Journal for ImmunoTherapy of Cancer.

The open-label, Phase II study followed 127 patients who had advanced rare cancers: squamous cell carcinoma of the skin (cSCC), carcinoma of unknown primary (CUP), adrenocortical carcinoma (ACC), and paraganglioma-pheochromocytoma. Patients received 200 milligrams of the immunotherapy treatment pembrolizumab administered every three weeks between August 2016 and July 2018. All patients had tumors that had progressed on standard therapies.

"Our findings that pembrolizumab has a favorable toxicity profile and anti-tumor activity in patients with these rare cancers supports further evaluation in these populations," said Aung Naing, M.D., associate professor of Investigational Cancer Therapeutics. "Finding solutions for treatment is vital given that patients with advanced rare cancers have poor prognosis and few treatment options."

Rare cancers are defined by the American Cancer Society as those with an incidence of fewer than six cases per 100,000 people per year. CUP is a type of cancer that has spread within the body but whose primary site cannot be identified, while ACC occurs when malignant cells form in the outer layer of the adrenal glands. Paraganglioma-pheochromocytoma are tumors formed in nerve-like cells near the adrenal glands (pheochromocytomas) and near blood vessels or nerves in the head, neck, chest, abdomen, and pelvis (paragangliomas). cSCC is the second most common type of skin cancer and is treatable in early stages, but harder to treat in advanced stages.

The primary objective of the study was to determine the proportion of patients who were alive and progression-free (the non-progression rate) at 27 weeks of treatment with pembrolizumab. Across the 127 patients with advanced rare cancers, the non-progression rate at that point was 28%. Complete response, partial response or stable disease after four months was observed in 38% of the patients. Non-progression rates for each cancer group were: 36% for cSCC, 33% for CUP, 31% for ACC, and 43% for paraganglioma-pheochromocytoma. Treatment-related adverse events occurred in 52% of patients, with the most common side effects being fatigue and rash; six deaths were reported, all unrelated to treatment.

"Studies such as this one are key since rare cancers collectively accounted for 13% of all new cancer diagnoses and 25% of all cancer-related deaths in adults in 2017," said Naing. "The five-year survival rate is 15% to 20% lower than for more common cancers. The poor outcomes associated with rare cancers have been attributed to difficulty or delay in diagnosis, limited access to centers with expertise such as MD Anderson, and limited therapeutic options."

Naing added that, despite the significant burden and aggressive nature of these diseases, research efforts that could lead to the development and approval of new therapies are few. However, MD Anderson has the patient volume and research resources that uniquely position its researchers to conduct this work.

"Findings from our study support further investigation to confirm the clinical activity of pembrolizumab in advanced rare cancers, and to identify immune signatures predictive of response to treatment,"said Naing.

Credit: 
University of Texas M. D. Anderson Cancer Center

NASA finds little strength left in Tropical Cyclone Herold

image: On March 19, 2020, the MODIS instrument that flies aboard NASA's Aqua satellite gathered infrared data on Herold. MODIS showed minimal associated strong convection (rising air that forms the thunderstorms that make up tropical cyclones). Strongest thunderstorms had cloud top temperatures (blue) as cold as minus 50 degrees Fahrenheit (minus 45.5 Celsius).

Image: 
NASA/NRL

Wind shear pushed former Tropical Cyclone Herold apart and infrared imagery from NASA's Aqua satellite showed the system with very little strength remaining.

NASA's Aqua satellite uses infrared light to analyze the strength of storms by providing temperature information about the system's clouds. The strongest thunderstorms that reach high into the atmosphere have the coldest cloud top temperatures.

On March 19, 2020, the Moderate Resolution Imaging Spectroradiometer or MODIS instrument that flies aboard NASA's Aqua satellite gathered infrared data on Herold. Animated multispectral satellite imagery showed that Herold's low-level circulation center was partially exposed and that there was minimal associated strong convection (rising air that forms the thunderstorms that make up tropical cyclones). The strongest thunderstorms had cloud top temperatures as cold as minus 50 degrees Fahrenheit (minus 45.5 Celsius).

At 5 a.m. EDT (0900 UTC), the center of Tropical Storm Herold was located near latitude 25.4 degrees south and longitude 71.1 degrees east, about 827 nautical miles east-southeast of Port Louis, Mauritius. Maximum sustained winds were near 46 mph (40 knots/74 kph).

Vertical wind shear - winds outside of a tropical cyclone blowing at different heights in the atmosphere (the troposphere) - pushes against a tropical cyclone and tears it apart. Herold is in an area of high wind shear blowing between 25 and 30 knots (29 to 35 mph/46 to 56 kph).

The Joint Typhoon Warning Center (JTWC) forecast notes that Herold will continue moving southeast and weaken rapidly because it is moving into an area of increased vertical wind shear speeds and cooler sea surface temperatures. The storm will dissipate within 24 hours.

Tropical cyclones/hurricanes are the most powerful weather events on Earth. NASA's expertise in space and scientific exploration contributes to essential services provided to the American people by other federal agencies, such as hurricane weather forecasting.

Credit: 
NASA/Goddard Space Flight Center

Advances in genetic, geospatial techniques aid efforts to fend off invasive insects

image: The Mediterranean fruit fly (Ceratitis capitata, adult shown at left) and gypsy moth (Lymantria dispar, larva shown at right) are two invasive species among many for which new advances in genetic techniques and geospatial analysis are driving invasive-species management. Two new special collections in the Annals of the Entomological Society of America, published in a partnership between ESA and the National Invasive Species Council, feature more than a dozen research articles showcasing both cutting-edge technologies and practical implications for management of invasive insects and arthropods.

Image: 
Fruit fly: USDA ARS Photo Unit, USDA Agricultural Research Service, Bugwood.org. Gypsy moth: Karla Salp, Washington State Department of Agriculture, Bugwood.org.

Annapolis, MD; March 19, 2020--In the fight to protect native ecosystems from invasive insects and related arthropod species, promising new tools are arising from rapid advances on a pair of research fronts: genetic analysis and geospatial technology.

At ports of entry in the United States, for instance, fruit flies hitchhiking in cargo can now be identified to species with DNA barcoding. And fine-scale environmental data combined with knowledge of insect lifecycles is putting simple maps that forecast pest emergence in the hands of forest managers across the country. These examples and more are showcased in a pair of new special collections in the Annals of the Entomological Society of America, published in a partnership between ESA and the National Invasive Species Council (NISC).

"We sought papers that exemplified both cutting-edge technologies and practical implications for improved management," says Stanley Burgiel, Ph.D., NISC executive director. "New and innovative solutions are needed to address the continued and growing challenges posed by invasive species."

The first of the two collections, "Geospatial Analysis of Invasive Insects," was published February 11, with seven articles showcasing applications of geographic tools for modeling movements and potential ranges of invasive insects such as wood-boring beetles, aphids, gypsy moth (Lymantria dispar), and the Mediterranean fruit fly (Ceratitis capitata). Included is a profile of the success of the USA National Phenology Network's "Pheno Forecasts" mapping tool, the result of "advanced work being done in a collaborative fashion across multiple federal agencies and academia," says Jeffrey Morisette, Ph.D., chief scientist at NISC and co-editor of the geospatial analysis collection with Kevin Macaluso, Ph.D., chair of the Department of Microbiology & Immunology at the University of South Alabama.

"The paper also describes a consultative mode of engagement as a way to continuously improve these products so that they are more useful for the forest pest management community," says Morisette. "Both the incorporation of insect phenology for predictions and the consultative engagement provide insight on forest pest management strategies." (Also see "USA National Phenology Network Aids Management of Pest Insects With Life-Stage Forecast Maps" on ESA's Entomology Today blog.)

The second collection, "Advanced Genetic Analysis of Invasive Arthropods," is published today, with nine articles detailing new genetic techniques for detecting and tracking invasive species such as apple maggot fly (Rhagoletis pomonella), cattle fever tick (Rhipicephalus microplus), coconut rhinoceros beetle (Oryctes rhinoceros), and crazy ants (Nylanderia spp.). One article reports on research showing the potential for DNA testing using high-throughput sequencing to allow faster and more cost-effective species identification of fruit fly samples intercepted during baggage inspections at various airports and border crossings.

"Because fruit flies are often detected at an immature larval stage, identification can be extremely challenging," Burgiel says. "This work suggests that these technologies could be an important tool to support species-level identification from high-volume occurrences of visually ambiguous collections at points of entry."

Burgiel served as co-editor for the genetic analysis collection with Keith Gaddis, Ph.D., deputy program scientist for the Biological Diversity and Ecological Forecasting programs at NASA. "Genetic tools have streamlined the ability to detect invasive species presence across multiple possible sources, where we previously required months of investigation to detect presence in a single sample," Gaddis says. "Using genetic tools, even a microscopic fragment of tissue from an invasive species can be used to identify the presence of that species."

Gaddis notes that scientists are only scratching the surface of what genetic and geospatial techniques may allow in the effort to fend off invasive arthropods. "This is a unique time where genetic and Earth observation data are advancing at an extremely rapid pace," he says. "These advances have made analyses that were technically or financially impossible a decade ago accessible to an increasingly broad scientific and societal audience."

Calling attention to these advances is exactly what brought NISC and ESA together to publish these research collections. "We hope that the collections enable greater adoption of these tools for better detection and more informed response actions by those working to prevent, eradicate, and control invasive species," Burgiel says. "At the same time, we hope the research community continues to build on these methodologies for more sophisticated, accurate, and easier methods."

Credit: 
Entomological Society of America