Tech

Single atom-thin platinum makes a great chemical sensor

image: "Atomically thin platinum could be useful for ultra-sensitive and fast electrical detection of chemicals. We have studied the case of platinum in great detail, but other metals like palladium produce similar results", says Samuel Lara Avila, Associate Professor at the Quantum Device Physics Laboratory and one of the authors of the article.

Image: 
JO Yxell/Chalmers University of Technology

Researchers at Chalmers University of Technology, Sweden, together with colleagues from other universities, have discovered a way to prepare one-atom-thick platinum for use as a chemical sensor. The results were recently published in the scientific journal Advanced Materials Interfaces.

"In a nutshell, we managed to make a metal layer just one-atom thick - sort of a new material. We found that this atomically-thin metal is super sensitive to its chemical environment. Its electrical resistance changes significantly when it interacts with gases,", explains Kyung Ho Kim, postdoc at the Quantum Device Physics Laboratory at the Department of Microtechnology and Nanoscience at Chalmers, and lead author of the article.

The essence of the research is the development of 2D materials beyond graphene.

"Atomically thin platinum could be useful for ultra-sensitive and fast electrical detection of chemicals. We have studied the case of platinum in great detail, but other metals like palladium produce similar results", says Samuel Lara Avila, Associate Professor at the Quantum Device Physics Laboratory and one of the authors of the article.

The researchers used the sensitive chemical-to-electrical transduction capability of atomically thin platinum to detect toxic gases at the parts-per-billion level. They demonstrated this with detection of benzene, a compound that is carcinogenic even at very small concentrations, and for which no low-cost detection apparatus exists.
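The detection principle described above, a gas-induced change in electrical resistance, can be sketched as a simple threshold test. This is an illustrative sketch only; the baseline, readings, and 1% threshold below are hypothetical values, not calibration data from the study.

```python
# Illustrative sketch of resistance-based gas detection: flag a gas
# when the relative resistance change |dR/R| exceeds a noise threshold.
# All numbers here are hypothetical, not values from the study.
def gas_detected(r_baseline, r_measured, threshold=0.01):
    """Return True if the relative resistance change exceeds the threshold."""
    return abs(r_measured - r_baseline) / r_baseline > threshold

print(gas_detected(100.0, 100.5))  # False: change is within noise
print(gas_detected(100.0, 102.0))  # True: 2% change exceeds the 1% threshold
```

In practice, the threshold would be set by the sensor's noise floor, which for atomically thin platinum is low enough to resolve parts-per-billion concentrations.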

"This new approach, using atomically thin metals, is very promising for future air-quality monitoring applications", says Jens Eriksson, Head of the Applied sensor science unit at Linköping University and a co-author of the paper.

Credit: 
Chalmers University of Technology

New treatments for deadly lung disease could be revealed by 3D modeling

A 3D bioengineered model of lung tissue built by University of Michigan researchers is poking holes in decades' worth of flat, Petri-dish observations of how the deadly disease pulmonary fibrosis progresses.

The causes of pulmonary fibrosis are not fully understood, but the condition is marked by scar tissue that forms inside the lungs. That scar tissue stiffens the walls of the lungs' air sacs, called alveoli, or, at advanced stages, can completely fill the alveolar spaces. Both scenarios make breathing difficult and decrease the amount of oxygen entering the bloodstream. Often the condition is irreversible, eventually causing lung failure and death.

Some clinicians are concerned that critically ill COVID-19 patients may develop a form of pulmonary fibrosis after a long stay in the ICU.

Researchers are searching for better treatments. While they've managed to find some drugs that relieve symptoms or slow the progression in practice, they haven't been able to reliably replicate those results in today's 2D lab models. So they don't understand how or why those drugs are working, and they can't always predict which compounds will make a difference. The new research from U-M takes a step in that direction, and it starkly demonstrates how prior approaches have been ineffective.

The team showed that in some 2D models, drugs that are already known to be effective in treatment do not produce test results that show efficacy. Their 3D tissue engineered model of fibrotic lung tissue, however, shows that those drugs work.

Before their tests on drugs, they first performed studies to understand how tissue stiffness drives the appearance of myofibroblasts--cells that correlate with the progression of scarring.

"Even in cells from the same patient, we saw different outcomes," said Daniel Matera, a doctoral candidate and research team member. "When we introduced stiffness into the 2D testing environment, it activated myofibroblasts, essentially creating scar tissue. When we introduced that same kind of stiffness into our 3D testing environment, it prevented or slowed the activation of myofibroblasts, stopping or slowing the creation of scar tissue."

With the majority of pulmonary fibrosis research relying on 2D testing, he said, many have believed the high lung stiffness in patients is what should be targeted by treatments. U-M's research indicates that targeting stiffness alone may not hinder disease progression in patients, even if it works in a Petri dish.

To find effective treatments, researchers first screen libraries of pharmaceutical compounds. Today, they typically do that on cells cultured on flat plastic or hydrogel surfaces, but these settings often do a poor job of recreating what happens in the human body.

Brendon Baker, assistant professor in the U-M Department of Biomedical Engineering, and his team took a tissue engineering approach. They reconstructed 3D lung interstitium, or connective tissue, the home of fibroblasts and location where fibrosis begins. Their goal was to understand how mechanical cues from lung tissue affect fibroblast behavior and disease progression.

"Recreating the 3D fibrous structure of the lung interstitium allowed us to confirm effective drugs that wouldn't be identified as hits in traditional screening settings," Baker said.

At the center of the pulmonary fibrosis mystery is the fibroblast, a cell found in the lung interstitium that is crucial to healing but, paradoxically, can also drive disease progression. When activated after an injury or in the presence of disease, fibroblasts become myofibroblasts. Regulated properly, they play an important role in wound healing, but when misregulated, they can drive chronic disease. In the case of pulmonary fibrosis, they cause the stiffening of lung tissue that hampers breathing.

"Our lung tissue model looks and behaves similarly to what we have observed when imaging real lung tissue," Baker said. "Patient cells within our model can actively stiffen, degrade or remodel their own environment just like they do in disease."

Credit: 
University of Michigan

NASA-NOAA satellite helps confirm Teddy now a record-setting tropical storm

image: This nighttime image from NASA-NOAA's Suomi NPP satellite revealed a more organized Tropical Depression 20 helping confirm it had become Tropical Storm Teddy in the Central Atlantic Ocean around midnight on Sept. 14.

Image: 
NASA Worldview, Earth Observing System Data and Information System (EOSDIS)

NASA-NOAA's Suomi NPP satellite provided an infrared image of Tropical Depression 20 that helped confirm the system had organized and strengthened into Tropical Storm Teddy. Teddy, which has broken a hurricane season record, is expected to become a major hurricane later in the week, according to the National Hurricane Center (NHC).

Tropical Depression 20 formed late on Saturday, Sept. 12 in the Central North Atlantic Ocean, about 2,030 miles (3,265 km) east of the Northern Leeward Islands. It maintained tropical depression status until this morning, Sept. 14, when infrared satellite data helped confirm it had strengthened and organized. NHC reported this makes Tropical Storm Teddy the earliest 19th named storm on record, besting the previous mark set by an unnamed tropical storm on Oct. 4, 2005.

NASA's Night-Time View of Teddy's Intensification

The Visible Infrared Imaging Radiometer Suite (VIIRS) instrument aboard Suomi NPP provided a nighttime image of Tropical Depression 20. The nighttime image, taken around midnight on Sept. 14, revealed that Tropical Depression 20 had become more organized, helping confirm that it had become Tropical Storm Teddy in the Central Atlantic Ocean. The image was created using the NASA Worldview application at NASA's Goddard Space Flight Center in Greenbelt, Md.

NHC Senior Hurricane Forecaster Stacy Stewart noted, "Earlier ASCAT [scatterometer that measures wind speed] data indicated peak winds of 33 knots in the northwestern quadrant of the depression. Since then, convection has increased and so have the various satellite intensity estimates. The initial intensity is increased to 35 knots [40 mph] based on the ASCAT data, and satellite estimates of 35 knots from TAFB [NOAA's Tropical Analysis and Forecast Branch] and 38 knots from University of Wisconsin-Madison-CIMSS SATCON." The CIMSS Satellite Consensus (SATCON) product blends tropical cyclone intensity estimates derived from multiple objective algorithms to produce an ensemble estimate of intensity for current tropical cyclones worldwide.
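NHC intensity estimates are reported in knots, while the public advisories quote mph and kph. The conversion can be checked with a short sketch using the standard factors 1 knot = 1.15078 mph = 1.852 km/h:

```python
# Convert NHC wind-speed estimates from knots to mph and km/h.
# Standard conversion factors: 1 knot = 1.15078 mph = 1.852 km/h.
def knots_to_mph(kt):
    return kt * 1.15078

def knots_to_kph(kt):
    return kt * 1.852

# The 35-knot initial intensity in the advisory is about 40 mph (65 kph):
print(round(knots_to_mph(35)))  # 40
print(round(knots_to_kph(35)))  # 65
```

This matches the "40 mph (65 kph)" figure given in the status update below.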

Teddy's Status on Sept. 14

At 5 a.m. EDT (0900 UTC) on Sept. 14, the center of Tropical Storm Teddy was located near latitude 13.4 degrees north and longitude 40.4 degrees west, about 1,405 miles (2,260 km) east of the Lesser Antilles. Teddy was moving toward the west-northwest near 14 mph (22 kph). Maximum sustained winds had increased to near 40 mph (65 kph) with higher gusts. The estimated minimum central pressure was 1004 millibars.

Teddy's Forecast

A continued west-northwestward motion is expected for the next day or two followed by a turn toward the northwest by mid-week. Additional strengthening is anticipated, and Teddy is forecast to become a hurricane in a couple of days.

Large swells generated by Tropical Storm Teddy are expected to reach the Lesser Antilles and the northeastern coast of South America on Wednesday. These swells are likely to cause life-threatening surf and rip current conditions.

Credit: 
NASA/Goddard Space Flight Center

New method to design diamond lattices and other crystals from microscopic building blocks

image: Petr Sulc is a researcher at the Biodesign Center for Molecular Design and Biomimetics and ASU's School of Molecular Sciences (SMS).

Image: 
The Biodesign Institute at Arizona State University

An impressive array of architectural forms can be produced from the popular interlocking building blocks known as LEGO® bricks. All that is needed is a child's imagination to construct a virtually infinite variety of complex shapes.

In a new study appearing in the journal Physical Review Letters, researchers describe a technique for using LEGO®-like elements at the scale of a few billionths of a meter. Further, they are able to cajole these design elements to self-assemble, with each LEGO® piece identifying its proper mate and linking up in a precise sequence to complete the desired nanostructure.

While the technique described in the new study is simulated on computer, the strategy is applicable to self-assembly methods common to the field of DNA nanotechnology. Here, the equivalent of each LEGO® piece consists of a nanostructure made of DNA, the famous molecular repository of our genetic code. The four nucleotides making up DNA--commonly labelled A, C, T & G--stick to one another according to a reliable rule: A nucleotides always pair with Ts and C nucleotides with Gs.
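The pairing rule above is simple enough to state in a few lines of code. The sketch below only illustrates the Watson-Crick complement rule the text describes; it is not part of the authors' design framework:

```python
# The base-pairing rule described above: A pairs with T, C pairs with G.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    """Return the complementary DNA strand, base by base."""
    return "".join(PAIR[base] for base in strand)

print(complement("ACTG"))  # TGAC
```

It is this predictable, one-to-one matching that lets designers program which DNA building blocks will stick to which.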

Using base-pairing properties allows researchers like Petr Sulc, corresponding author of the new study, to design DNA nanostructures that can take shape in a test tube, as if on autopilot.

"The possible number of ways how to design interactions between the building blocks is enormous, something what is called a 'combinatorial explosion'" Sulc says. "It is impossible to individually check every possible building block design and see if it can self-assemble into the desired structure. In our work, we provide a new general framework that can efficiently search the space of possible solutions and find the one which self-assembles into the desired shape and avoids other undesired assemblies."

Sulc is a researcher at the Biodesign Center for Molecular Design and Biomimetics and ASU's School of Molecular Sciences (SMS). He is joined by his colleague Lukáš Kroc along with international collaborators Flavio Romano and John Russo from Italy.

The new technique marks an important stepping stone in the rapidly-developing field of DNA nanotechnology, where self-assembled forms are finding their way into everything from nanoscale tweezers to cancer-hunting DNA robots.

Despite impressive advances, construction methods relying on molecular self-assembly have had to contend with unintended bonding of building materials. The challenges grow with the complexity of the intended design. In many cases, researchers are perplexed as to why certain structures self-assemble from a given set of elementary building blocks, as the theoretical foundations of these processes are still poorly understood.

To confront the problem, Sulc and colleagues have invented a clever color-coding system that manages to restrict the base pairings to only those appearing in the design blueprint for the final structure, with alternate base-pairings forbidden.

The process works through a custom-designed optimization algorithm, where the correct color code for self-assembly of the intended form produces the target structure at an energy minimum, while excluding competing structures.

Next, they put the system to work, using computers to design two crystal forms of great importance to the field of photonics: pyrochlore and cubic diamond. The authors note that this innovative method is applicable to any crystal structure.

To apply their theoretical framework, Sulc has started a new collaboration with professors Hao Yan and Nick Stephanopoulos, his colleagues at Biodesign and SMS. Together, they aim to experimentally realize some of the structures that they were able to design in simulations.

"While the obvious application of our framework is in DNA nanotechnology, our approach is general, and can be also used for example to design self-assembled structures out of proteins," Sulc says.

Credit: 
Arizona State University

Magnetic field with the edge!

video: Note the start of the magnetic field at the interface between the beam and plasma. The field deep inside the plasma starts much later.

Image: 
Atul Kumar

A team of Indian and Japanese physicists has overturned the six-decade-old notion that the giant magnetic field in a high-intensity laser-produced plasma evolves from small, nanometre scales in the bulk plasma [1]. They show that the field instead originates at macroscopic scales defined by the boundaries of the electron beam propagating in the plasma. The new mechanism may alter our understanding of magnetic fields in astrophysical scenarios and laser fusion, and may help in the design of next-generation high-energy particle sources for imaging and therapies.

Giant magnetic fields, a billion times stronger than that of the Earth, exist in the hot, dense plasma of astrophysical systems like neutron stars [2]. Basic electromagnetism, established since the times of Oersted and Faraday, tells us that it is the current in a system that causes magnetic fields. In a plasma there are two currents: a forward-propagating one, and an opposing return current induced by the forward current itself. If the currents are equal and overlap in space, there is no net magnetic field. However, small fluctuations in the plasma can separate them and lead to an instability that grows with time. Indeed, for decades it has been believed that the giant fields arise from the interaction of opposing currents inside the bulk plasma via the famous Weibel instability [3], at scales much smaller than the beams themselves. The magnetic field is then said to spread out to macroscopic scales via what is called an inverse cascade, in a 'bottom-up' fashion.

In contrast, the India-Japan team shows that the field actually originates at the boundary of the current beam, that is, at macroscopic length scales, and moves inwards to smaller scales (top down!). And the magnitude of this field is much larger than that caused by the Weibel and other instabilities. The team christens this the 'finite beam mechanism' to indicate the crucial role of the finite size of the current beam in this mode. They show that radiation leaks out of the edges of the current, destabilizing the beam and causing the magnetic field. There is clear evidence for this mode in their laser experiments and computer simulations.

Why has this new mode been missed in all the computer simulations over the past many decades? The authors point out that this is due to the assumptions of homogeneity and infinite extent typical of such simulations. However, real physical systems have boundaries, and the physics there leads to several interesting effects -- examples are the focusing of charged particles by the fringe fields at the end of capacitor plates, the famous Casimir effect that leads to attraction between plates due to quantum effects, and the surface-propagating electromagnetic modes known as surface plasmons, quite popular in nano-optics and near-field microscopies.

Caution! Tread carefully at the edge.

References:

[1] A. Pukhov, Strong field interaction of laser radiation, Rep. Prog. Phys. 66 (2003) 47-101

[2] For example, see https://imagine.gsfc.nasa.gov/science/objects/neutron_stars1.html

[3] E. S. Weibel, Spontaneously Growing Transverse Waves in a Plasma Due to an Anisotropic Velocity Distribution, Phys. Rev. Lett. 2, 83 (1959)

Credit: 
Tata Institute of Fundamental Research

NASA catches development of eastern Atlantic's tropical storm Vicky

image: On Sept. 14 at 0329 UTC (Sept. 13 at 11:29 a.m. EDT) NASA's Aqua satellite analyzed a low-pressure area in the eastern Atlantic Ocean using the Atmospheric Infrared Sounder or AIRS instrument. AIRS found coldest cloud top temperatures as cold as or colder than (purple) minus 63 degrees Fahrenheit (minus 53 degrees Celsius) around the center of circulation, as the storm was forming into Tropical Depression 21.

Image: 
NASA JPL/Heidar Thrastarson

NASA's Aqua satellite analyzed a low-pressure area in the far eastern Atlantic Ocean, and it showed the system becoming more organized. Soon after Aqua passed overhead, the low became Tropical Depression 21. Hours later, the storm strengthened into Tropical Storm Vicky.

Infrared Imagery Revealed a Consolidating System

One of the ways NASA researches tropical cyclones is using infrared data that provides temperature information. The AIRS instrument aboard NASA's Aqua satellite captured a look at those temperatures in the developing low-pressure area and gave insight into the size of the storm and its rainfall potential.

Cloud top temperatures provide information to forecasters about where the strongest storms are located within a tropical cyclone. Tropical cyclones do not always have uniform strength, and some sides are stronger than others. The stronger the storms, the higher they extend into the troposphere, and the colder the cloud top temperatures. NASA provides that data to forecasters at NOAA's National Hurricane Center, or NHC, so they can incorporate it into their forecasting.

On Sept. 14 at 0329 UTC (Sept. 13 at 11:29 a.m. EDT) NASA's Aqua satellite analyzed the low-pressure area using the Atmospheric Infrared Sounder or AIRS instrument. AIRS found coldest cloud top temperatures as cold as or colder than minus 63 degrees Fahrenheit (minus 53 degrees Celsius) around the center of circulation. NASA research has shown that cloud top temperatures that cold indicate strong storms that have the capability to create heavy rain.
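The temperature threshold quoted above can be verified with the standard Celsius-to-Fahrenheit formula, F = C × 9/5 + 32:

```python
# Check the AIRS cloud-top temperature threshold quoted in the text.
def c_to_f(c):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return c * 9 / 5 + 32

# Minus 53 degrees Celsius is about minus 63 degrees Fahrenheit:
print(round(c_to_f(-53)))  # -63
```

Cloud tops at or below this temperature sit high in the troposphere, which is why they mark the strongest storms within the cyclone.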

Forecasters looked at wind speeds in the storm to determine that it had strengthened into a tropical storm. The Advanced Scatterometer (ASCAT) winds products are processed by NOAA/NESDIS utilizing measurements from the scatterometer instrument aboard the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) Metop satellites.

U.S. Navy Hurricane Specialist Dave Roberts at NOAA's National Hurricane Center in Miami, Fla. noted, "A METOP A/B ASCAT scatterometer pass over the cyclone showed a large swath of winds in the northeast quadrant on the order of 35 to 39 knots. Deep convection in that region of the cyclone continues to increase as well as near the center of circulation. Accordingly, the initial intensity is raised to 40 knots, making this the twentieth named storm of the season."

Vicky's Status on Sept. 14

At 11 a.m. EDT (1500 UTC), Vicky was centered about 350 miles (565 km) west-northwest of the Cabo Verde Islands. The center was near latitude 18.7 degrees north and longitude 28.5 degrees west. Vicky was moving toward the northwest near 6 mph (9 kph) and this motion is forecast to continue into this afternoon. A turn toward the northwest is forecast tonight, with a west-northwestward motion expected on Tuesday and Wednesday.

Maximum sustained winds have increased to near 45 mph (75 kph) with higher gusts. Little change in strength is expected during the next day or so. The estimated minimum central pressure is 1002 millibars.

Vicky's Forecast

NHC expects Vicky to be a short-lived tropical cyclone: increasing southwesterly wind shear (winds from outside the tropical cyclone that batter and weaken it) is expected to quickly weaken Vicky to a depression in a couple of days, and the system is expected to degenerate into a remnant low-pressure area on Thursday, Sept. 17.

The AIRS instrument is one of six instruments flying on board NASA's Aqua satellite, launched on May 4, 2002.

NASA Researches Tropical Cyclones

Hurricanes/tropical cyclones are the most powerful weather events on Earth. NASA's expertise in space and scientific exploration contributes to essential services provided to the American people by other federal agencies, such as hurricane weather forecasting.

For more than five decades, NASA has used the vantage point of space to understand and explore our home planet, improve lives and safeguard our future. NASA brings together technology, science, and unique global Earth observations to provide societal benefits and strengthen our nation. Advancing knowledge of our home planet contributes directly to America's leadership in space and scientific exploration.

Credit: 
NASA/Goddard Space Flight Center

Toxic metals can affect student health and performance, say scientists from RUDN University

image: A team of medics and ecologists from RUDN University measured the concentrations of heavy metals in the bodies of first-year university students from different countries of the world. The results of the screening helped the scientists establish a connection between the region of residence and the levels of toxic metals in the body. According to the team, increased heavy metal concentrations in the bodies of students from Africa and Latin America can have a negative impact on their health and performance.

Image: 
RUDN University

A group of medical and environmental researchers from RUDN University evaluated the levels of heavy metals in the bodies of first-year university students from different countries of the world. The results of the screening helped the scientists reveal a relationship between the region of residence and the levels of toxic metals in the body. According to the researchers, increased heavy metal levels in the bodies of students from Africa and Latin America can have a negative impact on their health and performance. The results of the study were published in the journal Environmental Science and Pollution Research.

The group of heavy metals includes over 40 elements, the most poisonous of which are cadmium, lead, mercury, arsenic, and nickel. The main sources of heavy metals are industrial facilities: lead is used to produce batteries and electric cables, cadmium is an element of anti-corrosive coatings and electrodes, and semiconductor materials called arsenides are based on arsenic. Heavy metal compounds pollute water, soil, and air, and from there get into the human body. RUDN medics and ecologists studied the concentrations of heavy metals in the hair and urine of students from 48 countries and analyzed the effect of pollution on their health.

"The concentrations of heavy metals in human organism is an indicator of general pollution levels in respective regions. Our goal was to identify arsenic, cadmium, mercury, and lead markers in the samples taken from the students of RUDN University. RUDN has the highest rate of foreign students in Russia, and they predominantly come from Asia, Africa, and Latin America," said professor Anatoly Skalny, a Head of the Department of Medical Elementology, RUDN University.

Of the 274 participants in the screening, 65 represented Russia, 57 came from Asian countries, 84 were born in the Middle East, 40 were from Africa, and 28 from Latin America. The researchers evaluated the levels of arsenic, cadmium, mercury, and lead in the urine and hair of first-year students who had arrived in Moscow from their home regions shortly before the beginning of the study. The measurements were performed using highly sensitive inductively coupled plasma mass spectrometry, a method that can determine metal content in biological samples at negligible (trace) levels.

The highest levels of cadmium and lead were found in the samples taken from African and Latin American students. The latter also had the highest concentration of mercury in their hair. As for urine samples, Middle Eastern and Latin American students had the highest mercury levels, and African students had the highest levels of lead. According to the researchers, this might be because Latin American countries are heavily involved in electronic waste processing and artisanal gold mining, while many heavy industrial facilities are located in the Middle East.

The results of the study indicate a risk of heavy metal poisoning that could have a negative impact on the health and performance of the students.

"High levels of heavy metals induce toxic effects and interfere with adaptive reactions. In addition to the high levels of psychological stress that foreign students live under, increased heavy metal exposure may result in higher incidence of diseases in their first-year of studies. In the future, we plan to evaluate the effect of heavy metals on the health and performance of RUDN University students," said Anatoly Kirichuk, PhD, associate Professor of the Department of forensic ecology with a course in human ecology.

Credit: 
RUDN University

CCNY engineer Xi Chen and partners create new shape-changing crystals

image: A tripeptide crystal with aqueous pores can reversibly deform in response to humidity changes.

Image: 
Image courtesy: Tong Wang

Imagine harnessing evaporation as a source of energy, or developing next-generation actuators and artificial muscles for a broad array of applications. These are the new possibilities opened by the creation of shape-changing crystals that enable energy transfer from evaporation to mechanical motion, by an international team of researchers led by The City College of New York's Xi Chen and his co-authors at the CUNY Advanced Science Research Center. The study, titled "Mechanistic insights of evaporation-induced actuation in supramolecular crystals," appears in the journal Nature Materials.

Unlike traditional crystals, which are usually stiff and brittle, the new crystals can change their shapes, a capability enabled by their molecular architecture. The crystals are composed of a pattern of small pores interspersed with flexible connecting domains, repeated throughout the crystal structure. The pores that run throughout the crystals bind strongly to water molecules.

"When evaporation causes water to be removed from the pores, this results in a forceful deformation of the entire crystal through a network-like connection. The resulting shape-change is reversed when water vapor is reintroduced," said Chen, the corresponding author of the research and an assistant professor, chemical engineering, in CCNY's Grove School of Engineering. "Our peptide crystals allow the direct observation of water-material interactions at the molecular level by using existing crystallographic, spectroscopic and computational methods. The revealed actuation mechanisms are applicable more generally for the deigns of materials or structures that efficiently harness evaporation."

Materials that drive these motions are known as water-responsive or humidity-responsive materials. These materials, which swell and contract in response to changes in humidity, can directly and efficiently convert energy from evaporation into mechanical motion. This new field opens up possibilities for accessing untapped water evaporation as a source of energy, as well as for developing better actuators and artificial muscles for modern engineering systems.

Credit: 
City College of New York

Florida State-led team offers new rules for algae species classification

image: Florida State University Assistant Professor of Biological Science Sophie McCoy surveys the algal community underneath a canopy of "sea spaghetti" (Himanthalia elongata) at Cape Cornwall, England.

Image: 
Photo by Sophie McCoy.

FSU Assistant Professor of Biological Science Sophie McCoy and her team are proposing formal definitions for algae species and subcategories for the research community to consider: They are recommending algae be classified first by DNA and then by other traits.

The work, which includes collaborations with Stacy Krueger-Hadfield, assistant professor of biology at the University of Alabama at Birmingham, and Nova Mieszkowska, a research fellow at the Marine Biological Association in the United Kingdom, was published this week in the Journal of Phycology.

"Algal species should evolve separately from other lineages, so that's DNA-based, but we should also take into account differences in their ecology, such as what they look like or their role in the environment," McCoy said.

The article was published as a perspective rather than offering definitive answers, and the team hopes the larger scientific community will comment on it and start an important conversation.

Algae matter more than most people realize because the organisms make about half of the oxygen in the world, McCoy said. Humanity depends on algae, as does the entire food web of the ocean.

Scientists have established ways to define animal species, such as determining an organism's ability to produce viable offspring that can subsequently reproduce. For instance, a horse and a donkey can create a mule, but a mule cannot reproduce. That helps classify horses and donkeys as separate species. But that type of categorization doesn't work well for algae, which have unique and complex life stages and very often interbreed with other algal species.

"Rather than having a 'species tree,' like a family tree, algae have more of a web," McCoy said.

That intricacy has made it difficult to formalize categories to classify algae species. Some scientists might classify the offspring of two algal species as a distinct new species while others would not. Or some might classify algae species by distinct DNA sequences while others classify by physical characteristics.

"We aren't all using the same rules, so are we actually looking at different breeds or populations and then artificially calling them species?" McCoy said. "Depending on how we apply these rules, the number of species could go way up or way down."

The International Union for Conservation of Nature (IUCN) Red List of Threatened Species is the world's most comprehensive inventory of the global conservation status of biological species. The Red List helps scientists evaluate a species' extinction risk. So how a species is defined changes the perception of biodiversity and conservation, she said.

Beyond conservation, catastrophes -- from algal blooms in waterways to the destruction of coral reefs -- could be mitigated by discussing and clarifying algal species classification. McCoy said some of the mysteries surrounding this type of growth are likely related to a lack of uniform identification.

"If we are mistakenly separating or grouping species, we're just not going to understand how different types of algae are responding to pollution or climate change," she said.

This philosophical change in what it means to be a species is a starting point for McCoy and the team. In addition to starting a conversation, she plans to conduct research that builds on the concept over the next year.

Credit: 
Florida State University

Mayo Clinic and TGen ID potential targets for the most-deadly form of pancreatic cancer

PHOENIX and SCOTTSDALE, Ariz. -- Sept. 14, 2020 -- A team of researchers led by Mayo Clinic and the Translational Genomics Research Institute (TGen), an affiliate of City of Hope, has identified specific potential therapeutic targets for the most aggressive and lethal form of pancreatic cancer.

In what is believed to be the most comprehensive analysis of adenosquamous cancer of the pancreas (ASCP), the Mayo Clinic and TGen team identified, in preclinical models, therapeutic targets for this extremely fast-moving and deadly form of pancreatic cancer, along with already-available cancer inhibitors originally designed for other types of cancer, according to a study published today in Cancer Research, a journal of the American Association for Cancer Research (AACR).

"The rarity of ASCP, the scarcity of tissue samples suitable for high resolution genomic analyses, and the lack of validated preclinical models, has limited the study of this particularly deadly subtype of pancreatic cancer," said Dr. Daniel Von Hoff, Distinguished Professor and TGen's Physician-In-Chief, considered one of the nation's foremost authorities on pancreatic cancer, and one of the study's authors. "We need entirely new possible approaches for our patients with ASCP."

Pancreatic ductal adenocarcinoma (PDAC) is the most common form of pancreatic cancer, which this year is projected to kill nearly 57,600 Americans, making it the nation's third leading cause of cancer-related death, according to the American Cancer Society. Among pancreatic cancer patients, a small percentage (less than 4%) are diagnosed with ASCP, a particularly aggressive form of pancreatic cancer.

"ASCP currently has no effective therapies. Unlike PDAC, ASCP is defined by the presence of more than 30% squamous (skin-like) epithelial cells in the tumor. The normal pancreas does not contain squamous cells," said the study's senior author, Michael Barrett, Ph.D., who holds a joint research appointment at Mayo Clinic and TGEN.

"Our study has shown that ASCPs have novel 'hits' (mutations and deletions) in genes that regulate tissue development and growth superimposed on the common mutational 'landscape' of a typical PDAC. As a consequence, cells within the tumor have the ability to revert to a stem-cell-like state that includes changes in cell types and appearance, and the activation of signaling pathways that drive the aggressive nature of ASCP,' said Dr. Barrett.

While this activated aggressive stem-cell-like state is very resistant to current therapies for pancreatic cancer, Dr. Barrett said, the study has shown that ASCP can be targeted by drugs currently in clinical use for other cancers as well as non-cancer related conditions.

Using multiple cancer analysis methods and platforms -- including flow cytometry, copy number analysis, whole exome sequencing, variant calling and annotation, ATAC-seq, immunofluorescence, immunohistochemistry, single cell sequencing, and organoid cultures and treatments -- the Mayo Clinic and TGen research team conducted what is believed to be the most in-depth analysis of ASCP tissue samples.

Researchers identified multiple mutations and genomic variants that are common to both PDAC and the more aggressive ASCP. "Of significant interest," the study says, the team also identified two potential therapeutic targets unique to ASCP genomes: FGFR signaling, including an FGFR1-ERLIN2 gene fusion, and a pancreatic cancer stem cell regulator known as RORC.

These data provide a unique description of the ASCP genomic and epigenomic landscape and identify candidate therapeutic targets for this lethal cancer, the study says.

Using organoids, which are laboratory cultures derived from samples of patient tumors, researchers tested the activity and functional significance of candidate therapeutic targets. According to the study: "Specifically, organoids carrying the FGFR1-ERLIN2 fusion show a significant response to pharmacological FGFR inhibition," providing new candidate targets for developing therapies for patients with ASCP.

In addition, the study says, "To our knowledge, this is the first study to apply DNA content sorting to the genomic analysis of ASCP," a method that purifies the cancer DNA from other cells and parts of cells, thereby eliminating any biological "noise" that might impede the precision of the analysis.

Using an interrogation tool known as ATAC-seq, researchers also identified RORC as another distinguishing feature of ASCP.

"Of significant interest will be clinical trials with FGFR and RORC inhibitors that include correlative studies of genomic and epigenomic lesions in both ASCP and PDAC," the study concludes.

Also contributing to this study -- Genomic and Epigenomic Landscaping Defines New Therapeutic Targets for Adenosquamous Carcinoma of the Pancreas -- were: Virginia G Piper Cancer Center at HonorHealth; University of California, San Diego School of Medicine; Salk Institute for Biological Studies; Memorial Sloan Kettering Institute; and University of Nebraska Medical Center (UNMC).

Credit: 
The Translational Genomics Research Institute

Predicting the slow death of lithium-ion batteries

Batteries fade as they age, slowly losing power and storage capacity.

As in people, aging plays out differently from one battery to another, and it's next to impossible to measure or model all of the interacting mechanisms that contribute to decline. As a result, most of the systems used to manage charge levels wisely and to estimate driving range in electric cars are nearly blind to changes in the battery's internal workings.

Instead, they operate more like a doctor prescribing treatment without knowing the state of a patient's heart and lungs, and the particular ways that environment, lifestyle, stress and luck have ravaged or spared them. If you've kept a laptop or phone for enough years, you may have seen where this leads firsthand: Estimates of remaining battery life tend to diverge further from reality over time.

Now, a model developed by scientists at Stanford University offers a way to predict the true condition of a rechargeable battery in real time. The new algorithm combines sensor data with computer modeling of the physical processes that degrade lithium-ion battery cells to predict the battery's remaining storage capacity and charge level.

"We have exploited electrochemical parameters that have never been used before for estimation purposes," said Simona Onori, assistant professor of energy resources engineering in Stanford's School of Earth, Energy & Environmental Sciences (Stanford Earth). The research appears Sept. 11 in the journal IEEE Transactions on Control Systems Technology.

The new approach could help pave the way for smaller battery packs and greater driving range in electric vehicles. Automakers today build in spare capacity in anticipation of some unknown amount of fading, which adds extra cost and materials, including some that are scarce or toxic. Better estimates of a battery's actual capacity will enable a smaller buffer.

"With our model, it's still important to be careful about how we are using the battery system," Onori explained. "But if you have more certainty around how much energy your battery can hold throughout its entire lifecycle, then you can use more of that capacity. Our system reveals where the edges are, so batteries can be operated with more precision."

The accuracy of the predictions in this model - within 2 percent of actual battery life as gathered from experiments, according to the paper - could also make it easier and cheaper to put old electric car batteries to work storing energy for the power grid. "As it is now, batteries retired from electric cars will vary widely in their quality and performance," Onori said. "There has been no reliable and efficient method to standardize, test or certify them in a way that makes them competitive with new batteries custom-built for stationary storage."

Dropping old assumptions

Every battery has two electrodes - the cathode and the anode - sandwiching an electrolyte, usually a liquid. In a rechargeable lithium-ion battery, lithium ions shuttle back and forth between the electrodes during charging and discharging. An electric car may run on hundreds or thousands of these small battery cells, assembled into a big battery pack that typically accounts for about 30 percent of the total vehicle cost.

Traditional battery management systems typically rely on models that assume the amount of lithium in each electrode never changes, said lead study author Anirudh Allam, a PhD student in energy resources engineering. "In reality, however, lithium is lost to side reactions as the battery degrades," he said, "so these assumptions result in inaccurate models."

Onori and Allam designed their system with continuously updated estimates of lithium concentrations and a dedicated algorithm for each electrode, which adjusts based on sensor measurements as the system operates. They validated their algorithm in realistic scenarios using standard industry hardware.
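The estimator itself isn't spelled out in this release. Purely as an illustrative sketch of the general idea -- a model-based predictor continuously corrected by voltage measurements, with the capacity estimate adapted slowly -- a toy observer might look like the following. The open-circuit-voltage curve, the gains, and all numbers are invented for illustration, not taken from the paper.

```python
# Toy illustration (not the Stanford algorithm): a feedback observer that
# corrects an internal state-of-charge and capacity estimate so that the
# model's predicted voltage tracks the measured voltage over time.

def predicted_voltage(soc):
    """Hypothetical linear open-circuit-voltage curve: volts as a
    function of state of charge (0..1). Real OCV curves are nonlinear."""
    return 3.0 + 1.2 * soc

def run_observer(measurements, current_a, dt_s, capacity_guess_ah, gain=0.5):
    """Track state of charge and capacity from a stream of voltage
    measurements taken while discharging at a constant current."""
    soc = 0.5                      # initial state-of-charge guess
    capacity = capacity_guess_ah   # initial capacity guess (amp-hours)
    for v_meas in measurements:
        # Predict: coulomb counting moves SOC according to current draw.
        soc -= current_a * dt_s / 3600.0 / capacity
        # Correct: feed the voltage prediction error back into the states.
        error = v_meas - predicted_voltage(soc)
        soc += gain * error / 1.2       # invert the OCV slope (1.2 V per unit SOC)
        capacity += 0.1 * gain * error  # adapt capacity slowly toward agreement
    return soc, capacity
```

In a real battery management system the voltage model, electrochemical states and gains would be far richer; the point of the sketch is only that feeding the prediction error back into both the charge state and the capacity lets the estimate track a battery whose true capacity has faded.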

On the road

The model relies on data from sensors found in the battery management systems running in electric cars on the road today. "Our algorithm can be integrated into current technologies to make them operate in a smarter fashion," Onori said. In theory, many cars already on the road could have the algorithm installed on their electronic control units, she said, but the expense of that kind of upgrade makes it more likely that automakers would consider the algorithm for vehicles not yet in production.

The team focused their experiments on a type of lithium-ion battery commonly used in electric vehicles (lithium nickel manganese cobalt oxide) to estimate key internal variables such as lithium concentration and cell capacity. But the framework is general enough that it should apply to other kinds of lithium-ion batteries and account for other mechanisms of battery degradation.

"We showed that our algorithm is not just a nice theoretical work that can run on a computer," she said. "Rather, it is a practical, implementable algorithm which, if adopted and used in cars tomorrow, can result in the ability to have longer-lasting batteries, more reliable vehicles and smaller battery packs."

Credit: 
Stanford University

Human activities promote disease-spreading mosquitoes; more study needed for prevention

image: Researchers collect mosquitoes in Kruger National Park in South Africa to gauge abundance of disease-spreading species.

Image: 
Courtesy Dr. Brianna Beechler, Oregon State University

CORVALLIS, Ore. -- Disease-spreading mosquitoes may be more likely to occupy areas impacted by human activities, such as pesticide use and habitat destruction, than areas less disturbed by humans, a recent Oregon State University study found.

Working in a national park in South Africa, researchers found a significant difference in the abundance and species composition of mosquitoes inside the park versus densely populated areas outside the park, with the species known to spread diseases such as malaria and Zika virus more common in the human-impacted areas outside the park.

"People care a lot about what environment a lion needs to succeed in; we've researched that extensively. But people don't do that with mosquitoes. We don't understand them as a group of species and how their ecology differs between species," said study co-author Dr. Brianna Beechler, a disease ecologist and assistant professor of research in Oregon State University's Carlson College of Veterinary Medicine.

To find disease mitigation strategies for vector-borne diseases, which are diseases that spread via parasites like mosquitoes and ticks, mosquitoes are an obvious target, Beechler said. But scientists don't yet understand mosquitoes well enough to specifically target the species that cause disease.

"All we can do is reduce mosquitoes overall, but what may be more effective is to reduce certain species by modifying their habitats," she said.

To compare how mosquitoes fared inside Kruger National Park versus in densely populated areas, researchers looked at five "pressures" wrought by human presence: organophosphate pesticide abundance; eutrophication, the over-enrichment of water with nutrients, which leads to widespread algae growth; population density; ungulate biomass, which includes domestic animals like cattle and wild animals like impala and buffalo; and vegetation loss.

Human populations affect mosquito habitat and breeding patterns in a sort of domino effect. For example, pesticide use spreads into ponds and other small bodies of water, killing the fish and removing the natural predators that would otherwise eat mosquito larvae and keep the insect population low.

During South Africa's wet season in 2016-17, researchers trapped 3,918 female mosquitoes from 39 different species both inside and outside the national park.

Mosquito abundance was nearly three times higher outside the park -- in areas dominated by humans -- than inside the park. And there was a significant difference in the species composition of mosquitoes, with the species known to spread diseases (like dengue, West Nile virus, chikungunya, yellow fever and Zika virus) more common outside the park than inside.

"It seems to suggest that disease-carrying mosquito species certainly did better in human-altered environments," Beechler said, though she noted it's hard to ascertain at this stage why that is. More study is needed to understand the ecological requirements of different mosquito species.

There are some success stories among current mosquito mitigation strategies. Beechler cited a technique in the Caribbean where residents are encouraged to introduce fish into any standing water in their vicinity, so the fish can eat mosquito larvae before they can mature. And several countries have experimented with releasing clusters of sterile mosquitoes into the wild, which consume resources and compete for mates but are not able to produce offspring.

"But none of those are targeted at disease-transmitting mosquitoes versus non-disease-transmitting mosquitoes," she said. "It's just, 'All mosquitoes are created equal.'"

Vector-borne disease is not currently a pressing issue in Oregon, though there are several different species of mosquito in the state and some have been known to spread West Nile virus. However, mosquitoes carrying malaria, Zika and chikungunya have all been pushing into new territory in recent years.

"With climate change, mosquito distributions are likely to change, and disease distributions are likely to change," Beechler said. "So it'd be nice to know how to target those species before that happens."

Credit: 
Oregon State University

Some but not all US metro areas could grow all needed food locally, estimates study

image: A new modeling study finds that urban centers in green could feed themselves with cultivated cropland located within an average distance of 250 kilometers (155 miles), but urban centers in yellow, orange and red would need to draw from wider areas - 250 kilometers or more.

Image: 
Tufts University

BOSTON (Sept. 14, 2020, 9:00 a.m. EDT)--Some but not all U.S. metro areas could grow all the food they need locally, according to a new study estimating the degree to which the American food supply could be localized based on population, geography, and diet.

The modeling study, led by Christian Peters at the Gerald J. and Dorothy R. Friedman School of Nutrition Science and Policy at Tufts University, is published today in Environmental Science & Technology.

The model estimates whether 378 metropolitan areas could meet their food needs from local agricultural land located within 250 kilometers (155 miles). Local potential was estimated based on seven different diets, including the current typical American diet.

The results suggest:

* Metro centers in the Northwest and interior of the country have the greatest potential for localization.

* Large portions of the population along the Eastern Seaboard and the southwest corner of the U.S. would have the least potential for localization.

* Surplus land existed under all diet scenarios, raising questions about the best use of land for meeting health, environmental, and economic goals.

"Not everyone lives near enough agricultural land to have an entirely local or even regional food supply. Most cities along the Eastern Seaboard and in the southwest corner of the U.S. could not meet their food needs locally, even if every available acre of agricultural land was used for local food production. Yet, many cities in the rest of the country are surrounded by ample land to support local and regional food systems," said Peters, senior author and associate professor at the Friedman School, whose research focuses on sustainability science.

Peters and his team also modeled seven different diets to estimate whether dietary changes could make a difference in the potential to produce sufficient food for a metro area. The diets ranged from the current typical American diet, which is high in meat, to vegan. Reducing animal products in the diet increased the potential to produce all food locally, up to a point. Diets with less than half the current consumption of meat supported similar levels of localization potential, whether omnivore or vegetarian. Consumption of meat (beef, pork, chicken and turkey) for the baseline typical American diet was estimated at roughly five ounces per day.
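The model's internals aren't given in this release. As a hypothetical back-of-the-envelope version of the comparison it performs -- population times per-capita land footprint versus cropland within range -- consider the following sketch. All land-footprint figures and the example metro are invented, not taken from the study.

```python
# Back-of-the-envelope sketch (all figures hypothetical): can the cropland
# within a given radius of a metro area cover its population's diet?

def can_feed_locally(population, cropland_ha, land_need_ha_per_person):
    """True if local cropland covers the diet's per-capita land footprint."""
    return cropland_ha >= population * land_need_ha_per_person

# Illustrative per-capita annual land needs (hectares/person, invented):
diets = {
    "current_omnivore": 0.9,   # meat-heavy baseline
    "low_meat": 0.6,           # less than half current meat consumption
    "vegan": 0.5,
}

# A hypothetical metro: 4 million people, 3 million ha of cropland in range.
for diet, need in diets.items():
    print(diet, can_feed_locally(4_000_000, 3_000_000, need))
```

For this invented metro, only the reduced-meat and vegan diets fit within local cropland: 4 million people times 0.9 ha exceeds the 3 million ha available, while 4 million times 0.6 ha does not -- mirroring the study's finding that cutting meat raises localization potential up to a point.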

"There would be different ways to do it. Imagine, if we cut back to fewer than two and a half ounces per day by serving smaller portions of meat and replacing some meat-centric entrees with plant-based alternatives, like lentils, beans and nuts. More diverse sources of protein could open new possibilities for local food. Nutrition research tells us that there could be some health benefits, too," said corresponding author Julie Kurtz, who was a master's degree student at the Friedman School at the time of the study.

Under all the diet scenarios, the model projected the United States having a surplus of land for meeting domestic food needs. In the current American agricultural system, some farmland is used for biofuels and export crops. The researchers point out that if metro centers focused on eating locally, many agricultural areas would face new questions about local land use priorities.

"It would be important to make sure policies for supporting local or regional food production benefit conservation and create opportunities for farmers to adopt more sustainable practices. Policies should also recognize the capacity of the natural resources in a given locale or region--and consider the supply chain, including capacity for food processing and storage," Peters said.

Economic efficiency for food production was beyond the scope of the analysis. Also, the study is based on current conditions and does not consider how future climate change may affect future agricultural potential.

Credit: 
Tufts University, Health Sciences Campus

Hitchhiking seeds pose substantial risk of nonnative plant invasions

image: With backpack vacuums, the research team went looking for nonnative plant seeds on air-intake grilles of refrigerated shipping containers - and found thousands of them.

Image: 
Rima Lucardi, USFS

Seeds that float in the air can hitchhike in unusual places - like the air-intake grille of a refrigerated shipping container. A team of researchers from the USDA Forest Service, Arkansas State University, and other organizations recently conducted a study that involved vacuuming seeds from air-intake grilles over two seasons at the Port of Savannah, Georgia.

The viability of such seeds is of significant interest to federal regulatory and enforcement agencies, and the project required a shared stewardship approach. Imported refrigerated shipping containers are inspected by U.S. Customs and Border Protection's Agriculture Program, part of the Department of Homeland Security. The research team worked closely with this agency, as well as the USDA Animal and Plant Health Inspection Service and the Georgia Ports Authority.

Their findings were recently published in the journal Scientific Reports. Seeds from 30 plant taxa were collected from the air-intake grilles, including seeds of wild sugarcane (Saccharum spontaneum), a grass on the USDA Federal Noxious Weed List.

Federal noxious weeds pose immediate, significant threats to agriculture, nursery, and forestry industries. Although a lovely grass and useful in its native range, wild sugarcane has the potential to join cogongrass, stiltgrass, and other nonnative species that have become extremely widespread in the U.S.

"During the two shipping seasons, we estimate that over 40,000 seeds from this species entered the Garden City Terminal at the Port of Savannah," says Rima Lucardi, a Forest Service researcher and lead author of the project. "This quantity of incoming seeds is more than sufficient to cause introduction and establishment of this nonnative invader, even If the escape rate from the shipping containers is limited."

To estimate the chance that seeds would survive and establish in the U.S., Lucardi and her colleagues analyzed and modeled viable seeds from four plant taxa. All are prolific seed producers, wind-pollinated and wind-dispersed, and able to persist in a wide range of environmental conditions and climates.

The researchers propose several possible strategies for reducing risk to native ecosystems and agricultural commodities. For example, in lieu of labor-intensive vacuuming of air-intake grilles, a liquid pre-emergent herbicide could potentially be applied to containers while in port. Prevention and best management practices, from the farm to the store, reduce the probability of nonnative seeds establishing in the U.S. Inspection for exterior seeds hitching a ride on shipping containers at their points-of-origin or stops along the way would also reduce risk of invasion.

Preventing nonnative plant invasions is much more cost-effective in the long run than trying to manage them once they have spread and become widely established. "Investment in the prevention and early detection of nonnative plant species with known negative impacts results in nearly a 100-fold increase in economic return when compared to managing widespread nonnatives that can no longer be contained," says Lucardi.

Credit: 
USDA Forest Service - Southern Research Station

Halogen bonding: a powerful tool for constructing supramolecular co-crystalline materials

image: Co-crystallization of 1,4-DITFB with diverse halogen-bonding acceptors.

Image: 
©Science China Press

Since Lehn's famous definition of supramolecular chemistry, supramolecular synthesis has grown rapidly, yet it remains a formative field and an important route to new multifunctional material systems. Compared with traditional, often complex covalent synthesis, the organic co-crystal strategy based on noncovalent bonds offers distinctive advantages for constructing multicomponent supramolecular functional materials. A co-crystal not only retains the inherent properties of each component, but can also show novel physicochemical properties arising from synergistic effects between components, which helps to realize multifunctional materials. The co-crystallization process depends on molecular recognition and supramolecular self-assembly between components, driven by noncovalent interactions such as halogen bonds, hydrogen bonds, π-π stacking and van der Waals forces. Notably, halogen-bond strengths span a wide range, from 5 to 180 kJ/mol, which allows them to prevail over analogous hydrogen bonds in recognition processes. Constructing new multifunctional material systems based on halogen bonding has therefore become a research hotspot in supramolecular chemistry and materials science.

One common synthetic approach to halogen-bonded co-crystal systems is to combine halogen-bond donors and acceptors bearing complementary functional groups. On the donor side, halogen-bond strength increases in the order Cl < Br < I, following the decreasing electronegativity (and increasing polarizability) of the halogen, and can be further enhanced by introducing electron-withdrawing substituents such as fluorine. For this reason, 1,4-diiodotetrafluorobenzene (1,4-DITFB) was envisaged as an ideal halogen-bond donor with great potential for the design and synthesis of new multicomponent supramolecular co-crystalline materials.

Recently, Huang's group (Xue-Hua Ding, Yong-Zheng Chang, Chang-Jin Ou, Jin-Yi Lin, Ling-Hai Xie and Wei Huang) summarized the wide use of 1,4-DITFB in constructing halogen-bonded multicomponent supramolecular assemblies by co-crystallizing it with a variety of halogen-bond acceptors, ranging from neutral Lewis bases (nitrogen-containing compounds, N-oxides, chalcogenides, aromatic hydrocarbons and organometallic complexes) to anions (halide ions, thio/selenocyanate ions and tetrahedral oxyanions). The examples reviewed illustrate the vital role halogen bonds play in the co-crystallization process, yielding a wide diversity of impressive supramolecular architectures (for instance dimers, trimers, tetramers, pentamers, heptamers, thirteen-molecule finite chains, 1D infinite chains, highly undulated infinite ribbons, and 2D and 3D networks). Many of these co-crystals show interesting physicochemical properties, such as fluorescence, phosphorescence, magnetism, dielectric and nonlinear optical behavior, as well as liquid-crystal and supramolecular-gel phases. In addition, some co-crystals can be applied in photoelectric devices, e.g. optical waveguides, lasers, optical logic gates and memory.

In this review, the authors give an overview of research progress on 1,4-DITFB, concentrating on the structures of multicomponent supramolecular complexes and their related physicochemical properties, exploring the "structure-assembly-property" relationship, highlighting typical examples, and pointing out the main directions that remain to be developed in this field. From the perspectives of supramolecular chemistry and crystal engineering, the complexes presented here should provide important information for the further design and investigation of this fascinating class of halogen-bonded multicomponent supramolecular materials, and offer useful references for expanding the application of organic co-crystals in optoelectronics.

Credit: 
Science China Press