Tech

Male bladder cancer vulnerability could lead to a new treatment approach

A protein variant common in malignant bladder tumor cells may serve as a new avenue for treating bladder cancer. A multi-institution study led by UC Davis Comprehensive Cancer Center researchers found that targeting androgen receptors - a type of protein that is crucial for the function of testosterone - may destroy cancer cells.

The study, published in the April 28, 2021 issue of Cancer Letters, is important because, for the first time, it showed that a newly identified form of the protein is commonly expressed in bladder tumors and that depleting this protein causes the cells to die.

The research focused on androgens, hormones instrumental in male sexual development and growth of the prostate. One type of androgen is testosterone, a hormone that stimulates development of male sexual characteristics.

Androgen receptors are cellular proteins found in the tissues of several organs. These receptors enable the hormones to trigger certain responses in the body. Androgen hormones bind to the receptors, triggering expression of ribonucleic acids, which convert information stored in DNA into proteins.

Prostate cancer treatments include androgen deprivation therapy intended to block androgen receptors from binding to androgens, thwarting growth of malignant cells. Could bladder cancer be treated the same way?

"There is evidence that reducing androgen receptors leads to destruction of tumor cells -- including those in the bladder," said Maria Mudryj, senior author of the study and vice chair of education and outreach at the UC Davis School of Medicine Department of Microbiology and Immunology.

Androgen receptor (AR) proteins could be key to understanding bladder cancer in men.

"Earlier studies of patients with bladder cancer showed that androgen receptors are more abundant in tumor tissue than in normal tissue, and in bladder tumors of males than in females," Mudryj said.

The findings correlated with a study of men who had bladder cancer and were treated with 5α-reductase inhibitors, a class of drugs with antiandrogenic effects. Fewer men who received this treatment died, which suggests that the strategy of AR suppression may be effective for some patients.

Researchers don't yet know, however, if testing patients for presence of the AR variants could be used to determine which patients would benefit from specific treatments. Additional studies are needed.

"Further research may uncover vulnerabilities that could be exploited to design new therapeutic strategies for effective treatment of bladder malignancies," Mudryj said.

Credit: 
University of California - Davis Health

FSU researchers develop tool to track marine litter polluting the ocean

video: A brief example of a virtual tool developed by the Center for Ocean-Atmospheric Prediction Studies to track marine litter. The colored lines show the path of debris in the ocean.

Image: 
Courtesy of the Center for Ocean-Atmospheric Prediction Studies

In an effort to fight the millions of tons of marine litter floating in the ocean, Florida State University researchers have developed a new virtual tool to track this debris.

Their work, which was published in Frontiers in Marine Science, will help answer questions about how to monitor and deal with the problem of marine litter.


"Marine litter is found around the world, and we do not fully understand its impact on the ocean ecosystem or human health," said Eric Chassignet, director of FSU's Center for Ocean-Atmospheric Prediction Studies (COAPS) and the paper's lead author. "That's why it's important to learn more about this problem and develop effective ways to mitigate it."

Marine litter is a big problem for the Earth's oceans. Animals can get entangled in debris. Scientists have found tiny pieces of plastic inside fish, turtles and birds -- litter that blocks digestive tracts and alters feeding behavior, affecting growth and reproduction. Most of that marine litter is mismanaged plastic waste, which is of particular concern because plastics remain in the ocean for a long time.

Understanding where marine litter goes once it's in the ocean is a big part of understanding the issue and helping individual countries and the international community to develop plans to deal with the problem. The United Nations, which funded this work, is trying to mitigate the impact of mismanaged plastic waste, and this work can inform their policies and regulations.

Take the so-called Great Pacific Garbage Patch, a cluster of marine debris in the Pacific Ocean, for example. Tracking marine litter will help answer questions about whether it is growing larger and questions about how much plastic is breaking down or sinking to the bottom of the ocean. The virtual tool also shows how countries around the world are connected.

"Knowing where the marine litter released into the ocean by a given country goes and the origin of the litter found on the coastline of a given country are important pieces of information for policymakers," Chassignet said. "For example, it can help policymakers determine where to focus their efforts for dealing with this problem."

The tracking tool uses worldwide mismanaged plastic waste data as inputs for its model. The model uses data about ocean and air currents to track marine debris starting from 2010. Fire up the website and you can watch as colorful lines swirl across the Earth's oceans. It looks pretty -- until you realize it is tracking litter.
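To make the idea concrete, here is a minimal sketch (not the COAPS system) of how virtual debris particles can be stepped forward through a surface-current field; the gyre-like velocity function, release points and time step below are invented for illustration.

```python
import numpy as np

# Minimal sketch: advect virtual "debris" particles with a prescribed
# surface-current field. The velocity field here is a made-up circular gyre;
# a real tracker would read velocities from an ocean/atmosphere model.

def surface_current(lon, lat):
    """Hypothetical gyre-like surface current (degrees per day)."""
    u = -0.05 * (lat - 25.0)   # eastward component
    v = 0.05 * (lon + 90.0)    # northward component
    return u, v

def advect(release_points, days=365, dt=1.0):
    """Step particles forward with simple Euler integration."""
    tracks = [np.array(release_points, dtype=float)]
    pos = tracks[0].copy()
    for _ in range(int(days / dt)):
        u, v = surface_current(pos[:, 0], pos[:, 1])
        pos = pos + dt * np.column_stack([u, v])
        tracks.append(pos.copy())
    return np.stack(tracks)          # shape: (time, particle, lon/lat)

# Release a few particles at hypothetical coastal points and follow them for a year.
tracks = advect([(-90.0, 25.0), (-88.5, 27.0), (-86.0, 24.0)])
print(tracks.shape, tracks[-1])      # final positions after 365 days
```

A full tracker would replace surface_current with interpolated velocities from a modeled current field and add windage and diffusion terms, but the bookkeeping is the same: release points in, trajectories out.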

COAPS -- an interdisciplinary research center focusing on air-sea interaction, the ocean-atmosphere-land-ice earth system and climate prediction -- is 25 years old this year. Researchers at the center use sophisticated ocean models to map the ocean and predict ocean currents, which help scientists understand where marine litter released into the ocean is likely to travel and end its journey.

"If you have data for the past 20 years, a lot can be done in terms of modeling and simulations," Chassignet said.

Credit: 
Florida State University

Eye movements of those with dyslexia reveal laborious and inefficient reading strategies

image: Aaron Johnson: "Now that we know that there are these differences in how dyslexics read, we have to ask what we can do to improve their reading."

Image: 
Concordia University

Researchers have long noted that readers with dyslexia employ eye movements that are significantly different from non-dyslexics. While these movements have been studied in small sample sizes in the past, a new paper written by Concordia researchers and published in the Nature journal Scientific Reports looks at a much larger group. The study used eye-tracking technology to record the movements and concluded that people with dyslexia have a profoundly different and much more difficult way of sampling visual information than typical readers do.

"People have known that individuals with dyslexia have slower reading rates for a long time," says the paper's co-author Aaron Johnson, an associate professor and chair of the Department of Psychology.

"Previous studies have also looked at eye movement in adult dyslexics. But this paper quite nicely brings these together and uses behavioural measures to give us a full representation of what differences do occur."

The eyes have it

Dyslexia researchers use several metrics to measure eye movements. These include fixations (the duration of a stop), saccades (the length of a jump) and the number of jumps a reader's eyes make. Traditionally, dyslexia researchers would use a single sentence to measure these movements. Johnson and his co-authors instead used standardized, identical texts several sentences long that were read by 35 undergraduate students diagnosed with dyslexia and 38 others in a control group.
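As an illustration of how such metrics can be derived from raw gaze data (a toy calculation, not the authors' analysis pipeline; the sampling rate, jump threshold and gaze samples are assumptions):

```python
# Illustrative only: computing fixation durations, saccade lengths and the
# number of jumps from raw gaze samples using a simple displacement threshold.

SAMPLE_MS = 2          # eye-tracker sampling interval (assumed 500 Hz)
SACCADE_PX = 30        # displacement between samples that counts as a jump

# (x, y) gaze positions in pixels, one per sample (made-up data)
gaze = [(100, 200), (101, 201), (102, 200), (180, 202), (181, 203),
        (182, 203), (183, 202), (260, 205), (261, 204), (262, 205)]

fixations, saccades = [], []
fix_start = 0
for i in range(1, len(gaze)):
    dx = gaze[i][0] - gaze[i - 1][0]
    dy = gaze[i][1] - gaze[i - 1][1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist > SACCADE_PX:                       # a jump ends the current fixation
        fixations.append((i - fix_start) * SAMPLE_MS)
        saccades.append(dist)
        fix_start = i
fixations.append((len(gaze) - fix_start) * SAMPLE_MS)

print("fixation durations (ms):", fixations)
print("saccade lengths (px):  ", saccades)
print("number of jumps:       ", len(saccades))
```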

The researchers wanted to address a core question in the field: are reading difficulties the result of a cognitive or neurological origin or of the eye movements that guide the uptake of information while reading?

"We saw that there was a real spectrum of reading speed, with some speeds among the dyslexic students as low as a third of the speed than that of the fastest readers in the control group," says lead author Léon Franzen, a former Horizon postdoctoral fellow at Concordia's Centre for Sensory Studies now at the University of Lübeck in Germany.

"But by using a variety of measures to put together a comprehensive profile, we found that the difference in speed was not the result of longer processing times of non-linguistic visual information. This suggested there was a direct link to eye movements."

Franzen notes that when the participants with dyslexia read a text, they paused longer to take in the information, but they did not have any trouble integrating the word meanings into the context of a sentence. That behaviour is commonly seen in children who are learning to read. Adults who read at normal speeds do not exhibit those pauses and eye movements.

"Dyslexia is a development disorder that begins in childhood," explains Zoey Stark (MA 21), the study's second author. The Concordia student was just awarded her MA in Psychology and will soon begin working toward a PhD where she will continue her studies of dyslexia. "It often goes undiagnosed until the child experiences real difficulties."

All three researchers worked together at the Concordia Vision Lab.

Borrowing commercial tools

Franzen likens the use of eye-tracking technology to the ability to peer into the cognitive process: researchers can see how individuals with dyslexia approach reading and where and how they struggle. And as eye-tracking technology becomes more commonplace and affordable -- most web and smartphone cameras are already equipped with it, for instance -- the researchers hope they can harness it to track how people with dyslexia read and to intervene.

"Now that we know that there are these differences in how dyslexics read, we have to ask what we can do to improve their reading," Johnson says. "Are there ways that we can alter texts to make it easier to process, such as changing fonts or increasing text size? This is the next step in our research."

Credit: 
Concordia University

New 2D superconductor forms at higher temperatures than ever before

image: Superconducting state discovered at interfaces with (111)-oriented KTaO3 surfaces, which have a buckled honeycomb lattice. Cooper pairs of electrons are shown in purple. Transport measurements suggest that the superconducting state is anisotropic.

Image: 
(Image by Anand Bhattacharya/Argonne National Laboratory.)

New interfacial superconductor has novel properties that raise new fundamental questions and might be useful for quantum information processing or quantum sensing.

Interfaces in solids form the basis for much of modern technology. For example, transistors found in all our electronic devices work by controlling the electrons at interfaces of semiconductors. More broadly, the interface between any two materials can have unique properties that are dramatically different from those found within either material separately, setting the stage for new discoveries.

Like semiconductors, superconducting materials have many important implications for technology, from magnets for MRIs to speeding up electrical connections or perhaps making possible quantum technology. The vast majority of superconducting materials and devices are 3D, giving them properties that are well understood by scientists.

One of the foundational questions with superconducting materials involves the transition temperature — the extremely cold temperature at which a material becomes superconducting. All superconducting materials at regular pressures make this transition at temperatures far colder than the coldest day outside.

Now, researchers at the U.S. Department of Energy’s Argonne National Laboratory have discovered a new way to generate 2D superconductivity at a material interface at a relatively high — though still cold —  transition temperature. This interfacial superconductor has novel properties that raise new fundamental questions and might be useful for quantum information processing or quantum sensing.

In the study, Argonne postdoctoral researcher Changjiang Liu and colleagues, working in a team led by Argonne materials scientist Anand Bhattacharya, have discovered that a novel 2D superconductor forms at the interface of an oxide insulator called KTaO3 (KTO). Their results were published online in the journal Science on February 12.

In 2004, scientists observed a thin sheet of conducting electrons between two other oxide insulators, LaAlO3 (LAO) and SrTiO3 (STO). It was later shown that this material, called a 2D electron gas (2DEG), can even become superconducting — allowing the transport of electricity without dissipating energy. Importantly, the superconductivity could be switched on and off using electric fields, just like in a transistor.

However, to achieve such a superconducting state, the sample had to be cooled down to about 0.2 K — a temperature that is close to absolute zero (– 273.15 °C), requiring a specialized apparatus known as a dilution refrigerator. Even with such low transition temperatures (TC), the LAO/STO interface has been heavily studied in the context of superconductivity, spintronics and magnetism.

In the new research, the team discovered that in KTO, interfacial superconductivity could emerge at much higher temperatures. To obtain the superconducting interface, Liu, graduate student Xi Yan and coworkers grew thin layers of either europium oxide (EuO) or LAO on KTO using state-of-the-art thin film growth facilities at Argonne.

“This new oxide interface makes the application of 2D superconducting devices more feasible,” Liu said. “With its order-of-magnitude higher transition temperature of 2.2 K, this material will not need a dilution refrigerator to be superconducting. Its unique properties raise many interesting questions.”

A strange superconductor

Surprisingly, this new interfacial superconductivity shows a strong dependence on the orientation of the facet of the crystal where the electron gas is formed.

Adding to the mystery, measurements suggest the formation of stripe-like superconductivity in samples with lower doping, where rivulets of superconducting regions are separated by normal, nonsuperconducting regions. This kind of spontaneous stripe formation is also called nematicity and is usually found in liquid crystal materials used for displays.

“Electronic realizations of nematicity are rare and of great fundamental interest. It turns out that EuO overlayer is magnetic, and the role of this magnetism in realizing the nematic state in KTO remains an open question,” Bhattacharya said.

In their Science paper, the authors also discuss the reasons why the electron gas forms. Using atomic resolution transmission electron microscopes, Jianguo Wen at the Center for Nanoscale Materials at Argonne, along with Professor Jian-Min Zuo’s group at the University of Illinois at Urbana-Champaign, showed that defects formed during the growth of the overlayer may play a central role.

In particular, they found evidence for oxygen vacancies and substitutional defects, where the potassium atoms are replaced by europium or lanthanum ions — all of which add electrons to the interface and turn it into a 2D conductor. Using ultrabright X-rays at the Advanced Photon Source (APS), Yan along with Argonne scientists Hua Zhou and Dillon Fong, probed the interfaces of KTO buried under the overlayer and observed spectroscopic signatures of these extra electrons near the interface.

“Interface-sensitive X-ray toolkits available at the APS empower us to reveal the structural basis for the 2DEG formation and the unusual crystal-facet dependence of the 2D superconductivity. A more detailed understanding is in progress,” Zhou said.

Beyond describing the mechanism of 2DEG formation, these results point the way to improving the quality of the interfacial electron gas by controlling synthesis conditions. Given that the superconductivity occurs for both the EuO and LAO oxide overlayers tried thus far, many other possibilities remain to be explored.

The research is discussed in the paper “Two-dimensional superconductivity and anisotropic transport at KTaO3 (111) interfaces,” Science, DOI: 10.1126/science.aba5511.

The authors are Changjiang Liu, Xi Yan, Dafei Jin, Yang Ma, Haw-Wen Hsiao, Yulin Lin, Terence M. Bretz-Sullivan, Xianjing Zhou, John Pearson, Brandon Fisher, J. Samuel Jiang, Wei Han, Jian-Min Zuo, Jianguo Wen, Dillon D. Fong, Jirong Sun, Hua Zhou and Anand Bhattacharya.

The work at Argonne was supported by DOE’s Office of Science (Office of Basic Energy Sciences). The Center for Nanoscale Materials and the Advanced Photon Source are both DOE Office of Science User Facilities.

About Argonne’s Center for Nanoscale Materials

The Center for Nanoscale Materials is one of the five DOE Nanoscale Science Research Centers, premier national user facilities for interdisciplinary research at the nanoscale supported by the DOE Office of Science. Together the NSRCs comprise a suite of complementary facilities that provide researchers with state-of-the-art capabilities to fabricate, process, characterize and model nanoscale materials, and constitute the largest infrastructure investment of the National Nanotechnology Initiative. The NSRCs are located at DOE’s Argonne, Brookhaven, Lawrence Berkeley, Oak Ridge, Sandia and Los Alamos National Laboratories. For more information about the DOE NSRCs, please visit https://science.osti.gov/User-Facilities/User-Facilities-at-a-Glance.

About the Advanced Photon Source

The U. S. Department of Energy Office of Science’s Advanced Photon Source (APS) at Argonne National Laboratory is one of the world’s most productive X-ray light source facilities. The APS provides high-brightness X-ray beams to a diverse community of researchers in materials science, chemistry, condensed matter physics, the life and environmental sciences, and applied research. These X-rays are ideally suited for explorations of materials and biological structures; elemental distribution; chemical, magnetic, electronic states; and a wide range of technologically important engineering systems from batteries to fuel injector sprays, all of which are the foundations of our nation’s economic, technological, and physical well-being. Each year, more than 5,000 researchers use the APS to produce over 2,000 publications detailing impactful discoveries, and solve more vital biological protein structures than users of any other X-ray light source research facility. APS scientists and engineers innovate technology that is at the heart of advancing accelerator and light-source operations. This includes the insertion devices that produce extreme-brightness X-rays prized by researchers, lenses that focus the X-rays down to a few nanometers, instrumentation that maximizes the way the X-rays interact with samples being studied, and software that gathers and manages the massive quantity of data resulting from discovery research at the APS.

This research used resources of the Advanced Photon Source, a U.S. DOE Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357.

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.

Credit: 
DOE/Argonne National Laboratory

Few young adult men have gotten the HPV vaccine, study finds

image: Young adult men have a low rate of vaccination when it comes to the HPV vaccine.

Image: 
Jacob Dwyer

The COVID-19 vaccine isn't having any trouble attracting suitors.

But there's another, older model that's been mostly ignored by the young men of America: the HPV vaccine.

Using data from the 2010-2018 National Health Interview Surveys, Michigan Medicine researchers found that just 16% of men who were 18 to 21 years old had received at least one dose of the HPV vaccine at any age. In comparison, 42% of women in the same age bracket had gotten at least one shot of the vaccine.

The CDC's Advisory Committee on Immunization Practices recommends two doses of the vaccine at 11 or 12 years old, but Americans can still benefit from the HPV vaccine if they receive it later, as long as they get three doses by age 26.

In the U-M study, however -- even among those who were vaccinated after turning 18 -- less than a third of men received all three vaccine doses, and about half of women did.

"Eighteen- to 21-year-olds are at this age where they're making health care decisions on their own for the first time," says Michelle M. Chen, M.D., a clinical lecturer in the Department of Otolaryngology-Head and Neck Surgery and the first author of the study. "They're in a period of a lot of transition, but young adult men especially, who are less likely to have a primary care doctor, are often not getting health education about things like cancer prevention vaccines."

The HPV vaccine was designed to prevent the genital warts and cancers caused by the most common sexually transmitted infection in the United States. The FDA approved the vaccine for women in 2006 and expanded it to men in 2009.

Preventing cervical cancer was the primary focus at that time, so girls and women were more likely to hear about it from their pediatricians or OBGYNs. Yet oropharyngeal cancer, which occurs in the throat, tonsils, and back of the tongue, has now surpassed cervical cancer as the leading cancer caused by HPV -- and 80% of those diagnosed with it are men.

"I don't think that a lot of people, both providers and patients, are aware that this vaccine is actually a cancer-prevention vaccine for men as well as women," Chen says. "But HPV-associated oropharyngeal cancer can impact anyone -- and there's no good screening for it, which makes vaccination even more important."

Chen believes a dual-pronged approach is necessary to raise the HPV vaccination rate among males, with renewed pushes from pediatricians to target kids and outreach from university health services and fraternity houses for the young adult population who may have missed getting the vaccine when they were younger. Pharmacists as well as urgent care and emergency room providers could also be helpful allies.

Credit: 
Michigan Medicine - University of Michigan

Seaweed solutions

It's easy to think that more nutrients -- the stuff life needs to grow and thrive -- would foster more vibrant ecosystems. Yet nutrient pollution has in fact wrought havoc on marine systems, contributing to harmful algae blooms, worse water quality and oxygen-poor dead zones.

A team of researchers from UC Santa Barbara has proposed a novel strategy for reducing large amounts of nutrients -- specifically nitrogen and phosphorus -- after they have already been released into the environment. In a study appearing in the journal Marine Policy, the authors contend that seaweed's incredible ability to draw nutrients from the water could provide an efficient and cost-effective solution. Looking at the U.S. Gulf of Mexico, the team identified over 63,000 square kilometers suitable for seaweed aquaculture.

"A key goal of conservation ecology is to understand and maintain the natural balance of ecosystems, because human activity tends to tip things out of balance," said co-author Darcy Bradley, co-director of the Ocean and Fisheries Program at the university's Environmental Markets Lab. Activities on land, like industrial-scale farming, send lots of nutrients into waterways where they accumulate and flow into the ocean in greater quantities than they naturally would.

Opportunistic algae and microbes take advantage of the glut of nutrients, which fuel massive blooms. This growth can have all kinds of consequences, from producing biotoxins to smothering habitats in virtual monocultures. And while these algae produce oxygen when they're alive, they die so suddenly and in such volume that their rapid decomposition consumes all the available oxygen in the water, transforming huge swaths of the ocean into so-called "dead zones."

Cultivated seaweed could draw down available nutrients, the authors claim, limiting the resources for unchecked growth of nuisance algae and microbes. Seaweeds also produce oxygen, which could alleviate the development of hypoxic dead zones.

The authors analyzed data from the U.S. Gulf of Mexico, which they say exemplifies the challenges associated with nutrient pollution. More than 800 watersheds across 32 states deliver nutrients to the Gulf, which has led to a growing low-oxygen dead zone. In 2019, this dead zone stretched just over 18,000 square kilometers, slightly smaller than the area of New Jersey.

Cortez grunt fish swim beneath a "red tide" algae bloom near the Bat Islands in Costa Rica's Santa Rosa National Park. Blooms like these can release biotoxins and create oxygen-poor dead zones in the ocean.

Using open-source oceanographic and human-use data, the team identified areas of the gulf suitable for seaweed cultivation. They found roughly 9% of the United States' exclusive economic zone in the gulf could support seaweed aquaculture, particularly off the west coast of Florida.

"Cultivating seaweed in less than 1% of the U.S. Gulf of Mexico could potentially reach the country's pollution reduction goals that, for decades, have been difficult to achieve," said lead author Phoebe Racine, a Ph.D. candidate at UCSB's Bren School of Environmental Science & Management.

"Dealing with nutrient pollution is difficult and expensive," Bradley added. The U.S. alone spends more than $27 billion every year on wastewater treatment.

Many regions employ water quality trading programs to manage this issue. In these cap-and-trade systems, regulators set a limit on the amount of a pollutant that can be released, and then entities trade credits in a market. Water quality trading programs exist all over the U.S., though they are often small, bespoke and can be ephemeral. That said, they show a lot of promise and, according to Racine, have bipartisan support.

Seaweed aquaculture would fit nicely within these initiatives. "Depending on farming costs and efficiency, seaweed aquaculture could be financed by water quality trading markets for anywhere between $2 and $70 per kilogram of nitrogen removed," Racine said, "which is within range of observed credit prices in existing markets."

What's more, the researchers note that demand is rising for seaweed in food and industry sectors. Potential products include biofuel, fertilizer and food, depending on the water quality, Racine said. This means that, unlike many remediation strategies, seaweed aquaculture could pay for itself or even generate revenue.

And the time seems ripe for the authors' proposal. "The U.S. has traditionally had a lot of barriers to getting aquaculture in the ocean," Bradley explained. "But there is mounting political support in the form of drafted bills and a signed executive order that could catalyze the expansion of the U.S. aquaculture industry."

This study is the first of several to come out of the Seaweed Working Group, an interdisciplinary group of researchers looking to understand and chart the potential of seaweed aquaculture's benefits to society. They are currently investigating a range of other ecosystem services that seaweed cultivation could provide, such as benefits to surrounding fisheries and carbon capture. The researchers are also working on a paper that explores nitrogen and phosphorus removal at the national level, with fine-scale analysis modeling nutrient removal by native seaweeds off the coast of Florida.

As long as humans continue adding nutrients to the environment, nature will find ways to use them. By deliberately cultivating seaweeds, we can grow algae that we know are benign, helpful, or even potentially useful, rather than the opportunistic algae that currently draw upon these excess nutrients.

Credit: 
University of California - Santa Barbara

The pioneering technology that is uncovering the mysteries of the 'Kraken'

The legend of the "kraken" has captivated humans for millennia. Stories of deep-sea squid dragging sailors and even entire ships to their doom can be found in everything from ancient Greek mythology to modern-day movie blockbusters. It is therefore ironic that the species that inspired these stories, the giant squid Architeuthis dux, is camera-shy. In fact, filming this species in the wild has proven an insurmountable challenge for countless scientists, explorers, and filmmakers. To date, only one scientist, Dr. Edith Widder of the Ocean Research & Conservation Association, has repeatedly caught a live giant squid on camera. In a new study, Dr. Widder and her colleagues have finally revealed the secrets behind their success. This study, which is free to access, also includes several fascinating videos of large deep-sea squid that have never been published before.

The giant squid is the largest invertebrate (an animal without a backbone) on this planet, reaching total lengths of up to 14 m (46 ft). Even though most of the squid's body is made up of its long sinuous tentacles, you would still think that an animal of this size would be easy to spot. However, the giant squid lives at depths of over 400 m, where very little sunlight penetrates. To adapt to these conditions of almost perpetual darkness, the giant squid has evolved the largest eyes in the animal kingdom. Reaching diameters of 30 cm, these dinner plate-sized eyes are sensitive enough to see under the dimmest light. In fact, the authors of this study think that giant squid have such good eyesight that they have been able to spot and avoid most of the submarines or underwater cameras that people have previously used to try to film this species.

To design a camera that the giant squid would not be able to see, Widder used dim-red lights instead of the conventional bright white lights that most deep-sea submarines or underwater cameras use to pierce the inky darkness. As most squid are unable to see red light, these cameras would therefore be all but invisible to any nearby squid.

To observe these giants requires more than stealth. There also needs to be a way to lure them in close enough so that they can be filmed. To address this problem, Widder once again thought about the giant squid's impressive eye. Even though bright white lights likely scare these animals away, giant squid often hunt deep-sea prey that create their own light - called bioluminescence. So Widder built a lure called an E-Jelly that mimicked the bioluminescent display of a deep-sea jellyfish (Atolla sp.). The neon blue pin-wheel display of the E-Jelly would suggest the presence of a nearby meal and hopefully bring the squid close enough to be caught on camera.

The use of red illuminators and the E-Jelly bait was clearly a winning combination, and this technology was key to filming the first footage of live giant squid in both Japanese and US waters. Moreover, the authors of this study also report on several other species of squid, each over 1 m in length, that were successfully filmed with this technology within the Wider Caribbean Region.

The effectiveness of this pioneering technology for filming large deep-sea squid has the potential to keep generating ever-more engaging footage of these mysterious and little-understood species. Yet perhaps more importantly, it can also provide new scientific insights into the behavior, distribution, and threats that these animals may face. Without this information, we simply do not know if the giant squid, like many other deep-sea species, is able to adapt to growing threats such as climate change or marine pollution. As Dr. Nathan Robinson, an Adjunct Researcher at the Cape Eleuthera Institute and the main author of this study, states, "without this information, the future of these enigmatic species will remain uncertain."

Credit: 
Ocean Research & Conservation Association, Inc. (ORCA)

Study suggests that silicon could be a photonics game-changer

New research from the University of Surrey has shown that silicon could be one of the most powerful materials for photonic information manipulation - opening up new possibilities for the production of lasers and displays.

While computer chips' extraordinary success has confirmed silicon as the prime material for electronic information control, silicon has a reputation as a poor choice for photonics; there are no commercially available silicon light-emitting diodes, lasers or displays.

Now, in a paper published by Light: Science and Applications journal, a Surrey-led international team of scientists has shown that silicon is an outstanding candidate for creating a device that can control multiple light beams.

The discovery means that it is now possible to produce silicon processors with built-in abilities for light beams to control other beams - boosting the speed and efficiency of electronic communications.

This is possible thanks to the wavelength band called the far-infrared or terahertz region of the electromagnetic spectrum. The effect works with a property called a nonlinearity, which is used to manipulate laser beams - for example, changing their colour. Green laser pointers work this way: they take the output from a very cheap and efficient but invisible infrared laser diode and change the colour to green with a nonlinear crystal that halves the wavelength.

Other kinds of nonlinearity can produce an output beam with a third of the wavelength or be used to redirect a laser beam to control the direction of the beam's information. The stronger the nonlinearity, the easier it is to control with weaker input beams.
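As a back-of-the-envelope illustration of the wavelength arithmetic described above (the 1064 nm green-pointer figures are standard textbook values, not taken from the paper):

```latex
% Harmonic generation: n photons at frequency \omega combine into one photon
% at n\omega, so the output wavelength is the input wavelength divided by n.
\[
  \lambda_{\text{out}} = \frac{\lambda_{\text{in}}}{n},
  \qquad n = 2:\; 1064\,\text{nm} \rightarrow 532\,\text{nm (green)},
  \qquad n = 3:\; \lambda_{\text{out}} = \lambda_{\text{in}}/3 .
\]
```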

The researchers found that silicon possesses the strongest nonlinearity of this type ever discovered. Although the study was carried out with the crystal being cooled to very low cryogenic temperatures, such strong nonlinearities mean that extremely weak beams can be used.

Ben Murdin, co-author of the study and Professor of Physics at the University of Surrey, said: "Our finding was lucky because we weren't looking for it. We were trying to understand how a very small number of phosphorus atoms in a silicon crystal could be used for making a quantum computer and how to use light beams to control quantum information stored in the phosphorus atoms.

"We were astonished to find that the phosphorus atoms were re-emitting light beams that were almost as bright as the very intense laser we were shining on them. We shelved the data for a couple of years while we thought about proving where the beams were coming from. It's a great example of the way science proceeds by accident, and also how pan-European teams can still work together very effectively."

Credit: 
University of Surrey

Metabolite fumarate can reveal cell damage: New method to generate fumarate for MRI

image: Hyperpolarization of fumarate for use as a biosensor

Image: 
ill./©: John Blanchard, James Eills / JGU

A promising new concept published by an interdisciplinary research team in "Proceedings of the National Academy of Sciences" (PNAS) paves the way for major advances in the field of magnetic resonance imaging (MRI). Their new technique could significantly simplify hyperpolarized MRI, which was developed around 20 years ago for observing metabolic processes in the body. The proposal involves the hyperpolarization of the metabolic product fumarate using parahydrogen and the subsequent purification of the metabolite. "This technique would not only be simpler, but also much cheaper than the previous procedure," said project leader Dr. James Eills, a member of the research team of Professor Dmitry Budker at Johannes Gutenberg University Mainz (JGU) and the Helmholtz Institute Mainz (HIM). Also participating in the project were scientists from the fields of chemistry, biotechnology, and physics at TU Darmstadt, TU Kaiserslautern, the University of California Berkeley in the United States, the University of Turin in Italy, and the University of Southampton in England.

Fumarate is a key biosensor for hyperpolarized imaging

The potential applications of MRI are hindered by its low sensitivity, and the technique is essentially limited to observing water molecules in the body. Researchers are therefore constantly working on different ways of improving MRI. A major breakthrough was achieved around 20 years ago when hyperpolarized magnetic resonance imaging was first developed: Because hyperpolarized molecules emit significantly stronger MRI signals, substances that are only present in low concentrations in the body can also be visualized. By hyperpolarizing biomolecules and introducing them into patients, it is possible to track metabolism in real time, thus providing doctors with much more information.

Hyperpolarized fumarate is a promising biosensor for the imaging of metabolic processes. Fumarate is a metabolite of the citric acid cycle that plays an important role in the energy production of living beings. For imaging purposes, the fumarate is tagged with carbon-13 as the atomic nuclei of this isotope can be hyperpolarized. Dynamic nuclear polarization is the current state-of-the-art method for hyperpolarizing fumarate, but this is expensive and relatively slow. The equipment required costs one to two million euros. "Dynamic nuclear polarization is very difficult to use in everyday clinical practice due to the related high costs and technical complexity. Using parahydrogen, we are able to hyperpolarize this important biomolecule in a cost-effective and convenient way," said Dr. Stephan Knecht of TU Darmstadt, the first author of the published article.

A new method to hyperpolarize and purify fumarate for subsequent use as a biosensor

The research team led by Dr. James Eills has already been working on this concept for some time. "We have made a significant breakthrough as our approach is not only cheap, but also fast and easy to handle," emphasized Eills. However, parahydrogen-induced polarization, or PHIP for short, also has its disadvantages. The low level of polarization and the large number of unwanted accompanying substances are particularly problematic in the case of this chemistry-based technique. Among other things, transferring the polarization from parahydrogen into fumarate requires a catalyst, which remains in the reaction fluid just like other reaction side-products. "The chemical contaminants must be removed from the solution so it is biocompatible and can be injected in living beings. This is essential if we think about the future clinical translation of this hyperpolarized biosensor," said Dr. Eleonora Cavallari, a physicist from the Department of Molecular Biotechnology and Health Sciences in Turin.

The solution to this problem is to purify the hyperpolarized fumarate through precipitation. The fumarate then takes the form of a purified solid and can be redissolved at the desired concentration later. "This means we have a product from which all toxic substances have been removed so that it can readily be used in the body," added Dr. James Eills. In addition, compared to previous experiments with PHIP, the polarization is increased to a remarkable 30 to 45 percent. Preclinical studies have already shown that hyperpolarized fumarate imaging is a suitable method of monitoring how tumors respond to therapy as well as for imaging acute kidney injuries or the effects of myocardial infarction. This new way of producing hyperpolarized fumarate should greatly accelerate preclinical studies and bring this technology to more laboratories.

Credit: 
Johannes Gutenberg Universitaet Mainz

New AI tool calculates materials' stress and strain based on photos

image: This visualization shows the deep-learning approach in predicting physical fields given different input geometries. The left figure shows a varying geometry of the composite in which the soft material is elongating, and the right figure shows the predicted mechanical field corresponding to the geometry in the left figure.

Image: 
Courtesy of Zhenze Yang, Markus Buehler, et al

Isaac Newton may have met his match.

For centuries, engineers have relied on physical laws -- developed by Newton and others -- to understand the stresses and strains on the materials they work with. But solving those equations can be a computational slog, especially for complex materials.

MIT researchers have developed a technique to quickly determine certain properties of a material, like stress and strain, based on an image of the material showing its internal structure. The approach could one day eliminate the need for arduous physics-based calculations, instead relying on computer vision and machine learning to generate estimates in real time.

The researchers say the advance could enable faster design prototyping and material inspections. "It's a brand new approach," says Zhenze Yang, adding that the algorithm "completes the whole process without any domain knowledge of physics."

The research appears today in the journal Science Advances. Yang is the paper's lead author and a PhD student in the Department of Materials Science and Engineering. Co-authors include former MIT postdoc Chi-Hua Yu and Markus Buehler, the McAfee Professor of Engineering and the director of the Laboratory for Atomistic and Molecular Mechanics.

Engineers spend lots of time solving equations. They help reveal a material's internal forces, like stress and strain, which can cause that material to deform or break. Such calculations might suggest how a proposed bridge would hold up amid heavy traffic loads or high winds. Unlike Sir Isaac, engineers today don't need pen and paper for the task. "Many generations of mathematicians and engineers have written down these equations and then figured out how to solve them on computers," says Buehler. "But it's still a tough problem. It's very expensive -- it can take days, weeks, or even months to run some simulations. So, we thought: Let's teach an AI to do this problem for you."

The researchers turned to a machine learning technique called a Generative Adversarial Neural Network. They trained the network with thousands of paired images -- one depicting a material's internal microstructure subject to mechanical forces, and the other depicting that same material's color-coded stress and strain values. With these examples, the network uses principles of game theory to iteratively figure out the relationships between the geometry of a material and its resulting stresses.
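For readers who want a feel for what "paired images plus a generative adversarial network" means in practice, the sketch below shows a pix2pix-style conditional GAN in PyTorch. It is a simplified stand-in, not the authors' model: the tiny network sizes, the L1 weighting and the random placeholder "microstructure" and "field" images are all assumptions.

```python
import torch
import torch.nn as nn

# Minimal sketch of the paired image-to-image idea: inputs are microstructure
# images, targets are the corresponding stress/strain field maps. All sizes,
# losses and the random "dataset" below are illustrative assumptions.

class Generator(nn.Module):
    """Tiny encoder-decoder: microstructure image -> predicted field map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Judges (microstructure, field) pairs as real or generated (32x32 inputs)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(32 * 8 * 8, 1),
        )
    def forward(self, micro, field):
        return self.net(torch.cat([micro, field], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

for step in range(100):                       # toy training loop on random data
    micro = torch.rand(8, 1, 32, 32)          # stand-in microstructure images
    field = torch.rand(8, 1, 32, 32) * 2 - 1  # stand-in stress/strain maps

    # Discriminator: real pairs -> 1, generated pairs -> 0
    fake = G(micro).detach()
    loss_d = bce(D(micro, field), torch.ones(8, 1)) + \
             bce(D(micro, fake), torch.zeros(8, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator and stay close to the true field (L1)
    fake = G(micro)
    loss_g = bce(D(micro, fake), torch.ones(8, 1)) + 100 * l1(fake, field)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print("generator output shape:", G(torch.rand(1, 1, 32, 32)).shape)
```

The design point this illustrates is the one described in the article: the discriminator judges (input image, field map) pairs, so the generator is pushed toward field maps that are consistent with the geometry it was given.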

"So, from a picture, the computer is able to predict all those forces: the deformations, the stresses, and so forth," Buehler says. "That's really the breakthrough -- in the conventional way, you would need to code the equations and ask the computer to solve partial differential equations. We just go picture to picture."

That image-based approach is especially advantageous for complex, composite materials. Forces on a material may operate differently at the atomic scale than at the macroscopic scale. "If you look at an airplane, you might have glue, a metal, and a polymer in between. So, you have all these different faces and different scales that determine the solution," says Buehler. "If you go the hard way -- the Newton way -- you have to walk a huge detour to get to the answer."

But the researcher's network is adept at dealing with multiple scales. It processes information through a series of "convolutions," which analyze the images at progressively larger scales. "That's why these neural networks are a great fit for describing material properties," says Buehler.

The fully trained network performed well in tests, successfully rendering stress and strain values given a series of close-up images of the microstructure of various soft composite materials. The network was even able to capture "singularities," like cracks developing in a material. In these instances, forces and fields change rapidly across tiny distances. "As a material scientist, you would want to know if the model can recreate those singularities," says Buehler. "And the answer is yes."

The advance could "significantly reduce the iterations needed to design products," according to Suvranu De, a mechanical engineer at Rensselaer Polytechnic Institute who was not involved in the research. "The end-to-end approach proposed in this paper will have a significant impact on a variety of engineering applications -- from composites used in the automotive and aircraft industries to natural and engineered biomaterials. It will also have significant applications in the realm of pure scientific inquiry, as force plays a critical role in a surprisingly wide range of applications from micro/nanoelectronics to the migration and differentiation of cells."

In addition to saving engineers time and money, the new technique could give nonexperts access to state-of-the-art materials calculations. Architects or product designers, for example, could test the viability of their ideas before passing the project along to an engineering team. "They can just draw their proposal and find out," says Buehler. "That's a big deal."

Once trained, the network runs almost instantaneously on consumer-grade computer processors. That could enable mechanics and inspectors to diagnose potential problems with machinery simply by taking a picture.

In the new paper, the researchers worked primarily with composite materials that included both soft and brittle components in a variety of random geometrical arrangements. In future work, the team plans to use a wider range of material types. "I really think this method is going to have a huge impact," says Buehler. "Empowering engineers with AI is really what we're trying to do here."

Credit: 
Massachusetts Institute of Technology

The growing promise of community-based monitoring and citizen science

image: Volunteer monitoring programs, such as eBird, typically rely on observations made by nonscientist members of the public.

Image: 
Kayla Farmer

Over recent decades, community-based environmental monitoring (often called "citizen science") has exploded in popularity, aided both by smartphones and rapid gains in computing power that make the analysis of large data sets far easier.

Publishing in BioScience, handling editors Rick Bonney, of Cornell University, Finn Danielsen, of the Nordic Foundation for Development and Ecology, and colleagues share a special section that highlights numerous community-based monitoring programs currently underway. They also describe the potential for such efforts to advance the scientific enterprise well into the future and make recommendations for best practices and future directions.

In an article on locally based monitoring, Danielsen and colleagues describe the potential for monitoring by local community members--who may have little scientific training--to deliver "credible data at local scale independent of external experts and can be used to inform local and national decision making within a short timeframe." The authors argue that this important source of data can prove particularly valuable in areas in which scientist-led monitoring is sparse or too costly to administer.

Community-based monitoring efforts also have the potential to empower Indigenous rightsholders and stakeholders through their broader inclusion in the scientific process, writes Bonney in an introductory Viewpoint. Likewise, he explains, "Indigenous and local peoples' in situ knowledge practices have the potential to make significant contributions to meeting contemporary sustainability challenges both locally and around the globe." This topic is explored in depth in an Overview article by Maria Tengö and colleagues.

A contribution from Noor Johnson and colleagues discusses the role of digital platforms in enabling community-based monitoring, including among Indigenous communities. While digital platforms, such as those that use smartphones, have the potential to improve data management in community-based monitoring, the authors caution that care must be taken with sensitive data and that such platforms may "increase inequities across communities because being able to use digital tools requires technical capacity that may or may not exist at the community level."

Key to realizing the potential of community-based monitoring will be linking large-scale, top-down monitoring programs with bottom-up approaches managed or initiated at the community level, write Hajo Eicken and colleagues. This effort will rely on a number of factors, including an ability to match program aims and scales while also fostering compatibility in methodology and data management--and ensuring that Indigenous intellectual property rights are respected. Only by linking monitoring programs, argue the authors, will it be possible to fully realize the value of community-based monitoring and effectively respond to the rapidly changing global environment.

Credit: 
American Institute of Biological Sciences

Lactic acid bacteria can extend the shelf life of foods

Researchers at the National Food Institute have come up with a solution that can help combat both food loss and food waste: They have isolated a natural lactic acid bacterium that secretes the antimicrobial peptide nisin when grown on dairy waste.

Nisin is a food-grade preservative, which can extend the shelf life of foods, and thus can be used to reduce food waste. The discovery also makes it possible to better utilize the large quantities of whey generated when cheese is made.

Nisin is approved for use in a number of foods, where it can prevent the growth of certain spoilage microorganisms as well as microorganisms that make consumers sick. It can, for instance, inhibit spore germination in canned soups and prevent late blowing in cheeses—without affecting their flavour.

In theory, nisin could be added to fresh milk to extend its shelf life. However, different countries have different rules stating what types of products nisin may be added to and in which amounts. 

Extra step towards better utilization of whey

Many dairies are already turning a profit by extracting protein and lactose from the many tons of whey they generate, which they use in e.g. infant formula and sports nutrition. What is left behind can still be used to produce nisin.

In addition to ensuring better resource utilization, there may be a financial gain from producing nisin: Most commercially available nisin products contain 2.5% nisin and cost approximately 40 euro per kilogram.

Read more

The work related to isolating the nisin-secreting lactic acid bacterium has been described in further detail in a scientific article in the Journal of Agricultural and Food Chemistry: Efficient Production of Nisin A from Low-Value Dairy Side Streams Using a Nonengineered Dairy Lactococcus lactis Strain with Low Lactate Dehydrogenase Activity.

A special topic portal on the National Food Institute’s website showcases some of the many ways in which the institute works to create sustainable technological solutions in the area of food. Read e.g. about projects that transform side streams into new ingredients and foods.

Journal

Journal of Agricultural and Food Chemistry

DOI

10.1021/acs.jafc.0c07816

Credit: 
Technical University of Denmark

New algorithm makes it easier for computers to solve decision making problems

image: Computation cost of multiagent problems can be greatly reduced using an agent-by-agent PI approach

Image: 
IEEE/CAA Journal of Automatica Sinica

Computer scientists often encounter problems relevant to real-life scenarios. For instance, "multiagent problems," a category characterized by multi-stage decision-making by multiple decision makers or "agents," has relevant applications in search-and-rescue missions, firefighting, and emergency response.

Multiagent problems are often solved using a machine learning technique known as "reinforcement learning" (RL), which concerns itself with how intelligent agents make decisions in an environment unfamiliar to them. An approach usually adopted in such an endeavor is policy iteration (PI), which starts off with a "base policy" and then improves on it to generate a "rollout policy" (with the process of generation called a "rollout"). Rollout is simple, reliable, and well-suited for an on-line, model-free implementation.

There is, however, a serious issue. "In a standard rollout algorithm, the amount of total computation grows exponentially with the number of agents. This can make the computations prohibitively expensive even for a modest number of agents," explains Prof. Dimitri Bertsekas from Massachusetts Institute of Technology and Arizona State University, USA, who studies large-scale computation and optimization of communication and control.

In essence, PI is simply a repeated application of rollout, in which the rollout policy at each iteration becomes the base policy for the next iteration. Usually, in a standard multiagent rollout policy, all agents are allowed to influence the rollout algorithm at once ("all-agents-at-once" policy). Now, in a new study published in the IEEE/CAA Journal of Automatica Sinica, Prof. Bertsekas has come up with an approach that might be a game changer.

In his paper, Prof. Bertsekas focused on applying PI to problems with a multiple-component control, each component selected by a different agent. He assumed that all agents had perfect state information and shared it among themselves. He then reformulated the problem by trading off control space complexity with state space complexity. Additionally, instead of an all-agents-at-once policy, he adopted an "agent-by-agent" policy wherein only one agent was allowed to execute a rollout algorithm at a time, with coordinating information provided by the other agents.

The result was impressive. Instead of an exponentially growing complexity, Prof. Bertsekas found only a linear growth in computation with the number of agents, leading to a dramatic reduction in the computation cost. Moreover, the computational simplification did not sacrifice the quality of the improved policy, performing at par with the standard rollout algorithm.
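The complexity argument can be illustrated with a toy one-step rollout (a sketch under assumptions, not Prof. Bertsekas' implementation): with m agents and |U| controls each, the all-agents-at-once search touches |U|^m joint choices, while the agent-by-agent version touches only m·|U|. The q_value function below is a made-up separable cost standing in for the one-step lookahead cost of the base policy.

```python
import itertools

# Toy illustration of the complexity comparison. q_value(state, joint_control)
# stands in for the one-step lookahead cost (stage cost plus the base policy's
# cost-to-go), which in practice is estimated by simulating the base policy.

def standard_rollout(state, agent_controls, q_value):
    """All-agents-at-once: search every joint control combination,
    i.e. |U|**m one-step evaluations for m agents with |U| controls each."""
    best = min(itertools.product(*agent_controls),
               key=lambda joint: q_value(state, joint))
    return list(best)

def agent_by_agent_rollout(state, agent_controls, base_policy, q_value):
    """One agent at a time: agents already optimized keep their new controls,
    agents not yet reached keep their base-policy controls,
    i.e. only m * |U| one-step evaluations."""
    joint = list(base_policy(state))            # start from the base policy
    for i, controls in enumerate(agent_controls):
        def q_i(u, i=i):
            trial = list(joint)
            trial[i] = u
            return q_value(state, tuple(trial))
        joint[i] = min(controls, key=q_i)       # optimize agent i only
    return joint

# Hypothetical example: 4 agents, 5 controls each, a made-up separable cost.
agent_controls = [range(5)] * 4
base_policy = lambda state: (0, 0, 0, 0)
q_value = lambda state, joint: sum((u - 2) ** 2 for u in joint)

print(standard_rollout("s0", agent_controls, q_value))                     # 5**4 = 625 evaluations
print(agent_by_agent_rollout("s0", agent_controls, base_policy, q_value))  # 4 * 5 = 20 evaluations
```

In this separable toy case both versions return the same joint control; the paper's point is that, even in general, the agent-by-agent form retains the policy improvement property at a fraction of the computational cost.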

Prof. Bertsekas then explored exact and approximate PI algorithms using the new version of agent-by-agent policy improvement and repeated application of rollout. For highly complex problems, he explored the use of neural networks to encode the successive rollout policies, and to precompute signaling policies that coordinate the parallel computations of different agents.

Overall, Prof. Bertsekas is optimistic about his findings and future prospects of his approach. "The idea of agent-by-agent rollout can be applied to challenging multidimensional control problems, as well as deterministic discrete/combinatorial optimization problems, involving constraints that couple the controls of different stages," he observes. He has published two books on RL, one of which, titled "Rollout, Policy Iteration, and Distributed Reinforcement Learning" and soon to be published by Tsinghua Press, China, deals with the subject of his study in detail.

The new approach to multiagent systems might very well revolutionize how complex sequential decision problems are solved.

Credit: 
Chinese Association of Automation

Potential advancements in treatment of PTSD and PTSD-related Cardiovascular disease

image: Laxmi Iyer, PhD, The George Washington University, Washington D.C.

Image: 
Laxmi Iyer

Rockville, Md. (April 27, 2021)--A new study reveals that renin-angiotensin system (RAS) genes within the amygdala--the brain region important for traumatic memory processing--express differently when the brain develops fearful memories, such as when people undergo traumatic stress. The researchers found that medication that pharmacologically blocks the angiotensin type 1 receptor may improve components of fear memory, as assessed by freezing behavior. The research team from George Washington University in Washington, D.C., will present their findings virtually at the American Physiological Society's (APS) annual meeting at Experimental Biology 2021.

Post-traumatic stress disorder (PTSD) is a strong predictor of cardiovascular disease (CVD), the leading cause of death in the U.S. The RAS, which is critical for blood pressure regulation, is seen as a potentially important link between PTSD and CVD. This study examined the RAS within areas of the brain responsible for processing traumatic or fear-related memories and its effects on cardiovascular regulation. The researchers hoped that it would lead to new treatment and prevention strategies for both PTSD and PTSD-related CVD risk.

Current treatment options for PTSD are limited, and the causes of PTSD-related CVD risk are unclear. These preclinical findings shed light on a potential therapeutic target and extend the current understanding of the regulation of brain RAS during fear learning and memory recall processes that are impaired in PTSD.

Credit: 
Experimental Biology

Depression medication could also protect against heart disease

image: Patricia A. Lozano, a research assistant at the Texas A&M University Rangel College of Pharmacy

Image: 
Patricia A. Lozano

The antidepression drug duloxetine could be beneficial to patients with both depression and cardiovascular disease, according to new studies performed in human blood and in mice. Globally, more than 300 million people have depression, which comes with an increased risk of developing cardiovascular disease.

When a blood vessel is injured, the platelets in our blood respond by forming clots that stop the bleeding. If this activation goes into overdrive, it can lead to thrombosis, a condition in which blood clots form inside blood vessels and can dislodge, leading to a heart attack or stroke. In the new studies, researchers showed that duloxetine inhibited platelet function and protected against clot formation.

"Understanding the antiplatelet effects of duloxetine is critical due to the prevalence of patients with depression and cardiovascular disease," said Patricia A. Lozano, a research assistant at the Texas A&M University Rangel College of Pharmacy. "Having one drug that can treat both conditions could help avoid drug interactions. Duloxetine may also serve as a blueprint for developing a novel class of antithrombotic agents."

Lozano, who works in the laboratory of Fatima Alshbool, PharmD, PhD, and Fadi Khasawneh, PhD, will present the new research at the American Society for Pharmacology and Experimental Therapeutics annual meeting during the virtual Experimental Biology (EB) 2021 meeting, to be held April 27-30.

"Our study shows, for the first time, that duloxetine, which has already been approved by the FDA for depression, has antithrombotic activity," said Lozano. "Repurposing an existing drug already approved by the FDA, helps avoid the lengthy and costly process of drug discovery and development."

The researchers studied duloxetine because it is a serotonin-norepinephrine reuptake inhibitor, which means it acts on the protein that controls levels of the neurotransmitter serotonin. In addition to playing a role in depression, serotonin is known to help control platelet activity.

Using human blood, the researchers performed a series of experiments to examine duloxetine's effects on platelets. They found that the antidepressant inhibited platelet aggregation in a dose-dependent manner, implying that the drug could prevent clot formation. Using a mouse model of thrombosis, the researchers also found that duloxetine increased the time it took for platelets to aggregate into a clot large enough to block an artery.

The researchers want to continue characterizing the antithrombotic effects of duloxetine. They also hope to collaborate with an expert in drug design to develop new drugs based on the structure of duloxetine and test their ability to protect from thrombosis.

Patricia A. Lozano will present the findings in poster R4622 (abstract). Contact the media team for more information or to obtain a free press pass to access the meeting.


Credit: 
Experimental Biology