Earth

How do birds breathe better? Researchers' discovery will throw you for a loop

video: Experiments on a network with two loops, the lower subject to oscillations that mimic breathing and the upper developing one-way flow.

Image: 
NYU's Applied Mathematics Laboratory

Birds breathe with greater efficiency than humans due to the structure of their lungs--looped airways that facilitate air flows that go in one direction--a team of researchers has found through a series of lab experiments and simulations.

The findings will appear Fri., March 19 in the journal Physical Review Letters (to be posted between 10 and 11 a.m. EDT).

The study, conducted by researchers at New York University and the New Jersey Institute of Technology, also points to smarter ways to pump fluids and control flows in applications such as respiratory ventilators.

"Unlike the air flows deep in the branches of our lungs, which oscillate back and forth as we breathe in and out, the flow moves in a single direction in bird lungs even as they inhale and exhale," explains Leif Ristroph, an associate professor at NYU's Courant Institute of Mathematical Sciences and the senior author of the paper. "This allows them to perform the most difficult and energetically costly activity of any animal: they can fly, and they can do so across whole oceans and entire continents and at elevations as high as Mount Everest, where the oxygen is extremely thin."

"The key is that bird lungs are made of looped airways--not just the branches and tree-like structure of our lungs--and we found that this leads to one-way or directed flows around the loops," adds Ristroph. "This wind ventilates even the deep recesses of the lungs and brings in fresh air."

Videos and an image depicting the work are available on Google Drive: https://drive.google.com/drive/folders/1ifFo9zdNIG7Pl9_7_D88zUM0d_EmEFct

The one-way flow of air in birds' breathing systems was discovered a century ago. But what had remained a mystery was an explanation of the aerodynamics behind this efficient breathing system.

To explore this, the researchers conducted a series of experiments that mimicked birds' breathing in NYU's Applied Mathematics Lab.

For the experiments, they built piping filled with water--to replicate air flow--and bent the piping to imitate the loop-like structure of birds' lungs--similar to the way freeways are connected by on-ramps and off-ramps. The researchers mixed microparticles into the water, which allowed them to track the direction of the water flow.

These experiments showed that back-and-forth motions generated by breathing were transformed into one-way flows around the loops.

"This is in essence what happens inside lungs, but now we could actually see and measure--and thus understand--what was going on," explains Ristroph, director of the Applied Mathematics Lab. "The way this plays out is that the network has loops and thus junctions, which are a bit like 'forks in the road' where the flows have a choice about which route to take."

The scientists then used computer simulations to reproduce the experimental results and better understand the mechanisms.

"Inertia tends to cause the flows to keep going straight rather than turn down a side street, which gets obstructed by a vortex," explains NJIT assistant professor and co-author Anand Oza. "This ends up leading to one-way flows and circulation around loops because of how the
junctions are hooked up in the network."

Ristroph points to several potential engineering uses for these findings.

"Directing, controlling, and pumping fluids is a very common goal in many applications, from healthcare to chemical processing to the fuel, lubricant, and coolant systems in all sorts of machinery," he observes. "In all these cases, we need to pump fluids in specific directions for specific purposes, and now we've learned from birds an entirely new way to accomplish this that we hope can be used in our technologies."

Credit: 
New York University

The blast that shook the ionosphere

image: The epicenter in Beirut, before and after the explosion (Bhaskar Kundu, et al. Scientific Reports. February 2, 2021).

Image: 
Bhaskar Kundu, et al. Scientific Reports. February 2, 2021

A 2020 explosion in Lebanon's port city of Beirut led to a southward-bound, high-velocity atmospheric wave that rivaled ones generated by volcanic eruptions.

Just after 6 p.m. local time (15:00 UTC) on August 4, 2020, more than 2,750 tonnes of unsafely stored ammonium nitrate exploded in Lebanon's port city of Beirut, killing around 200 people, making more than 300,000 temporarily homeless, and leaving a 140-metre-diameter crater in its wake. The blast is considered one of the most powerful non-nuclear, man-made explosions in human history.

Now, calculations by Hokkaido University scientists in Japan have found that the atmospheric wave from the blast led to electron disturbances high in Earth's upper atmosphere. They published their findings in the journal Scientific Reports.

The team of scientists, which included colleagues from the National Institute of Technology Rourkela in India, calculated changes in total electron content in Earth's ionosphere: the part of the atmosphere from around 50 to 965 kilometres in altitude. Natural phenomena like extreme ultraviolet radiation and geomagnetic storms, and man-made activities like nuclear tests, can disturb the ionosphere's electron content.

"We found that the blast generated a wave that travelled in the ionosphere in a southwards direction at a velocity of around 0.8 kilometres per second," says Hokkaido University Earth and Planetary scientist Kosuke Heki. This is similar to the speed of sound waves travelling through the ionosphere.

The team calculated changes in ionospheric electron content by looking at differences in delays experienced by microwave signals transmitted by GPS satellites to their ground stations. Changes in electron content affect these signals as they pass through the ionosphere and must be regularly taken into consideration to accurately measure GPS positions.
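The delay-to-electron-content relationship the team exploits can be illustrated with the standard first-order ionospheric formula (a textbook relation offered here for illustration, not the study's actual processing code): the differential delay between the two GPS carrier frequencies is proportional to the slant total electron content along the signal path.

```python
def slant_tec(p1_m, p2_m, f1_hz=1575.42e6, f2_hz=1227.60e6):
    """Slant total electron content (electrons/m^2) from dual-frequency
    GPS pseudoranges p1_m and p2_m (in metres, L1 and L2 carriers).

    First-order ionosphere: each signal is delayed by 40.3 * TEC / f^2
    metres, so differencing the L1 and L2 ranges isolates TEC."""
    return (f1_hz**2 * f2_hz**2) / (40.3 * (f1_hz**2 - f2_hz**2)) * (p2_m - p1_m)

# One metre of L1/L2 differential delay corresponds to roughly 9.5 TEC units
# (1 TECU = 1e16 electrons/m^2).
tec_units = slant_tec(0.0, 1.0) / 1e16
```

Tracking `tec_units` over time at many receiver-satellite pairs is what lets a travelling ionospheric disturbance, like the one from the blast, show up as a propagating ripple in the data.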

The scientists also compared the magnitude of the ionospheric wave generated by the Beirut blast to similar waves following natural and anthropogenic events. They found that the wave generated by the Beirut blast was slightly larger than a wave generated by the 2004 eruption of Asama Volcano in central Japan, and comparable to ones that followed other recent eruptions on Japanese islands.

The energy of the ionospheric wave generated by the Beirut blast was also significantly larger than that of the wave generated by a more energetic 1996 explosion at a Wyoming coal mine in the USA. The Beirut blast was equivalent to an explosion of 1.1 kilotons of TNT, while the Wyoming explosion was equivalent to 1.5 kilotons of TNT; yet the total electron content disturbance from the Wyoming explosion was only about one-tenth of that caused by the Beirut blast. The scientists believe this was partially because the Wyoming mine was located in a somewhat protected pit.

Credit: 
Hokkaido University

While drowning numbers soar, beach safety programs are largely unevaluated

A global review of coastal drowning science has found there is only one study worldwide that has evaluated beach safety education programs in schools.

Researchers from UNSW's Beach Safety Research Group have conducted the first in-depth review specific to coastal drowning.

The study, published in PLOS ONE, reviewed 146 coastal drowning studies from around the world.

"We found that evaluation of coastal drowning prevention strategies is rare," said William Koon, the lead author of the study and a PhD candidate in the School of Biological, Earth and Environmental Sciences said.

"This means we simply don't have enough data showing what works and what doesn't work.

"There was only one study worldwide - involving a private primary school in Queensland - to see if beach safety education program is effective in schools."

Since the review, an additional school-based evaluation of water safety virtual reality programs in Victoria has been published.

Mr Koon said the review's findings are concerning as tens of thousands of Australian primary and secondary school students participate in beach or water safety programs from lifeguards and lifesavers every year.

"There is remarkably little information out there to say [firstly], does it work and [secondly], here's how it works best," Mr Koon said.

"We need to assess if programs function as intended, and continually refine them to improve effectiveness."

Researchers and UNSW Beach Safety Research Group founding members Dr Amy Peden, Dr Jaz Lawes and Professor Rob Brander were also involved in the study.

"I find it interesting that over the last 16 years we haven't really seen any improvement in the number of coastal drownings each year in Australia, despite lots of ongoing school and public education programs," Professor Brander said.

The study found that more than three-quarters (76.7%) of coastal drowning research was from high-income countries. Australia is leading the way with 49 studies, followed by the US (28 studies).

It also found that existing drowning prevention strategies are largely un-evaluated, with little research being done in low-income countries where a majority of drowning events occur.

Mr Koon said while there were 125 coastal drowning deaths in Australia last year, the World Health Organisation estimates that more than 90 percent of all drowning occurs in low-and middle-income countries.

He said to address this global health problem, researchers need to start looking to local data from lower-resourced settings to understand the coastal safety issues there and prioritise drowning prevention programs for different groups of people.

"Researchers like myself need to ask 'is what I learned from Australian coastal safety research applicable to a place like Ghana or Costa Rica or India, where similar hazards exist, but the cultural context is very different'," he said.

The researchers conducted the review to better understand the science driving safety initiatives and highlight gaps in the field of coastal drowning, in order to prioritise future studies and prevention initiatives that will ultimately save lives.

They focused on fatal unintentional coastal drowning that was unrelated to boating, disasters (e.g. cyclones) or occupational accidents (e.g. commercial fishing or scuba diving).

They found studies inconsistently reported intentional, occupational and boating coastal drowning deaths, and the terminology used to describe coastal waters was also non-uniform.

"Reviews such as this one are so important as they highlight gaps in the current evidence base, identifying opportunities for future research to really make a difference, rather than more of the same," study co-author Dr Amy Peden, from the UNSW School of Population Health said.

Over 100 different risk factors related to coastal drowning were identified, but the data sources, outcomes used, and analyses employed were variable.

"What we learned is that the consistency in reporting and analysing of these different scenarios was just all over the place," Mr Koon said. "Not every place in the world is recording drowning in the same way.

Studies were also published in a variety of journals representing different disciplines.

Many studies recommended prevention measures, most frequently related to education, lifeguards and signage.

"Is that enough? Are these efforts working? How well? We don't have enough data to answer these questions," Mr Koon said.

There are limited resources for evaluations on water safety programs, he said.

"But someone with a long-term view should start to see that investment in monitoring and evaluation is a way to really make sure the work is doing what it's supposed to do," he said.
He says it is important to continue to refine school water safety programs.

"There are probably different lessons to be learned, such as messaging at different ages, whether that's stopping to look for rips or putting on sunscreen, avoiding alcohol or avoiding jumping off rocks and cliffs.

He said there has been "remarkably little information" studying prevention measures.

"Research on danger signs on beaches has already told us that people don't really look at signs, and if they do, are not really influenced to change decisions or behaviour," he said.

"We're still in the stage where most programs are driven by expert opinion without much supporting data."

He said Australia continues to lead the world in drowning prevention and water safety research because organisations like Surf Life Saving Australia and Royal Life Saving Society - Australia maintain robust databases, with some of the most detailed drowning data in the world.

UNSW Beach Safety Research Group researchers are currently working with Lake Macquarie lifeguards to evaluate a beach safety program which they deliver to high school students.

"We hope to be able to offer some recommendations on how other people in the industry can move forward with evaluating some of their school and other beach safety education programs," he said.

"As our review has shown, the kind of information this evaluation will yield addresses a massive gap in our understanding of what's effective and what's not in the drowning prevention education space," Dr Peden said.

"Identifying what can improving safety and reduce young people's risk of drowning during adolescence can result in positive behaviours throughout adulthood."

Credit: 
University of New South Wales

Extinct Caribbean bird's closest relatives hail from Africa, South Pacific

image: These tiny fossil bones belonged to a Haitian cave-rail, Nesotrochis steganinos, an extinct species unique to the Caribbean. While the bird may have resembled a common gallinule, pictured here, ancient DNA analysis reveals Nesotrochis' closest relatives are found in Africa, New Guinea and New Zealand.

Image: 
Jeff Gage/Florida Museum of Natural History

GAINESVILLE, Fla. --- In a genetic surprise, ancient DNA shows the closest family members of an extinct bird known as the Haitian cave-rail are not in the Americas, but in Africa and the South Pacific, uncovering an unexpected link between Caribbean bird life and the Old World.

Like many animals unique to the Caribbean, cave-rails became extinct soon after people settled the islands. The last of three known West Indian species of cave-rails - flightless, chicken-sized birds - vanished within the past 1,000 years. Florida Museum of Natural History researchers sought to resolve the group's long-debated ancestry by analyzing DNA from a fossil toe bone of the Haitian cave-rail, Nesotrochis steganinos. But they were unprepared for the results: The genus Nesotrochis is most closely related to the flufftails, flying birds that live in sub-Saharan Africa, Madagascar and New Guinea, and the adzebills, large, extinct, flightless birds native to New Zealand.

The study presents the first example of a Caribbean bird whose closest relatives live in the Old World, showcasing the power of ancient DNA to reveal a history erased by humans.

The discovery was "just mind-blowing," said study lead author Jessica Oswald, who began the project as a postdoctoral researcher at the Florida Museum.

"If this study had not happened, we might still be under the assumption that the closest relatives of most things in the Caribbean are on the mainland in the Americas," said Oswald, now a postdoctoral researcher at the University of Nevada, Reno and a Florida Museum research affiliate. "This gives us an understanding of the region's biodiversity that would otherwise be obscured."

Many animals evolved unusual forms on islands, often making it difficult to classify extinct species based on their physical characteristics alone. But advancements in extracting viable DNA from fossils now enable scientists like Oswald to answer longstanding questions with ancient genetic evidence. Oswald described her work as similar to a forensic investigation, tracing the evolutionary backstory of extinct animals by piecing together fragmented, degraded genetic material.

"Understanding where all of these extinct species fit into a larger family tree or evolutionary history gives us insight into what a place looked like before people arrived," she said. "That's why my job is so fun. It's always this whodunit."

Oswald was just starting her ancient DNA work at the Florida Museum when David Steadman, curator of ornithology and study co-author, suggested the Haitian cave-rail as a good candidate for analysis.

Cave-rails share physical characteristics with several types of modern birds, and scientists have debated for decades whether they are most closely related to wood rails, coots or swamphens - birds that all belong to the rail family, part of a larger group known as the Gruiformes. Oswald and Steadman hoped that studying cave-rail DNA would clarify "what the heck this thing is," Oswald said.

When preliminary results indicated the species had a trans-Atlantic connection, Steadman, who has worked in the Caribbean for more than 40 years, was skeptical.

The genetics also showed that the cave-rail isn't a rail at all: While flufftails and adzebills are also members of the Gruiformes, they are in separate families from rails.

"It just didn't seem logical that you'd have to go across the Atlantic to find the closest relative," Steadman said. "But the fact that people had a hard time classifying where Nesotrochis was within the rails - in hindsight, maybe that should have been a clue. Now I have a much more open mind."

One reason the cave-rail was so difficult to classify is that when birds lose the ability to fly, they often converge on a similar body plan, Steadman said. Flightlessness is a common adaptation in island birds, which face far fewer predators in the absence of humans and invasive species such as dogs, cats, rats and pigs.

"You don't have to outfly or outrun predators, so your flying and running abilities become reduced," Steadman said. "Because island birds spend less energy avoiding predators, they also tend to have a lower metabolic rate and nest on the ground. It's no longer life in the fast lane. They're essentially living in a Corona commercial."

While sheltered from the mass extinctions that swept the mainland, cave-rails were helpless once people set foot on the islands, having lost their defenses and cautiousness.

"Being flightless and plump was not a great strategy during human colonization of the Caribbean," said study co-author Robert Guralnick, Florida Museum curator of biodiversity informatics.

How did cave-rails get to the Caribbean in the first place? Monkeys and capybara-like rodents journeyed from Africa to the New World about 25-36 million years ago, likely by rafting, and cave-rails may also have migrated during that timespan, Steadman said. He and Oswald envision two probable scenarios: The ancestors of cave-rails either made a long-distance flight across an Atlantic Ocean that was not much narrower than today, or the group was once more widespread across the continents, with more relatives remaining to be discovered in the fossil record.

Other researchers have recently published findings that corroborate the story told by cave-rail DNA: A study of foot features suggested Nesotrochis could be more closely related to flufftails than rails, and other research showed that adzebills are close relatives of the flufftails. Like cave-rails, adzebills are also an example of a flightless island bird extinguished by human hunters.

"Humans have meddled so much in the region and caused so many extinctions, we need ancient DNA to help us sort out what's related to what," Oswald said.

The findings also underscore the value of museum collections, Steadman said. The toe bone Oswald used in her analysis was collected in 1983 by Charles Woods, then the Florida Museum's curator of mammals. At that time, "nobody was thinking about ancient DNA," Steadman said. "It shows the beauty of keeping things well curated in a museum."

Credit: 
Florida Museum of Natural History

Faster drug discovery through machine learning

image: MIT researchers have developed a machine learning-based technique to more quickly calculate the binding affinity of a drug molecule (represented in pink) with a target protein (the circular structure).

Image: 
Image courtesy of MIT News, Xinqiang Ding and Bin Zhang

Drugs can only work if they stick to their target proteins in the body. Assessing that stickiness is a key hurdle in the drug discovery and screening process. New research combining chemistry and machine learning could lower that hurdle.

The new technique, dubbed DeepBAR, quickly calculates the binding affinities between drug candidates and their targets. The approach yields precise calculations in a fraction of the time compared to previous state-of-the-art methods. The researchers say DeepBAR could one day quicken the pace of drug discovery and protein engineering.

"Our method is orders of magnitude faster than before, meaning we can have drug discovery that is both efficient and reliable," says Bin Zhang, the Pfizer-Laubach Career Development Professor in Chemistry at MIT, an associate member of the Broad Institute of MIT and Harvard, and a co-author of a new paper describing the technique.

The research appears today in the Journal of Physical Chemistry Letters. The study's lead author is Xinqiang Ding, a postdoc in MIT's Department of Chemistry.

The affinity between a drug molecule and a target protein is measured by a quantity called the binding free energy -- the smaller the number, the stickier the bind. "A lower binding free energy means the drug can better compete against other molecules," says Zhang, "meaning it can more effectively disrupt the protein's normal function." Calculating the binding free energy of a drug candidate provides an indicator of a drug's potential effectiveness. But it's a difficult quantity to nail down.
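The "smaller the number, the stickier the bind" relationship can be made concrete with a textbook thermodynamic relation (an illustration, not part of the paper): the binding free energy sets the dissociation constant via K_d = c° · exp(ΔG / RT), so each roughly 1.4 kcal/mol decrease in ΔG at room temperature tightens binding about tenfold.

```python
import math

R_KCAL = 1.987204e-3  # gas constant in kcal/(mol*K)

def dissociation_constant(delta_g_kcal_per_mol, temp_k=298.15):
    """Dissociation constant K_d (in molar units, standard state c = 1 M)
    from the binding free energy.

    More negative delta_g -> smaller K_d -> tighter ("stickier") binding."""
    return math.exp(delta_g_kcal_per_mol / (R_KCAL * temp_k))

# A -9 kcal/mol binder dissociates around a quarter of a micromolar:
kd = dissociation_constant(-9.0)
```

This exponential dependence is why even modest errors in a computed binding free energy translate into order-of-magnitude errors in predicted potency, and why fast yet exact methods matter.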

Methods for computing binding free energy fall into two broad categories, each with its own drawbacks. One category calculates the quantity exactly, eating up significant time and computer resources. The second category is less computationally expensive, but it yields only an approximation of the binding free energy. Zhang and Ding devised an approach to get the best of both worlds.

Exact and efficient

DeepBAR computes binding free energy exactly, but it requires just a fraction of the calculations demanded by previous methods. The new technique combines traditional chemistry calculations with recent advances in machine learning.

The "BAR" in DeepBAR stands for "Bennett acceptance ratio," a decades-old algorithm used in exact calculations of binding free energy. Using the Bennet acceptance ratio typically requires a knowledge of two "endpoint" states (e.g., a drug molecule bound to a protein and a drug molecule completely dissociated from a protein), plus knowledge of many intermediate states (e.g., varying levels of partial binding), all of which bog down calculation speed.

DeepBAR slashes those in-between states by deploying the Bennett acceptance ratio in machine-learning frameworks called deep generative models. "These models create a reference state for each endpoint, the bound state and the unbound state," says Zhang. These two reference states are similar enough that the Bennett acceptance ratio can be used directly, without all the costly intermediate steps.

In using deep generative models, the researchers were borrowing from the field of computer vision. "It's basically the same model that people use to do computer image synthesis," says Zhang. "We're sort of treating each molecular structure as an image, which the model can learn. So, this project is building on the effort of the machine learning community."

While adapting a computer vision approach to chemistry was DeepBAR's key innovation, the crossover also raised some challenges. "These models were originally developed for 2D images," says Ding. "But here we have proteins and molecules -- it's really a 3D structure. So, adapting those methods in our case was the biggest technical challenge we had to overcome."

A faster future for drug screening

In tests using small protein-like molecules, DeepBAR calculated binding free energy nearly 50 times faster than previous methods. Zhang says that efficiency means "we can really start to think about using this to do drug screening, in particular in the context of Covid. DeepBAR has the exact same accuracy as the gold standard, but it's much faster." The researchers add that, in addition to drug screening, DeepBAR could aid protein design and engineering, since the method could be used to model interactions between multiple proteins.

DeepBAR is "a really nice computational work" with a few hurdles to clear before it can be used in real-world drug discovery, says Michael Gilson, a professor of pharmaceutical sciences at the University of California at San Diego, who was not involved in the research. He says DeepBAR would need to be validated against complex experimental data. "That will certainly pose added challenges, and it may require adding in further approximations."

In the future, the researchers plan to improve DeepBAR's ability to run calculations for large proteins, a task made feasible by recent advances in computer science. "This research is an example of combining traditional computational chemistry methods, developed over decades, with the latest developments in machine learning," says Ding. "So, we achieved something that would have been impossible before now."

Credit: 
Massachusetts Institute of Technology

Tweens and TV: UCLA's 50-year survey reveals the values kids learn from popular shows

image: UCLA's chart tracking the ranking of TV values over each decade, from 1967 to 2017.

Image: 
UCLA Center for Scholars and Storytellers

How important is fame? What about self-acceptance? Benevolence? The messages children between the ages of 8 and 12 glean from TV play a significant role in their development, influencing attitudes and behaviors as they grow into their teenage years and beyond, UCLA psychologists say.

Now, a new report by UCLA's Center for Scholars and Storytellers assesses the values emphasized by television programs popular with tweens over each decade from 1967 to 2017, charting how 16 values have waxed and waned in importance during that 50-year span.

Among the key findings is that fame, after nearly 40 years of ranking near the bottom (it was 15th in 1967, 1987 and 1997), rose to become the No. 1 value in 2007, then dropped to sixth in importance in 2017.

Achievement -- being very successful -- was ranked first in 2017, with self-acceptance, image, popularity and being part of a community rounding out the top five.

The report, "The Rise and Fall of Fame: Tracking the Landscape of Values Portrayed on Television from 1967 to 2017" (PDF), evaluated two programs per decade (and four in 2017), from "The Andy Griffith Show" in 1967 and "Happy Days" in 1977 to "American Idol" and "Hannah Montana" in 2007 and "America's Got Talent" and "Girl Meets World" in 2017.

Like fame, values such as community feeling and benevolence have also seen dramatic rises and falls over the past half-century, with their rankings typically echoing changes in the larger culture, the researchers found.

Being part of community, for instance, which ranked No. 1 in 1967, 1977 and 1997 (and No. 2 in 1987), plummeted to the 11th spot in 2007 -- 10 spots below fame -- before rising again to fifth in 2017. Likewise, being kind and helping others, the No. 2 value in 1967 and 1997, fell to the 12th spot in 2007. It is now ranked eighth.

"I believe that television reflects the culture, and this half-century of data shows that American culture has changed drastically," said report author Yalda Uhls, founder and executive director of the Center for Scholars and Storytellers and an adjunct assistant professor of psychology. "Media plays an important role as young people are developing a concept of the social world outside of their immediate environment."

The concepts children develop can also vary widely based on what types of programs they're watching, according to the authors, who found a stark divergence between the values conveyed in reality shows -- first evaluated in 2007 -- and those of scripted fictional shows.

Values in reality TV vs. fictional programs

The most popular tween reality shows in 2017, based on Nielsen ratings, were "America's Got Talent" and "American Ninja Warrior," while the top two scripted shows were "Thundermans" and "Girl Meets World." In the scripted shows, the top values conveyed were self-acceptance, benevolence and being part of a community. In contrast, the top values conveyed in reality shows were fame, image and self-centeredness.

Reality shows, created for a broad audience and watched frequently by tweens, often highlight competition and the importance of being No. 1 and include bullying, cheating and a winning-at-all-costs value system, the authors note.

"If tweens watch, admire and identify with people who mostly care about fame and winning, these values may become even more important in our culture," said the report's lead author, Agnes Varghese, a fellow of the center and a UC Riverside graduate student. "Reality television shows continued to reflect the same trend we saw in 2007, with self-focused values such as fame ranking highest."

The authors recommend that parents help children understand that reality shows do not depict the experience of the average person and that fictional shows do not adequately depict the hard work and struggles associated with achieving fame.

TV values and the rise of social media

Uhls believes the explosive growth and popularity of social media platforms such as Facebook, launched in 2004, and YouTube, launched in 2005, may have influenced television content creators in the first decade of the 2000s to make fame-focused tween shows. Other research has shown that social media growth was accompanied by a rise in narcissism and a decrease in empathy among college students in the U.S., she notes.

"I don't think this is a coincidence," said Uhls, who was formerly a movie studio executive. "The growth of social media gives children access to an audience beyond the school grounds."

By 2017, the social media landscape had expanded to include platforms like Snapchat and Instagram, and the access they provided to an ever-widening audience made popularity seem more easily attainable. For that reason, Uhls believes, achieving fame may have become less desirable and unique. In addition, the severe recession of 2007-09 may have shifted the culture away from self-focused values such as fame and getting rich.

Research conducted by UCLA distinguished professor of psychology Patricia Greenfield and colleagues has shown that society tends to become more community-focused in times of collective distress, Uhls noted.

Because children, particularly in their tween years, are forming a belief system that integrates the many messages about desirable future aspirations they receive from parents, school, peers and media, it is crucial to understand the role television plays in promoting values -- both positive and negative, the researchers say.

Credit: 
University of California - Los Angeles

Smart quantum technologies for secure communication

Researchers from Louisiana State University have introduced a smart quantum technology for the spatial mode correction of single photons. In a paper featured on the cover of the March 2021 issue of Advanced Quantum Technologies, the authors exploit the self-learning and self-evolving features of artificial neural networks to correct the distorted spatial profile of single photons.

The authors, PhD candidate Narayan Bhusal, postdoctoral researcher Chenglong You, graduate student Mingyuan Hong, undergraduate student Joshua Fabre, and Assistant Professor Omar S. Magaña-Loaiza of LSU--together with collaborators Sanjaya Lohani, Erin M. Knutson, and Ryan T. Glasser of Tulane University and Pengcheng Zhao of Qingdao University of Science and Technology--report on the potential of artificial intelligence to correct spatial modes at the single-photon level.

"The random phase distortion is one of the biggest challenges in using spatial modes of light in a wide variety of quantum technologies, such as quantum communication, quantum cryptography, and quantum sensing," said Bhusal. "In this paper, we use artificial neurons to correct distorted spatial modes of light at the single-photon level. Our method is remarkably effective and time-efficient compared to conventional techniques. This is an exciting development for the future of free-space quantum technologies."
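The LSU team's actual method trains self-learning neural networks on two-dimensional single-photon spatial modes; as a deliberately minimal, hypothetical stand-in for the same underlying idea (infer a phase distortion from a measured intensity pattern, then apply its inverse), the numpy sketch below recovers a simple linear phase tilt from the centroid of a 1-D far-field pattern. All names and the 1-D model are our illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# Toy 1-D stand-in for phase-distortion correction: a linear phase ramp
# across an aperture shifts the far-field intensity peak, so the ramp can
# be estimated from the shift and then cancelled.
N = 256
n = np.arange(N)
aperture = np.exp(-((n - N / 2) ** 2) / (2 * (N / 10) ** 2))  # Gaussian beam

def far_field(tilt_bins):
    """Far-field intensity after a linear phase ramp of `tilt_bins`
    cycles across the aperture (the ramp shifts the spectrum by that
    many frequency bins)."""
    field = aperture * np.exp(2j * np.pi * tilt_bins * n / N)
    return np.abs(np.fft.fftshift(np.fft.fft(field))) ** 2

def estimate_tilt(intensity):
    """Recover the tilt from the intensity centroid of the far field."""
    k = n - N // 2
    return np.sum(k * intensity) / np.sum(intensity)

distorted = far_field(7.3)             # unknown tilt applied by 'turbulence'
tilt_hat = estimate_tilt(distorted)    # measure the distortion...
corrected = far_field(7.3 - tilt_hat)  # ...and apply the opposite ramp
```

In the real experiment the distortions are random 2-D phase screens rather than a single tilt, which is why a trained network, not a closed-form centroid, is needed.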

The newly developed technique boosts the channel capacity of optical communication protocols that rely on structured photons.

"One important goal of the Quantum Photonics Group at LSU is to develop robust quantum technologies that work under realistic conditions," said Magaña-Loaiza. "This smart quantum technology demonstrates the possibility of encoding multiple bits of information in a single photon in realistic communication protocols affected by atmospheric turbulence. Our technique has enormous implications for optical communication and quantum cryptography. We are now exploring paths to implement our machine learning scheme in the Louisiana Optical Network Initiative (LONI) to make it smart, secure, and quantum."

The U.S. Army Research Office is supporting Magaña-Loaiza's research on a project titled "Quantum Sensing, Imaging, and Metrology using Multipartite Orbital Angular Momentum."

"We are still in the fairly early stages of understanding the potential for machine learning techniques to play a role in quantum information science," said Dr. Sara Gamble, program manager at the Army Research Office, an element of DEVCOM ARL. "The team's result is an exciting step forward in developing this understanding, and it has the potential to ultimately enhance the Army's sensing and communication capabilities on the battlefield."

Credit: 
Louisiana State University

Model predicts urban development and greenhouse gases will fuel urban floods

When rain began falling in northern Georgia on Sept. 15, 2009, little did Atlantans know that they would bear witness to epic flooding throughout the city. Neighborhoods like Peachtree Hills were submerged; Georgia's busiest expressway was underwater, as were roads and bridges; untreated sewage mingled with rising flood waters; cars and people were swept away. Then-Georgia Gov. Sonny Perdue declared a state of emergency.

Moisture from the Gulf of Mexico fueled the flood of 2009. A decade later, Arizona State University researchers are asking whether a combination of urban development--and climate change fueled by greenhouse gasses--could bring about comparable scenarios in U.S. cities. Based on a just-published study, the answer is yes.

"When we account for these twin forcing agents of environmental change, the effect of the built environment and the effect of greenhouse gasses, we note a strong tendency toward increased extreme precipitation over future US metropolitan regions," said Matei Georgescu, associate professor in ASU's School of Geographical Sciences and Urban Planning and lead author of the study.

Previous studies have shown that urban development modifies precipitation, thanks to what's known as the urban heat-island effect, the difference between the temperature in a city and the surrounding rural area. As a city grows, it gets warmer. The added warmth adds energy to the air, which forces it to rise faster, condense, form precipitation and rain out over the city or downwind of the city. So, the amount of precipitation a city receives either increases or decreases in response to the urban heat-island effect.

However, when greenhouse gasses and urban development are both taken into account, regional climate modeling focused on the continental United States shows compensating impacts between the effect of urban development and greenhouse gas emissions on extreme precipitation.

The study was published online in the journal Environmental Research Letters.

Researchers have not previously looked at these two variables in tandem. Studies on future precipitation over urban environments typically examine effects for a limited number of events, and they do not account for the twin forcing agents of urban- and greenhouse-gas induced climate change.

"This new study is unique," said Georgescu. "We used climate-scale simulations with a regional climate model to examine potential changes in future extreme precipitation resulting from both urban expansion and increases in greenhouse gasses, across dozens of cities across the continental United States."

In essence, the new study showed that incorporating greenhouse gasses into a regional climate model offset the sometimes-diminishing effect of urban development on extreme precipitation, said Georgescu.

"These are the effects our cities are likely to experience when accounting for the twin forcing agents of urban expansion and greenhouse gas emissions, simultaneously," explained Georgescu. "What this means for U.S. cities in the future is the need for a consistent response to an increase in extreme precipitation. We're no longer likely to see a decrease in precipitation as we've seen before."

Like Atlanta, cities across the U.S., including Denver, Phoenix and Houston, appear to be vulnerable to extreme precipitation and its resultant flooding. Georgescu said the study's findings show the pressing need for cities to develop policies to address flooding that threatens each city's unique resilience and planned infrastructure investments.

"If we trust the models' capability to simulate average and extreme precipitation so well, and our results demonstrate such simulation skill, then we can conduct simulations that include future urbanization, future greenhouse gasses, separately and then together, and trust what the model will tell us," explained Georgescu.

But it's not just about reducing greenhouse gas emissions, he noted. "It's also about how you build cities. How extensive they are, how vertical they are, how dense they are, how much vegetation there is, how much waste heat you put into the environment through electricity use, through air conditioning, or through transportation. All of these things can impact future precipitation in our cities."

In fact, the study has important implications for climate change adaptation and planning. The study highlights the complex and regionally specific ways in which the competing forces of greenhouse gases and urban development can impact rainfall across U.S. metropolitan regions, explained Ashley Broadbent, assistant research professor in ASU's School of Geographical Sciences and Urban Planning.

"This complexity reinforces that future adaptation efforts must be informed by simulations that account for these interacting agents of environmental change," he said.

Credit: 
Arizona State University

Injections or light irradiation?

image: Light instead of injections: A new concept of drug delivery system that automatically releases medication from an in vivo medical device by simply shining light whenever the drug injection is needed.

Image: 
POSTECH

A new concept of on-demand drug delivery system has emerged in which the drugs are automatically released from in vivo medical devices simply by shining light on the skin.

A research team led by Professor Sei Kwang Hahn of the Department of Materials Science and Engineering and Professor Kilwon Cho of the Department of Chemical Engineering at POSTECH has developed an on-demand drug delivery system (DDS) using an organic photovoltaic cell coated with upconversion nanoparticles. The newly developed DDS uses the nanoparticles to convert skin-penetrating near-infrared (NIR) light into visible light so that drug release can be controlled in medical devices installed in the body. These research findings were published in Nano Energy on March 1, 2021.

For patients who need periodic drug injections, as in the case of diabetes, DDSs that automatically administer drugs in lieu of repetitive shots are being researched and developed. However, their size and shape have been restricted by the limitations of the power supplies needed to operate such devices.

The research team found the answer in solar power. Upconversion nanoparticles were used in the photovoltaic device to induce power generation from NIR light, which can penetrate the skin. An organic photovoltaic cell coated with core-shell structured upconversion nanoparticles was designed to drive the mechanical and electronic drug delivery system, generating an electric current upon irradiation with NIR light. When electricity is applied in this manner, the thin gold film sealing the drug reservoir melts and the drug is released.

"The combination of a flexible photovoltaic cell and a drug delivery system enables on-demand drug release using light," explained Professor Sei Kwang Hahn. "The drug delivery system is activated using near-infrared light that is harmless to the human body and is highly skin-penetrating."

He added, "Since this enables nimble control of drug release of medical devices inserted into the body by using near-infrared light, it is highly anticipated to contribute to the development of phototherapy technology using implantable medical devices."

Credit: 
Pohang University of Science & Technology (POSTECH)

Improved tool to help understand the brain, one section at a time

In the brain, billions of neurons reach to each other, exchanging information, storing memories, reacting to danger and more. Scientists have barely scratched the surface of the most complex organ, but a new device to automatically collect tissue for analysis may allow for a quicker, deeper dive into the brain.

Their approach was published in IEEE/CAA Journal of Automatica Sinica, a joint publication of the IEEE and the Chinese Association of Automation.

"The ultimate goal of this study is to further promote the speed and quality of 3D-reconstruction of brain neural connections," said the author Long Cheng, professor with the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences.

Currently, researchers make thin serial slices of biopsied brain tissue or of animal tissue samples -- smaller than the width of a human hair -- using a cutting tool called a microtome. The tissue floats in water, from which researchers collect the sections and place them on a silicon wafer to be imaged with an electron microscope. Once the images are taken, they are digitally reconstructed into a 3D model.

"The manual collection of brain sections requires operators with a very high level of professional skill, and it also consumes considerable time and energy," Cheng said. "A natural way of overcoming this limitation is to employ an automation technique using a robot to improve the collection effectiveness."

The researchers developed a circular silicon wafer that rotates as part of the microtome. As brain sections are cut, the rotation motion moves the water so that the sections adhere automatically to the wafer. The device, called the automated silicon-substrate ultra-microtome (ASUM), is controlled by an automated system that detects the brain sections on the surface of the water to improve collection efficiency, increasing the number of sections each wafer can hold.

"The proposed ASUM can reduce the collection skill requirement of the operator, and the interventions the operator must perform are less demanding than using existing assisting devices," Cheng said. "It also ensures the quality of electron microscopic imaging of brain sections without cumbersome post-processing operations."

However, Cheng said, it is not a completely automated system, as the device cannot automatically replace the silicon wafer. Next, the researchers plan to introduce an automatic silicon wafer replacement device that incorporates an advanced control system.

"Understanding the structure of neural connections in the brain is helpful to explore the working mechanism of the human brain, so as to promote the diagnosis and treatment of brain diseases and the development of brain-like intelligence systems," Cheng said. "Our ultimate goal is to test whether the quality and efficiency of the reconstruction of the brain's neural network can be improved by the designed automatic collection device."

Credit: 
Chinese Association of Automation

Traces of Earth's early magma ocean identified in Greenland rocks

image: At first glance the rocks that make up Greenland's Isua supracrustal belt look just like any modern basalt you'd find on the sea floor. But this outcrop, which was first described in the 1960s, is the oldest exposure of rocks on Earth. It is known to contain the earliest evidence of microbial life and plate tectonics.

Image: 
Hanika Rizo

New research led by the University of Cambridge has found rare evidence - preserved in the chemistry of ancient rocks from Greenland - which tells of a time when Earth was almost entirely molten.

The study, published in the journal Science Advances, yields information on an important period in our planet's formation, when a deep sea of incandescent magma stretched across Earth's surface and extended hundreds of kilometres into its interior.

It is the gradual cooling and crystallisation of this 'magma ocean' that set the chemistry of Earth's interior - a defining stage in the assembly of our planet's structure and the formation of our early atmosphere.

Scientists know that catastrophic impacts during the formation of the Earth and Moon would have generated enough energy to melt our planet's interior. But we don't know much about this distant and fiery phase of Earth's history because tectonic processes have recycled almost all rocks older than 4 billion years.

Now researchers have found the chemical remnants of the magma ocean in 3.6-billion-year-old rocks from southwestern Greenland.

The findings support the long-held theory that Earth was once almost entirely molten and provide a window into a time when the planet started to solidify and develop the chemistry that now governs its internal structure. The research suggests that other rocks on Earth's surface may also preserve evidence of ancient magma oceans.

"There are few opportunities to get geological constraints on the events in the first billion years of Earth's history. It's astonishing that we can even hold these rocks in our hands - let alone get so much detail about the early history of our planet," said lead author Dr Helen Williams, from Cambridge's Department of Earth Sciences.

The study brings forensic chemical analysis together with thermodynamic modelling in search of the primeval origins of the Greenland rocks, and how they got to the surface.

At first glance, the rocks that make up Greenland's Isua supracrustal belt look just like any modern basalt you'd find on the sea floor. But this outcrop, which was first described in the 1960s, is the oldest exposure of rocks on Earth. It is known to contain the earliest evidence of microbial life and plate tectonics.

The new research shows that the Isua rocks also preserve rare evidence which even predates plate tectonics - the residues of some of the crystals left behind as that magma ocean cooled.

"It was a combination of some new chemical analyses we did and the previously published data that flagged to us that the Isua rocks might contain traces of ancient material. The hafnium and neodymium isotopes were really tantalizing, because those isotope systems are very hard to modify - so we had to look at their chemistry in more detail," said co-author Dr Hanika Rizo, from Carleton University.

Iron isotopic systematics confirmed to Williams and the team that the Isua rocks were derived from parts of the Earth's interior that formed as a consequence of magma ocean crystallisation.

Most of this primeval rock has been mixed up by convection in the mantle, but scientists think that some isolated zones deep at the mantle-core boundary - ancient crystal graveyards - may have remained undisturbed for billions of years.

It's the relics of these crystal graveyards that Williams and her colleagues observed in the Isua rock chemistry. "Those samples with the iron fingerprint also have a tungsten anomaly - a signature of Earth's formation - which makes us think that their origin can be traced back to these primeval crystals," said Williams.

But how did these signals from the deep mantle find their way up to the surface? Their isotopic makeup shows they were not just funnelled up from melting at the core-mantle boundary. Their journey was more circuitous, involving several stages of crystallization and remelting - a kind of distillation process. The mix of ancient crystals and magma would have first migrated to the upper mantle, where it was churned up to create a 'marble cake' of rocks from different depths. Later melting of that hybrid of rocks is what produced the magma which fed this part of Greenland.

The team's findings suggest that modern hotspot volcanoes, which are thought to have formed relatively recently, may actually be influenced by ancient processes.

"The geochemical signals we report in the Greenland rocks bear similarities to rocks erupted from hotspot volcanoes like Hawaii - something we are interested in is whether they might also be tapping into the depths and accessing regions of the interior usually beyond our reach," said Dr Oliver Shorttle, who is jointly based at Cambridge's Department of Earth Sciences and Institute of Astronomy.

The team's findings came out of a project funded by Deep Volatiles, a NERC-funded 5-year research programme. They now plan to continue their quest to understand the magma ocean by widening their search for clues in ancient rocks and experimentally modelling isotopic fractionation in the lower mantle.

"We've been able to unpick what one part of our planet's interior was doing billions of years ago, but to fill in the picture further we must keep searching for more chemical clues in ancient rocks," said co-author Dr Simon Matthews from the University of Iceland.

Scientists have often been reluctant to look for chemical evidence of these ancient events. "The evidence is often altered by the course of time. But the fact we found what we did suggests that the chemistry of other ancient rocks may yield further insights into the Earth's formation and evolution - and that's immensely exciting," said Williams.

Credit: 
University of Cambridge

Association between preterm birth, psychotropic drug use in adolescence, young adulthood

What The Study Did: Researchers compared rates of psychotropic drug prescriptions during adolescence and young adulthood between individuals born preterm and at term.

Authors: Christine S. Bachmann, M.D., of the Norwegian University of Science and Technology in Trondheim, Norway, is the corresponding author.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

(doi:10.1001/jamanetworkopen.2021.1420)

Editor's Note: The article includes funding/support disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, conflict of interest and financial disclosures, and funding and support.

Credit: 
JAMA Network

New proteins 'out of nothing'

image: Regions of the protein's flexibility: not very flexible (blue), moderately flexible (green/yellow) and highly flexible (red). However, both the central alpha helix and the N-terminus (start of the protein) display stable folding in comparison with the rest of the protein

Image: 
Adam Damry

Proteins are the key component in all modern forms of life. Haemoglobin, for example, transports the oxygen in our blood; photosynthesis proteins in the leaves of plants convert sunlight into energy; and fungal enzymes help us to brew beer and bake bread. Researchers have long been examining the question of how proteins mutate or come into existence in the course of millennia. For decades, the idea that completely new proteins - and, with them, new properties - could emerge practically out of nothing seemed inconceivable, in line with what the Greek philosopher Parmenides said: "Nothing can emerge from nothing" (ex nihilo nihil fit). Working with colleagues from the USA and Australia, researchers from the University of Münster (Germany) have now reconstructed how evolution formed the structure and function of a newly emerged protein in flies. This protein is essential for male fertility. The results have been published in the journal "Nature Communications".

The background:

It had been assumed up to now that new proteins emerge from already existing proteins - by a duplication of the underlying genes and by a series of small mutations in one or both gene copies. In the past ten years, however, a new understanding of protein evolution has come about: proteins can also develop from so-called non-coding DNA (deoxyribonucleic acid) - in other words, from that part of the genetic material which does not normally produce proteins - and can subsequently develop into functional cell components. This is surprising for several reasons: for many years, it had been assumed that, in order to be functional, proteins had to take on a highly developed geometrical form (a "3D structure"). It had further been assumed that such a form could not develop from a gene emerging at random, but would require a complex combination of amino acids enabling this protein to exist in its functional form.

Despite decades of trying, researchers worldwide have not yet succeeded in constructing proteins with the desired 3D structures and functions, which means that the "code" for the formation of a functioning protein is essentially unknown. While this task remains a puzzle for scientists, nature has proven to be more adept at the formation of new proteins. A team of researchers headed by Prof. Erich Bornberg-Bauer, from the Institute of Evolution and Biodiversity at the University of Münster, discovered, by comparing the newly analysed genomes in numerous organisms, that species not only differ through duplicated protein-coding genes adapted in the course of evolution. In addition, proteins are constantly being formed de novo ("anew") - i.e. without any related precursor protein going through a selection process.

The vast majority of these de novo proteins are useless, or even slightly deleterious, as they can interfere with existing proteins in the cell. Such new proteins are quickly lost again after several generations, as organisms carrying the new gene encoding the protein have impaired survival or reproduction. However, a select few de novo proteins prove to have beneficial functions. These proteins integrate into the molecular components of cells and eventually, after millions of years of minor modifications, become indispensable. There are some important questions which many researchers wonder about in this context: What do such novel proteins look like at birth? How do they change, and which functions do they assume as the "new kids on the block"? Spearheaded by Prof. Bornberg-Bauer's group in Münster, an international team of researchers has answered these questions in much detail for "Goddard", a fruit fly protein that is essential for male fertility.

Methodology

The research proceeded on three related fronts across three continents. At the College of the Holy Cross in Massachusetts, USA, Dr. Prajal Patel and Prof. Geoff Findlay used CRISPR/Cas9 genome editing to show that male flies that do not produce Goddard are sterile, but otherwise healthy. Meanwhile, Dr. Andreas Lange and PhD student Brennen Heames of Prof. Bornberg-Bauer's group used biochemical techniques to predict the shape of the novel protein in present-day flies. They then used evolutionary methods to reconstruct the likely structure of Goddard ~50 million years ago when the protein first arose. What they found was quite a surprise: "The ancestral Goddard protein already looked very much like the ones which exist in fly species today," Erich Bornberg-Bauer explains. "Right from the beginning, Goddard contained some structural elements, so-called alpha-helices, which are believed to be essential for most proteins." To confirm these findings, the scene shifted to the Australian National University in Canberra, where Dr. Adam Damry and Prof. Colin Jackson used intensive computational simulations to verify the predicted shape of the Goddard protein. They validated the structural analysis of Dr. Lange and showed that Goddard, in spite of its young age, is already quite stable - though not quite as stable as most fly proteins that are believed to have existed for longer, perhaps hundreds of millions of years.

The results match up with several other current studies, which have shown that the genomic elements from which protein-coding genes emerge are activated frequently - tens of thousands of times in each individual. These fragments are then "sorted" through the process of evolutionary selection. The ones which are useless or harmful - the vast majority - are quickly discarded. But those which are neutral, or are slightly beneficial, can be optimized over millions of years and changed into something useful.

Credit: 
University of Münster

'Magical' fire suppressant kills zombie fires 40% faster than water alone

The researchers say this is a big step in tackling smouldering peat fires, which are the largest fires on Earth. They ignite very easily, are notoriously difficult to put out, and release up to 100 times more carbon into the atmosphere than flaming fires, contributing to climate change.

The fires, known as 'zombie fires' for their ability to hide and smoulder underground and then reanimate as new flames days or weeks after a wildfire has been extinguished, are prevalent in regions like Southeast Asia, North America, and Siberia.

They are driven by the burning of soils rich in organic content like peat, which is a large natural reservoir of carbon. Worldwide, peat fires account for millions of tonnes of carbon released into the atmosphere each year.

Firefighters currently use millions to billions of litres of water to tackle a single peat fire: the 2008 Evans Road peat fire in the USA consumed 7.5 billion litres of water, and the 2018 Lake Cobrico peat fire in Australia consumed 65 million litres.

However, when water alone is used to extinguish peat fires, it tends to create a few large channels in the soil, diverting the water away from nearby smouldering hotspots where it is most needed. This is partly why these fires can take so long to extinguish.

Now, researchers at Imperial College London have combined water with an environmentally friendly fire suppressant that is already used to help extinguish flaming wildfires, to measure its effectiveness against peat fires at different concentrations.

During laboratory experiments at Imperial's HazeLab, they found that adding the suppressant to water helped them put out peat fires nearly twice as fast as using water alone, while using only a third to a half of the usual amount of water.

Lead author Muhammad Agung Santoso of Imperial's Department of Mechanical Engineering said: "The suppressant could enable firefighters to put out peat fires much faster while using between a third to half of the amount of water. This could be critical in ending pollution-related deaths, devastation of local communities, and environmental damage caused by these fires."

The results are published in International Journal of Wildland Fire.

The suppressant, also known as a 'wetting agent', increases the penetrating properties of liquids like water by reducing their surface tension. This agent is made from plant matter and is biodegradable so it doesn't harm the environment.

The researchers mixed the wetting agent with water at three concentrations: 0% (pure water), 1% (low concentration), and 5% (high concentration). They used each concentration on a laboratory peat fire with varying rates of flow between 0.3 and 18 litres per hour.

They found that the suppressant reduced the surface tension of the liquid, which made it less likely to create large channels and more likely to flow uniformly through the soil. The low-concentration solution reduced the average fire suppression time by 39%, and the high-concentration solution reduced it by 26% but more consistently. The average volume of liquid needed for suppression was 5.7 litres per kilogram of burning peat, regardless of flow rate or suppressant concentration.
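The reported averages translate into simple planning arithmetic. As a back-of-envelope sketch built only from the figures above (the 5.7 L/kg suppression volume and the 39% and 26% time reductions; the constant and function names are ours, not the study's):

```python
# Figures reported by the Imperial HazeLab experiments: average liquid
# volume for suppression, and average suppression-time reductions at
# 1% and 5% wetting-agent concentration (0% = plain water).
SUPPRESSION_L_PER_KG = 5.7
TIME_REDUCTION = {0.00: 0.0, 0.01: 0.39, 0.05: 0.26}

def litres_needed(peat_mass_kg):
    """Average liquid volume required to suppress `peat_mass_kg` of peat."""
    return SUPPRESSION_L_PER_KG * peat_mass_kg

def suppression_time(base_time, concentration):
    """Expected suppression time relative to plain water's `base_time`."""
    return base_time * (1.0 - TIME_REDUCTION[concentration])

# e.g. a tonne of smouldering peat needs roughly 5,700 L whichever mix is
# used, but the 1% mix gets there in about 61% of the plain-water time.
tonne_volume = litres_needed(1_000)
low_conc_time = suppression_time(100.0, 0.01)
```

Note the 5.7 L/kg figure held regardless of flow rate or concentration in the experiments, so the agent's benefit shows up in time (and hence the water that would otherwise run off), not in the per-kilogram volume itself.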

They also learned that the agent acts thermally and not chemically: it encapsulates the fire to bring down the temperature and remove the 'heat' element from the fire triangle. The other two essential elements for fire are oxygen and fuel.

Senior author Professor Guillermo Rein, Head of HazeLab at Imperial's Department of Mechanical Engineering, said: "Fighting peat fires uses an incredible amount of work, time and water, and this biodegradable wetting agent could help everybody: fire brigades, communities and the planet. This magical suppressant could make it easier to put zombie fires to rest for good."

The results provide a better understanding of the suppression mechanism of peat fires and could help to improve firefighting and mitigation strategies. The researchers are now looking to replicate their findings in controlled peat fires outside the lab in real peatlands.

Credit: 
Imperial College London

Tiny bubbles making large impact on medical ultrasound imaging

image: Schematic of bubble membrane showing the influence of membrane stiffener and membrane softener in the phospholipid packing.

Image: 
Amin Jafari Sojahrood and Al C. de Leon

If you were given "ultrasound" in a word association game, "sound wave" might easily come to mind. But in recent years, a new term has surfaced: bubbles. Those ephemeral, globular shapes are proving useful in improving medical imaging, disease detection and targeted drug delivery. There's just one glitch: bubbles fizzle out soon after injection into the bloodstream.

Now, after 10 years' work, a multidisciplinary research team has built a better bubble. Their new formulations have resulted in nanoscale bubbles with customizable outer shells -- so small and durable that they can travel to and penetrate some of the most inaccessible areas in the human body.

The work is a collaboration between Al C. de Leon and co-authors, under the supervision of Agata A. Exner of the Department of Radiology at the Case Western Reserve University School of Medicine in Cleveland and Amin Jafari Sojahrood under the supervision of Michael Kolios of the Department of Physics at Ryerson University and the Institute for Biomedical Engineering, Science and Technology (iBEST) in Toronto. Their results were recently published in ACS Nano, in a paper entitled "Towards Precisely Controllable Acoustic Response of Shell-Stabilized Nanobubbles: High-Yield and Narrow-Dispersity".

"The advancement can eventually lead to clearer ultrasound images," says Kolios. "But more broadly, our joint theoretical and experimental findings provide a fundamental framework that will help establish nanobubbles for applications in biomedical imaging -- and potentially into other fields, from material science to surface cleaning and mixing."

Bubbles in Ultrasound: Shrinking Down to Nanoscale

Ultrasound is the second most used medical imaging modality in the world. As with other modalities, a patient may swallow or be injected with an agent to create image contrast, thereby making bodily structures or fluids easier to see.

With ultrasound, bubbles serve as the contrast agent. These gas-filled globes are enclosed by a phospholipid shell. Contrast is generated when ultrasound waves interact with the bubbles, causing them to oscillate and reflect soundwaves that differ significantly from waves reflected by body tissues. Bubbles are used routinely in patients to improve image quality and enhance the detection of diseases. But due to their size (about the same as red blood cells), microbubbles are confined to circulating in blood vessels, and cannot reach diseased tissue outside.
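The size-resonance link behind this contrast mechanism can be sketched with the classical Minnaert formula for an uncoated gas bubble in water, f0 = sqrt(3*gamma*p0/rho) / (2*pi*R0). This textbook formula is our illustrative simplification: it ignores the phospholipid shell, whose stiffness (the very property the paper tunes) shifts the resonance upward. It still shows why shrinking a bubble tenfold raises its resonance frequency tenfold:

```python
import math

def minnaert_frequency(radius_m, p0=101_325.0, rho=1_000.0, gamma=1.4):
    """Resonance frequency (Hz) of an uncoated gas bubble in water:
    f0 = sqrt(3*gamma*p0/rho) / (2*pi*R0), with ambient pressure p0 (Pa),
    liquid density rho (kg/m^3) and polytropic exponent gamma for air."""
    return math.sqrt(3.0 * gamma * p0 / rho) / (2.0 * math.pi * radius_m)

# A red-blood-cell-scale microbubble resonates in the clinical MHz range,
# while a 300 nm diameter nanobubble sits an order of magnitude higher.
f_micro = minnaert_frequency(1.5e-6)  # 3 um diameter microbubble
f_nano = minnaert_frequency(1.5e-7)   # 300 nm diameter nanobubble
```

Since f0 scales as 1/R0, nanobubbles push the bare-bubble resonance well above standard clinical imaging frequencies, which is one reason modelling the shell's acoustic response, as the CWRU-Ryerson team does, matters for visualizing them.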

"Our research team at CWRU now engineered stable, long-circulating bubbles at the nanoscale -- measuring 100-500 nm in diameter," says Exner. "They're so that they can even squeeze through leaky vasculature of cancerous tumours."

With such capabilities, nanobubbles are well-suited for finer applications such as molecular imaging and targeted drug delivery. Working together with the Ryerson team, the researchers have developed a clearer theoretical understanding of how nanobubbles respond to ultrasound, and of which imaging techniques best visualize the bubbles in the body.

Controlling Nanobubble Behaviour

Size issues aside, bubbles are also complex oscillators, exhibiting behaviours that are difficult to control. In the current work, the research team also devised a way to precisely control and predict how bubbles respond acoustically to ultrasound.

"By introducing membrane additives to our bubble formulations, we demonstrated the ability to control how stiff (or how flexible) the bubble shells become," says de Leon. "Bubble formulations can then be customized to match the particular needs of different applications."

For example, stiffer, stable bubble designs may last long enough to reach body tissues that are difficult to access. Softer bubbles may produce clearer ultrasound images of certain types of body tissue. Bubble oscillation could even be tweaked to increase cell permeability, potentially increasing drug delivery to diseased cells, which may in turn decrease the dosage required.
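The link between shell stiffness and acoustic response can be illustrated with a common linearized correction used in shelled-bubble models (Marmottant-type), which adds a shell-elasticity term 4χ/R to the gas stiffness in the Minnaert formula. This is a sketch under stated assumptions: the shell elasticity χ and the fluid and gas properties below are illustrative values, not parameters from the paper:

```python
import math

def shelled_resonance(radius_m, chi=0.5, gamma=1.4, p0=101_325.0, rho=998.0):
    """Approximate resonance (Hz) of a shell-stabilized bubble.

    chi is the shell elasticity in N/m; chi=0 recovers the free
    (Minnaert) bubble. Surface tension and damping are neglected.
    """
    stiffness = 3.0 * gamma * p0 + 4.0 * chi / radius_m  # gas + shell terms, Pa
    return math.sqrt(stiffness / rho) / (2.0 * math.pi * radius_m)

r = 100e-9  # 100-nm-radius nanobubble
soft = shelled_resonance(r, chi=0.0)   # flexible (free-bubble) limit
stiff = shelled_resonance(r, chi=0.5)  # stiffer shell
print(f"soft shell: {soft / 1e6:.0f} MHz, stiff shell: {stiff / 1e6:.0f} MHz")
```

At the nanoscale the 4χ/R term dominates the gas term, so even modest changes in shell stiffness shift the resonance dramatically, which is why membrane additives give such a direct handle on acoustic behaviour.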

Patients, the Ultimate Beneficiaries

The team has successfully demonstrated the ability to customize bubble shell properties and their interaction with sound waves, and the work has exciting implications for nanobubbles' potential -- in both diagnostic and therapeutic applications.

Sojahrood sees many potential benefits, for biomedicine and for patients in the clinic. "Compared to other imaging or treatment options, such as surgery with scalpels, bulky MRI machinery, or the risk of radioactive iodine in CT scans, ultrasound could be a lot faster, cheaper, more effective and less invasive," he says. "By advancing ultrasound through nanobubbles, we could eventually make diagnosis and treatment more available and more effective, even in more remote areas of the world, ultimately improving patient outcomes and saving more lives."

Credit: 
Ryerson University - Faculty of Science