Tech

Discovery of life in solid rock deep beneath sea may inspire new search for life on Mars

image: Associate Professor Yohey Suzuki at the University of Tokyo led the effort to develop a new way to prepare rock samples to search for life deep beneath the seafloor. This is an example of one of the thin slices of rock he prepared using special epoxy to ensure the rock held its shape while it was cut.

Image: 
Caitlin Devor, University of Tokyo, CC BY 4.0

Newly discovered single-celled creatures living deep beneath the seafloor have given researchers clues about how they might find life on Mars. These bacteria were discovered living in tiny cracks inside volcanic rocks after researchers persisted over a decade of trial and error to find a new way to examine the rocks.

Researchers estimate that the rock cracks are home to a community of bacteria as dense as that of the human gut, about 10 billion bacterial cells per cubic centimeter (0.06 cubic inch). In contrast, the average density of bacteria living in mud sediment on the seafloor is estimated to be 100 cells per cubic centimeter.

"I am now almost over-expecting that I can find life on Mars. If not, it must be that life relies on some other process that Mars does not have, like plate tectonics," said Associate Professor Yohey Suzuki from the University of Tokyo, referring to the movement of land masses around Earth most notable for causing earthquakes. Suzuki is first author of the research paper announcing the discovery, published in Communications Biology.

Magic of clay minerals

"I thought it was a dream, seeing such rich microbial life in rocks," said Suzuki, recalling the first time he saw bacteria inside the undersea rock samples.

Undersea volcanoes spew out lava at approximately 1,200 degrees Celsius (2,200 degrees Fahrenheit), which eventually cracks as it cools down and becomes rock. The cracks are narrow, often less than 1 millimeter (0.04 inch) across. Over millions of years, those cracks fill up with clay minerals, the same clay used to make pottery. Somehow, bacteria find their way into those cracks and multiply.

"These cracks are a very friendly place for life. Clay minerals are like a magic material on Earth; if you can find clay minerals, you can almost always find microbes living in them," explained Suzuki.

The microbes identified in the cracks are aerobic bacteria, meaning they use a process similar to how human cells make energy, relying on oxygen and organic nutrients.

"Honestly, it was a very unexpected discovery. I was very lucky, because I almost gave up," said Suzuki.

Cruise for deep ocean samples

Suzuki and his colleagues discovered the bacteria in rock samples that he helped collect in late 2010 during the Integrated Ocean Drilling Program (IODP). IODP Expedition 329 took a team of researchers from the tropical island of Tahiti in the middle of the Pacific Ocean to Auckland, New Zealand. The research ship anchored above three locations along the route across the South Pacific Gyre and used a metal tube 5.7 kilometers long to reach the ocean floor. Then, a drill cut down 125 meters below the seafloor and pulled out core samples, each about 6.2 centimeters across. The first 75 meters beneath the seafloor were mud sediment and then researchers collected another 40 meters of solid rock.

Depending on the location, the rock samples were estimated to be 13.5 million, 33.5 million and 104 million years old. The collection sites were not near any hydrothermal vents or sub-seafloor water channels, so researchers are confident the bacteria arrived in the cracks independently rather than being forced in by a current. The rock core samples were also sterilized to prevent surface contamination using an artificial seawater wash and a quick burn, a process Suzuki compares to making aburi (flame-seared) sushi.

At that time, the standard way to find bacteria in rock samples was to chip away the outer layer of the rock, then grind the center of the rock into a powder and count cells out of that crushed rock.

"I was making loud noises with my hammer and chisel, breaking open rocks while everyone else was working quietly with their mud," he recalled.

How to slice a rock

Over the years, continuing to hope that bacteria might be present but unable to find any, Suzuki decided he needed a new way to look specifically at the cracks running through the rocks. He found inspiration in the way pathologists prepare ultrathin slices of body tissue samples to diagnose disease. Suzuki decided to coat the rocks in a special epoxy to support their natural shape so that they wouldn't crumble when he sliced off thin layers.

These thin sheets of solid rock were then washed with dye that stains DNA and placed under a microscope.

The bacteria appeared as glowing green spheres tightly packed into tunnels that glow orange, surrounded by black rock. That orange glow comes from clay mineral deposits, the "magic material" giving bacteria an attractive place to live.

Whole genome DNA analysis identified the different species of bacteria that lived in the cracks. Samples from different locations had similar, but not identical, species of bacteria. Rocks at different locations are different ages, which may affect what minerals have had time to accumulate and therefore what bacteria are most common in the cracks.

Suzuki and his colleagues speculate that the clay mineral-filled cracks concentrate the nutrients that the bacteria use as fuel. This might explain why the density of bacteria in the rock cracks is eight orders of magnitude greater than the density of bacteria living freely in mud sediment where seawater dilutes the nutrients.
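The "eight orders of magnitude" figure follows directly from the two densities reported earlier; a quick arithmetic check:

```python
import math

# Densities reported in the article (cells per cubic centimeter).
rock_cracks = 10_000_000_000  # ~10 billion cells/cm^3 in the clay-filled cracks
mud_sediment = 100            # ~100 cells/cm^3 in seafloor mud sediment

# Ratio between the two habitats, expressed in orders of magnitude.
orders = math.log10(rock_cracks / mud_sediment)
print(int(orders))  # 8
```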

From the ocean floor to Mars

The clay minerals filling cracks in deep ocean rocks are likely similar to the minerals that may be in rocks now on the surface of Mars.

"Minerals are like a fingerprint for what conditions were present when the clay formed. Neutral to slightly alkaline levels, low temperature, moderate salinity, iron-rich environment, basalt rock -- all of these conditions are shared between the deep ocean and the surface of Mars," said Suzuki.

Suzuki's research team is beginning a collaboration with NASA's Johnson Space Center to design a plan to examine rocks collected from the Martian surface by rovers. Ideas include keeping the samples locked in a titanium tube and using a CT (computed tomography) scanner, a type of 3D X-ray, to look for life inside clay mineral-filled cracks.

"This discovery of life where no one expected it in solid rock below the seafloor may be changing the game for the search for life in space," said Suzuki.

Credit: 
University of Tokyo

Oysters and clams can be farmed together

image: The four bivalve species studied at Rutgers' New Jersey Aquaculture Innovation Center include the Eastern oyster, Atlantic surfclam, hard clam and softshell clam (left to right).

Image: 
Michael Acquafredda/Rutgers University

Eastern oysters and three species of clams can be farmed together and flourish, potentially boosting profits of shellfish growers, according to a Rutgers University-New Brunswick study.

Though diverse groups of species often outperform single-species groups, most bivalve farms in the United States and around the world grow their crops as monocultures, notes the study in the journal Marine Ecology Progress Series.

"Farming multiple species together can sustain the economic viability of farm operations and increase profitability by allowing shellfish growers to more easily navigate market forces if the price of each individual crop fluctuates," said lead author Michael P. Acquafredda, a doctoral student based at Rutgers' Haskin Shellfish Research Laboratory in Port Norris, New Jersey.

Farming mollusks such as clams, oysters and scallops contributes billions of dollars annually to the world's economy. In the United States, more than 47 million pounds of farm-raised clam, oyster and mussel meat worth more than $340 million were harvested in 2016, the study says.

The study, which took place in a laboratory setting at Rutgers' New Jersey Aquaculture Innovation Center in North Cape May, New Jersey, tested the feasibility of farming multiple bivalve species in close proximity to one another.

Mimicking farm conditions, the study examined the filtration rate, growth and survival of four economically and ecologically important bivalve species native to the northeastern United States. They are the Eastern oyster (Crassostrea virginica); Atlantic surfclam (Spisula solidissima); hard clam (Mercenaria mercenaria); and softshell clam (Mya arenaria).

When supplied with seawater containing naturally occurring algal particles, the groups that contained all four species removed significantly more particles than most monocultures. This suggests that each species prefers to filter a particular set of algal food particles.

"This shows that, to some degree, these bivalve species complement each other," said co-author Daphne Munroe, an associate professor in the Department of Marine and Coastal Sciences in the School of Environmental and Biological Sciences. She is based at the Haskin Shellfish Research Laboratory.

The scientists also found virtually no differences in growth or survival for any of the four species, suggesting that when food is not an issue, these bivalves could be raised together without outcompeting one another.

"This study illustrates the benefits of diversifying crops on shellfish farms," Acquafredda said. "Crop diversification gives aquaculture farmers protection from any individual crop failure, whether it's due to disease, predation or fluctuating environmental conditions. In future studies, the feasibility of bivalve polyculture should be tested on commercial bivalve farms."

Credit: 
Rutgers University

Device that tracks location of nurses re-purposed to record patient mobility

By repurposing badges originally designed to locate nurses and other hospital staff, Johns Hopkins Medicine scientists say they can precisely monitor how hospital patients walk outside of their rooms, a well-known indicator of, and contributor to, recovery after surgery.

A team of engineers and clinicians at The Johns Hopkins Hospital repurposed the badges to study their value in tracking "ambulation," or mobility, among inpatients who had undergone cardiac surgery.

The study, which began nearly four years ago and is described in a report published March 17 in JAMA Network Open, was inspired by Johns Hopkins University School of Medicine Vice Dean for Research Antony Rosen, M.D., who also directs the institution's precision medicine effort, inHealth. Rosen asked his colleague, Johns Hopkins University engineer Peter Searson, Ph.D., to help find ways to improve the assessment of how well patients are functioning.

"I was sold on Antony's vision to improve patient care by finding ways to make high value measurements of patients' functional status," says Searson, the Joseph R. and Lynne C. Reynolds Professor of Engineering and a professor of materials science and engineering at The Johns Hopkins University.

After collecting information about how clinicians currently assess functional status day to day, Searson joined efforts with anesthesiologist and clinical researcher Charles Brown, M.D., who was conducting an ongoing study funded by inHealth, which focused on measuring the mobility of patients after cardiac surgery.

"Ambulation is important for hospitalized patients; in particular, for patients who have surgery and those who are older," says Brown, an associate professor of anesthesiology and critical care medicine at the Johns Hopkins University School of Medicine, whose research focuses on improving perioperative care for older adults. "More ambulation immediately after surgery probably helps preserve patients' cognitive and physical function, and is linked to spending less time in the hospital."

Most of the nursing staff in the hospital wear small badges on their uniforms as location and paging systems. The badges send beams of light, much like the ones used for TV remotes, to sensors in the ceilings of hospital rooms and corridors. The research team's idea was to adapt the system to assess how far and how fast patients walked after surgery.
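As a rough sketch of how timestamped badge sightings could be turned into distance and speed estimates - the data format and numbers below are hypothetical, not the actual Johns Hopkins system:

```python
from dataclasses import dataclass

# Hypothetical data format: each badge "sighting" pairs a timestamp (seconds)
# with a corridor position (meters) inferred from which ceiling sensor
# detected the badge's infrared signal.
@dataclass
class Sighting:
    t: float    # seconds since the walk started
    pos: float  # position along the corridor, in meters

def walk_stats(sightings):
    """Total distance covered and average speed over one walk."""
    dist = sum(abs(b.pos - a.pos) for a, b in zip(sightings, sightings[1:]))
    duration = sightings[-1].t - sightings[0].t
    return dist, (dist / duration if duration > 0 else 0.0)

# A patient walks 50 m down the corridor and 30 m back over two minutes.
walk = [Sighting(0, 0.0), Sighting(30, 20.0), Sighting(75, 50.0), Sighting(120, 20.0)]
dist, speed = walk_stats(walk)
print(dist, round(speed, 2))  # 80.0 0.67
```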

Searson and In cheol Jeong, Ph.D., replicated the tracker system in their laboratory to see how well the devices could record the timing and speed of patients' movements.

The team ruled out other movement tracking devices that rely on GPS or accelerometers because they aren't sensitive enough to detect whether a patient is in or out of their room, and the devices may not register the typical shuffling gait of a patient recovering from surgery.

"The system collects and records real-time information about a patient's mobility," says Jeong, a former trainee in Searson's lab and now assistant professor at the Icahn School of Medicine at Mt. Sinai Medical Center in New York.

For the study, the team obtained consent from 100 patients, mostly men, whose average age was 63, and attached trackers to their hospital gowns. Researchers collected information on how far and how fast patients walked in the unit's corridors every time they left their room.

Generally, patients are encouraged by hospital staff to walk outside of their rooms three times a day. Data collected by the Johns Hopkins team showed that approximately 25 percent of the 100 patients achieved that goal.

The Johns Hopkins team also found that the tracked mobility records among patients were more than 90 percent accurate in predicting the patients' 30-day readmission rate, discharge to home or rehabilitation center and their length of stay in the hospital.

Brown cautioned that "there are many aspects of measuring and establishing ambulation metrics that aren't clear. Maybe the goal of three times a day needs to be refined or adjusted for baseline function and speed," he explained. The researchers also said there may have been ambulation that wasn't captured by the device.

But the study results, he said, suggest the badges would be valuable in giving feedback to health care workers and patients, encouraging ambulation and helping clinicians identify patients who could benefit from earlier discharge or more intensive rehabilitation.

The researchers have filed patents for the technology developed for the study. Funding for the study was provided by Johns Hopkins inHealth.

Credit: 
Johns Hopkins Medicine

Researchers unveil the universal properties of active turbulence

image: (Left) Disordered pattern of eddies of a characteristic size. The color code indicates the local orientation of the liquid crystal. (Right) Large-scale circulating flows at scales much larger than the characteristic size of the underlying pattern of vortices.

Image: 
R. Alert et al.

Turbulent flows are chaotic, yet they feature universal statistical properties. In recent years, seemingly turbulent flows have been discovered in active fluids such as bacterial suspensions, epithelial cell monolayers, and mixtures of biopolymers and molecular motors. In a new study published in Nature Physics, researchers from the University of Barcelona, Princeton University and Collège de France have shown that the chaotic flows in active nematic fluids are described by distinct universal scaling laws.

Turbulence is ubiquitous in nature, from plasma flows in stars to large-scale atmospheric and oceanic flows on Earth, through the air flows caused by an airplane. Turbulent flows are chaotic, constantly creating eddies that appear and break up into smaller swirls. However, when this complex chaotic behavior is considered in a statistical sense, turbulence follows universal scaling laws. This means that the statistical properties of turbulence are independent of both the way in which turbulent flows are generated and the properties of the specific fluid under consideration, such as its viscosity and density.

In the study now published in Nature Physics, researchers have revisited this notion of universality in the context of active fluids. In active turbulence, flows and eddies are not generated by the action of some external agent (such as temperature gradients in the atmosphere) but rather by the active fluid itself. The active nature of these fluids relies on their ability to internally generate forces, for example due to the swimming of bacteria or the action of molecular motors on biopolymers.

"When these active forces are sufficiently strong, the fluid starts to spontaneously flow, powered by the energy injected by the active processes," explains Ricard Alert, postdoctoral fellow at Princeton University. When active forces are strong, these spontaneous flows become a chaotic mix of self-generated eddies, which is what we call active turbulence.

The authors focused on a specific type of active fluid: two-dimensional active nematic liquid crystals, which describe experimental systems such as cell monolayers and suspensions of biopolymers and molecular motors. Large-scale simulations showed that the active flows organize into a disordered pattern of eddies of a characteristic size (Fig. 1, Left). The researchers then studied the flows at much larger scales than the characteristic size of the eddies (Fig. 1, Right). They found that the statistical properties of these large-scale flows follow a distinct scaling law.

"We showed that this scaling law is universal, independent of the specific properties of the active fluid," points out Professor Jaume Casademunt from the Institute of Complex Systems (UBICS) of the University of Barcelona. This scaling law is the active nematic equivalent of Andrei Kolmogorov's 1941 scaling law for classical turbulence, but with a different exponent that results from the combination of inertia-less viscous flows and the internal, self-organized forcing of active fluids.
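For reference, Kolmogorov's 1941 law for classical three-dimensional turbulence predicts a kinetic energy spectrum of the form

```latex
E(k) \sim C\, \varepsilon^{2/3}\, k^{-5/3},
```

where E(k) is the kinetic energy per unit wavenumber k, ε is the mean rate of energy dissipation per unit mass, and C is a universal constant. The scaling law found for active nematics takes the same power-law form, E(k) ~ k^(-α), but with a different exponent α set by inertia-less viscous flow and internal forcing; the specific value is derived in the paper.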

Another striking result of this research is that all the energy injected by the active forces at a given scale is dissipated by viscous effects at that same scale. As a consequence, in stark contrast to classical turbulence, no energy is left to be transferred to other scales. "Both in simulations and analytically, we proved that a minimal active nematic fluid self-organizes in such a way that the active energy injection exactly balances energy dissipation at each scale," concludes Jean-François Joanny, from the Collège de France.

Credit: 
University of Barcelona

Representation of driving behavior as a statistical model

image: Impacts of informative and incentive ISA for drivers with higher usual speeding tendency.

Image: 
COPYRIGHT (C) TOYOHASHI UNIVERSITY OF TECHNOLOGY. ALL RIGHTS RESERVED.

Overview:

A joint research team from Toyohashi University of Technology, Department of Architecture and Civil Engineering, Toyota Transportation Research Institute, and Daido University has established a method to represent, in a single statistical model, driving behaviors and their changes that differ among drivers, taking into account the effects of various external factors such as road structure. This method was applied to measure the effectiveness of Intelligent Speed Adaptation (ISA), which curbs excessive speeding. The research team found that, depending on the style of ISA, it is effective in some cases for drivers with a strong tendency toward frequent excessive speeding, and in other cases only for elderly drivers. This method can be applied not only to ISA but also to measuring the effectiveness of other traffic safety management technologies that encourage drivers to change their driving behaviors.

Details:

Traffic safety has been recognized as a global issue to be solved, as reflected in Target 3.6 of the Sustainable Development Goals, which aims to halve the number of deaths and injuries from road traffic accidents. ISA is a traffic safety management technology that recognizes the speed limit of the road section based on the vehicle's current position and prevents drivers from speeding by providing speed information, over-speed warnings, compulsory speed control, speed compliance incentives, and so on. There has been active research on ISA since the 2000s, mainly in Europe. Most previous studies have measured its restrictive effect on excessive speeding by comparing the driving behavior of subjects before and after the introduction of an ISA, using driving simulator experiments and field studies for various types of ISA. However, driving behavior varies greatly from driver to driver, and in field experiments the structural environment of the road differs for each driver as well. It is therefore important to account for the effects of such factors, both individual and environmental, in order to spread the technology appropriately.

To resolve this issue, the research team has developed a method to accurately measure the effect of ISA by using a single statistical model to simultaneously estimate each driver's unique driving behavior, the effect of various external factors such as road structure, and the effect of ISA on excessive speeding.

"The foremost challenge to be solved was how to represent and demonstrate in the model the hypothesis that 'the difference in a driver's usual tendency to exceed the speed limit also affects the effect of ISA.' This method solves the challenge by estimating the model while taking into account the correlation between the parameter that defines the driver's speeding tendency and the parameter that defines the effect of the ISA. Recent developments in the field of data science, such as Bayesian statistics and improvements in computer performance, have made it possible to apply this method, with slightly more complex models, to real-world problems," explains Associate Professor Kojiro Matsuo, who leads the research team.
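The idea of correlated driver-level parameters can be illustrated with a minimal simulation - this is a loose sketch with made-up numbers, not the team's actual model, data, or estimation procedure:

```python
import numpy as np

# Illustrative sketch only -- NOT the study's model or data. The idea: each
# driver i has two correlated random parameters, a baseline speeding tendency
# a_i and an ISA effect b_i, drawn from a bivariate normal distribution.
rng = np.random.default_rng(0)

n_drivers = 2000
mean = np.array([0.3, -0.5])   # hypothetical population means
sd = np.array([0.2, 0.15])     # hypothetical standard deviations
rho = -0.6                     # hypothetical correlation: habitual speeders
                               # respond differently to ISA
cov = np.array([[sd[0] ** 2, rho * sd[0] * sd[1]],
                [rho * sd[0] * sd[1], sd[1] ** 2]])

params = rng.multivariate_normal(mean, cov, size=n_drivers)
a, b = params[:, 0], params[:, 1]  # per-driver tendency and ISA effect

# Estimating this correlation jointly with the other effects is what lets a
# single statistical model link speeding tendency to ISA effectiveness.
est_rho = float(np.corrcoef(a, b)[0, 1])
print(round(est_rho, 2))
```

With enough drivers, the sample correlation recovers the assumed coupling between the two parameters, which is the quantity the model exploits.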

Development background:

Associate Professor Matsuo says, "ISA research began as a graduation research topic by a student under my guidance. The student performed an in-depth analysis of the data obtained from an ISA field experiment conducted with our collaborator, Toyota Transportation Research Institute. As a result, it was found that in some cases, there is an association between the subjects' regular speeding tendency and the effectiveness of ISA, but in other cases, there is no association between the two. Therefore, we started to consider measuring the effect using a statistical model instead of a simple comparative analysis before and after the introduction of ISA. As a result, we were able to represent the seemingly disparate driving behavior of different drivers in a single model and to find a law in it. It was extremely interesting work."

Future outlook:

The research team believes that this method can be applied not only to measure the reduction of drivers' speeding behavior with ISA, but also to the effectiveness of traffic safety management technologies that help improve various dangerous driving behaviors, such as running a red light, not stopping at an intersection without traffic lights, and obstructing pedestrians from crossing at a crosswalk. We are hoping to contribute to the reduction of traffic accidents worldwide by further developing traffic safety management technologies and measuring their effectiveness.

Credit: 
Toyohashi University of Technology (TUT)

Robo-turtles in fish farms reduce fish stress

image: A new field of research is looking at the interaction between fish and robots. Results show that the fish are far more affected by their environment than we have been aware of.

Image: 
Maarja Kruusmaa

A sea cage can hold up to 200,000 farmed salmon. If the cage sustains damage, such as a hole in the nets, the fish could swim out through the opening and escape in short order.

Clearly, the aquaculture industry wants to avoid this scenario. Not only do escapes lead to large losses for the industry, but no one wants farm-raised salmon to mix and interbreed with wild populations.

Keeping an eye on what is going on inside the cages is critical for being able to respond and repair any damage promptly.

Monitoring life in the cages is important for other reasons as well, such as ensuring good fish welfare: What is the health condition of the fish? How serious is the salmon lice problem? Do the cages need to be cleaned?

Human divers and underwater vehicles controlled by operators on land are commonly used to check the conditions in sea cages. Both types of intruders can disrupt and stress the fish.

These methods also limit the frequency of inspections.

Robotics and biology researchers have been trying to find out which monitoring methods disturb fish least. In tests, a robotic turtle that swims around the cage to film the equipment and fish has proven to do the inspection job better and more gently.

The experiments show that the fish are only negligibly scared or stressed by the robotic turtle. They swim calmly and fairly close to the turtle, whereas they keep away from the intruders in experiments with divers and thruster-driven underwater robots.

"The overall purpose of the experiments wasn't just to test the turtle robot, but also to investigate what characteristics robots being used in the aquaculture industry should have," says Maarja Kruusmaa. She is a professor at the Norwegian University of Science and Technology's (NTNU) Department of Engineering Cybernetics and at Tallinn University of Technology.

"We've found that the most crucial characteristics of the surveillance robot are its size and speed, whereas colour and motor noise hardly matter at all," she said.

The turtle robot's small size and slow movements are the characteristics that make it less disturbing to the fish. The fact that it resembles an organism that lives in the ocean is less important.

"The conclusion turned out to be the opposite of our expectations. The fact that the robot looks like a marine animal doesn't seem to play any role at all. And that's actually good news - it means we don't have to build the robots to be fish- or turtle-like. That will make it cheaper to develop and use robots in this new field of application to monitor marine organisms," Kruusmaa says.

The research indicates which factors are important when developing robots for the fish farming industry or for monitoring fish in their natural setting.

Kruusmaa and Jo Arve Alfredsen, an associate professor in the Department of Engineering Cybernetics at NTNU, published an article about their findings in Royal Society Open Science. Kruusmaa is the first author.

Kruusmaa and Alfredsen are both employed by NTNU AMOS - the Centre for Autonomous Marine Operations and Systems - whose focus areas include developing new types of underwater vehicles and new offshore monitoring methods.

Robots like the robotic turtle can provide fish breeders with online updates and monitoring of life in the sea cage. The turtle can also be connected to various measuring instruments and sensors.

Using robotic technology instead of divers for surveillance allows monitoring to continue without interruption. This continuity can contribute to quicker responses, greater predictability, better fish welfare and lower mortality.

The researchers carried out the practical experiments in SINTEF Ocean's full-scale aquaculture laboratory ACE, operated by SalMar as part of the EU project AQUAEXCEL2020.

SINTEF, NTNU and Tallinn University of Technology are collaborating on this project.

The turtle robot, named U-CAT, was developed at Tallinn University of Technology in Estonia and was originally designed for underwater archaeology applications. The idea was to use it to investigate shipwrecks on the sea floor, so it was designed as a small and very manoeuvrable robot.

Alfredsen discovered that the robot could be used in aquaculture because it had precisely these properties.

The experiments in the sea cages at SalMar have shown that this robotic technology can also benefit the aquaculture industry.

Credit: 
Norwegian University of Science and Technology

The Arctic may influence Eurasian extreme weather events in just two to three weeks

image: The study is featured on the cover of the latest issue of Advances in Atmospheric Sciences (AAS). The figure in the top right shows Arctic sea-ice reduction, representing an Arctic anomaly during the period studied. Meanwhile, extreme events occurred in Eurasia: in January 2019, strong snowstorms struck Europe while southern China experienced a rare, long-lasting rainy spell, shown in the top left and bottom of the cover, respectively.

Image: 
AAS

Previous research studies have revealed how rising temperatures and melting ice in the Arctic may impact the rest of Earth's climate over seasons, years and even longer. Now, two researchers from Fudan University in Shanghai, China, are making the argument that the effects may actually be felt in a matter of weeks, but more robust, observational-based analysis is needed to fully understand how quickly Arctic events impact the rest of Earth.

They published their analysis of current studies and future plans on March 30 in the peer-reviewed journal Advances in Atmospheric Sciences.

"Many investigations have been conducted to reveal the influences of the Arctic on Eurasian extreme weather events from the perspective of climatological statistics," said Guokun Dai, paper co-author and postdoctoral researcher in the Department of Atmospheric and Oceanic Sciences at Fudan University. "We think it's now important to investigate the relationship using case studies at weather time scales, due to the sensitivity and nonlinearity of the response of midlatitude atmospheric circulation to Arctic conditions."

Dai noted that the mechanism for extreme event formation may vary depending on the Arctic conditions and the eventual weather event, pointing to the need for further investigation to understand the cause and effect fully. The researchers are specifically focusing on weather events in Eurasia, the continental land area encompassing all of Asia and Europe and the largest land mass on Earth. Such extreme weather events could include record-breaking temperatures, massive snowfall and other unusual, though increasingly frequent, occurrences.

Mu Mu, co-author and professor in the Department of Atmospheric and Oceanic Sciences at Fudan University and in the Institute of Atmospheric Physics, said their work is specifically focused on improving how extreme events are forecasted with a foundation of accurate initial and boundary conditions.

"Data based on the Arctic have large uncertainties since there are few observations there, and these uncertainties could have a great impact on numerical weather predictions, especially for extreme weather events," Mu said. "We are going to investigate and work to understand what kind of Arctic sea ice uncertainty and Arctic atmospheric uncertainty would have large influences on Eurasian extreme weather event predictions."

Mu and Dai are now identifying the more sensitive connections between the Arctic's ocean-ice-air systems to extreme winter weather events in Eurasia, with the goal of targeting specific observations. They will develop simulation experiments to finetune a forecast model using shifts in the Arctic systems to predict events likely to occur in Eurasia within two weeks of the initial shift.

"Our ultimate goal is to provide the scientific support for conducting the Arctic targeted observations and improving the forecast skill of the Eurasian winter extreme weather events," Mu said.

Credit: 
Institute of Atmospheric Physics, Chinese Academy of Sciences

Decrypting cryptocurrencies

image: Cryptocurrencies are becoming a serious and full-fledged financial instrument. (Source: WorldSpectrum via Pixabay)

Image: 
Source: WorldSpectrum via Pixabay

Cryptocurrencies have been treated as a financial terra incognita: they enjoy growing interest but also raise concerns because of their purely virtual nature. A statistical analysis using correlation matrices of the hundred most-traded virtual currencies shows that over the last two years the cryptocurrency market has become less and less distinguishable from the mature global currency market (Forex), while also growing independent of it. This means cryptocurrencies can be considered a serious and full-fledged financial instrument.

The concept of cryptocurrency has been familiar to the general public for some time. This financial instrument has both enthusiastic supporters and implacable opponents. What does the term mean? Simply put, a cryptocurrency is a digital or virtual means of payment that exists only in a computer system and therefore has no physical equivalent in the form of banknotes or coins. More technically, it is a type of decentralized register, maintained by independent devices, based on blockchain technology, using cryptographic solutions and recording asset holdings in contractual units. All transactions carried out in the world of cryptocurrencies are anonymous, yet each of them is publicly visible.

Cryptocurrencies emerged relatively recently. The first of them, Bitcoin, was proposed in 2008 by a person or group of people using the pseudonym Satoshi Nakamoto, at the height of the global financial crisis. According to its creators, Bitcoin was to provide a tool for transactions over the Internet without the participation of a monetary authority, such as the central bank behind a standard currency. The first transaction using bitcoins, the purchase of two pizzas for what is now a staggering 10,000 bitcoins, was made in 2010. In the same year, the first Bitcoin exchange began operating. Since then, the cryptocurrency market has developed spectacularly, reaching a peak capitalization of 800 billion USD. Currently, about 5,200 virtual currencies, most notably Bitcoin (BTC), Ethereum (ETH) and Ripple (XRP), are traded on various exchanges.

The cryptocurrency market is unique in terms of research opportunities because of its spectacular development over a relatively short period and its almost unlimited data availability, which allows statistical analyses to be carried out at various stages of the market's development and its evolution and trends to be tracked. Such studies are conducted by a team of scientists led by Professor Stanislaw Drozdz from the Institute of Nuclear Physics of the Polish Academy of Sciences in Krakow and the Cracow University of Technology. The group decided to tackle the problem of the existence of dependencies and their evolution within a basket of 100 virtual currencies representing about 95% of the capitalization of the entire cryptocurrency market, covering 1,278 days from 1 October 2015 to 31 March 2019.

In their studies, building on previous analyses, the researchers used the correlation matrix formalism derived from statistical physics. In general, calculations with such matrices make it possible to determine whether a particular pattern exists in a data set. A perfect two-dimensional correlation is exemplified by a straight line, while its complete absence corresponds to points scattered randomly on the plane. The determinant of the correlation matrix is a measure of collinearity, i.e., of the degree of linear dependence among the variables representing the system. The closer it is to 1, the lower the degree of interdependence of these variables; the closer to 0, the stronger the correlation. The research focused on the distribution of the matrix elements and on deviations of the eigenvalue distribution of the correlation matrix from the so-called Wishart distribution for random matrices, which corresponds to a complete lack of correlations.
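To make these correlation-matrix diagnostics concrete, here is a minimal, self-contained sketch (not the team's actual pipeline; the asset count, sample length and coupling strength are illustrative assumptions) showing how the determinant and the largest eigenvalue separate an uncorrelated "fictitious currency" background from a strongly coupled market:

```python
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_days = 10, 1278  # 1,278 days, as in the studied period

# Uncorrelated returns (the "fictitious currency" reference): the
# determinant of the correlation matrix stays close to 1.
random_returns = rng.normal(size=(n_assets, n_days))
det_rand = np.linalg.det(np.corrcoef(random_returns))

# Strongly coupled market: every asset shares a common driving factor,
# so the determinant collapses toward 0 and one dominant ("market")
# eigenvalue emerges, well above the random-matrix background.
common = rng.normal(size=n_days)
coupled_returns = 0.8 * common + 0.2 * rng.normal(size=(n_assets, n_days))
C = np.corrcoef(coupled_returns)
det_coup = np.linalg.det(C)
lam_max = np.linalg.eigvalsh(C)[-1]  # eigvalsh returns ascending order

print(f"det (random):  {det_rand:.3f}")   # near 1: no correlations
print(f"det (coupled): {det_coup:.2e}")   # near 0: strong correlations
print(f"largest eigenvalue (coupled): {lam_max:.2f}")  # dominant market mode
```

The same logic applies regardless of which base currency the rates are expressed in; only the strength of the common factor, and hence the size of the leading eigenvalue, changes.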

"It turned out that the value of the main eigenvalue, which indicates the degree of correlation, depends significantly on the choice of the base currency," says co-author Marcin Watorek. In general, the more important the base currency in terms of market value, the smaller this eigenvalue (which, however, must remain above the values generated by a random background). This is a very important result for assessing the role of a specific cryptocurrency on the global market of these financial instruments.

The researchers compared the characteristics of cryptocurrency exchange-rate fluctuations expressed relative to each other, relative to the USD, and relative to a so-called fictitious currency, generated artificially from a random distribution and introduced as a reference for research purposes. The results obtained for the cryptocurrency market are similar to the corresponding measurements from the stock market and traditional currency markets such as Forex. This similarity concerns both the dynamics of exchange-rate variability and the fact that the dominant feature of these dynamics is behaviour indistinguishable from chaos. In the initial phase of the analyzed period, the USD and Bitcoin played the role of the main base currencies for cryptocurrency trading. Closer to the present, however, the virtual currency market has gradually become independent of the USD; currently, only Bitcoin serves as the natural base currency for other cryptocurrencies.

Furthermore, starting from mid-2017, the correlation patterns of cryptocurrency exchange rates expressed in USD have increasingly come to resemble those generated when the rates are expressed in the fictitious currency, whose variability is entirely random and therefore independent of the cryptocurrency market. At the same time, the historically first and strongest cryptocurrency, Bitcoin, together with Ethereum, turns out to play the role among virtual currencies that the USD and EUR play on the Forex market. These results indicate that the cryptocurrency market is becoming a fully mature, integral and self-contained market, independent of Forex and therefore an alternative to it. It seems that we are witnessing the finalization of the transformation from a paper to a digital civilization.

"Our research on internal correlations in the cryptocurrency market indicates that this market has reached a level of maturity that allows it to be treated as equivalent to regular financial markets, in particular the global currency exchange market, Forex," explains Stanislaw Drozdz. "We are witnessing the emergence of an integrated and independent market in which cryptocurrencies are exchanged with each other without the need to use, for example, the USD, as was the case in the initial phase of cryptocurrency trading."

The work of the Krakow team has shown that, despite existing economic connections, cryptocurrencies are ceasing to be significantly correlated with traditional financial instruments such as currencies, stock markets or commodities. Apart from serving as a new medium of exchange in trade, an alternative to traditional currencies, they create new possibilities for diversifying investment portfolios. The described analyses certainly have the potential to reduce investors' distrust of this visionary and futuristic financial instrument.

The Henryk Niewodniczanski Institute of Nuclear Physics (IFJ PAN) is currently the largest research institute of the Polish Academy of Sciences. The broad range of studies and activities of IFJ PAN includes basic and applied research, ranging from particle physics and astrophysics, through hadron physics, high-, medium-, and low-energy nuclear physics, condensed matter physics (including materials engineering), to various applications of methods of nuclear physics in interdisciplinary research, covering medical physics, dosimetry, radiation and environmental biology, environmental protection, and other related disciplines. On average, IFJ PAN publishes over 600 scientific papers yearly in journals indexed in the Clarivate Analytics Journal Citation Reports. Part of the Institute is the Cyclotron Centre Bronowice (CCB), an infrastructure unique in Central Europe that serves as a clinical and research centre in the area of medical and nuclear physics. IFJ PAN is a member of the Marian Smoluchowski Kraków Research Consortium "Matter-Energy-Future", which held the status of a Leading National Research Centre (KNOW) in physics for the years 2012-2017. In 2017 the European Commission granted the Institute the HR Excellence in Research award. The Institute holds Category A+ (the leading level in Poland) in the field of sciences and engineering.

Credit: 
The Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of Sciences

Capturing 3D microstructures in real time

video: Argonne 3D machine learning algorithm shows nucleation of ice leading to the formation of nanocrystalline structure followed by subsequent grain growth.

Image: 
Argonne National Laboratory

Modern scientific research on materials relies heavily on exploring their behavior at the atomic and molecular scales. For that reason, scientists are constantly on the hunt for new and improved methods for data gathering and analysis of materials at those scales.

Researchers at the Center for Nanoscale Materials (CNM), a U.S. Department of Energy (DOE) Office of Science User Facility located at the DOE’s Argonne National Laboratory, have invented a machine-learning based algorithm for quantitatively characterizing, in three dimensions, materials with features as small as nanometers. Researchers can apply this pivotal discovery to the analysis of most structural materials of interest to industry.

“What makes our algorithm unique is that if you start with a material for which you know essentially nothing about the microstructure, it will, within seconds, tell the user the exact microstructure in all three dimensions,” said Subramanian Sankaranarayanan, group leader of the CNM theory and modeling group and an associate professor in the Department of Mechanical and Industrial Engineering at the University of Illinois at Chicago.

“For example, with data analyzed by our 3D tool,” said Henry Chan, CNM postdoctoral researcher and lead author of the study, “users can detect faults and cracks and potentially predict the lifetimes under different stresses and strains for all kinds of structural materials.”

Most structural materials are polycrystalline, meaning a sample used for purposes of analysis can contain millions of grains. The size and distribution of those grains and the voids within a sample are critical microstructural features that affect important physical, mechanical, optical, chemical and thermal properties. Such knowledge is important, for example, to the discovery of new materials with desired properties, such as stronger and harder machine components that last longer.

In the past, scientists have visualized 3D microstructural features within a material by taking snapshots at the microscale of many 2D slices, processing the individual slices, and then pasting them together to form a 3D picture. Such is the case, for example, with the computerized tomography scanning routine done in hospitals. That process, however, is inefficient and leads to the loss of information. Researchers have thus been searching for better methods for 3D analyses.

“At first,” said Mathew Cherukara, an assistant scientist at CNM, “we thought of designing an intercept-based algorithm to search for all the boundaries among the numerous grains in the sample until mapping the entire microstructure in all three dimensions, but as you can imagine, with millions of grains, that is extraordinarily time-consuming and inefficient.”

“The beauty of our machine learning algorithm is that it uses an unsupervised algorithm to handle the boundary problem and produce highly accurate results with high efficiency,” said Chan. “Coupled with down-sampling techniques, it only takes seconds to process large 3D samples and obtain precise microstructural information that is robust and resilient to noise.”
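The quantitative output such a tool delivers, grain counts and grain-size distributions from a voxelized sample, can be illustrated with a minimal sketch. This is not the Argonne algorithm (which is unsupervised and machine-learning based); it is a simple 6-connected component labeling of a 3D array, the most basic form of grain identification:

```python
import numpy as np
from collections import deque

def label_grains(solid):
    """Label 6-connected components ("grains") in a 3D boolean array.

    Returns (labels, sizes): labels[i, j, k] > 0 inside a grain, and a
    list of voxel counts, one entry per grain found.
    """
    labels = np.zeros(solid.shape, dtype=int)
    sizes = []
    nbrs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for seed in zip(*np.nonzero(solid)):
        if labels[seed]:          # already assigned to a grain
            continue
        gid = len(sizes) + 1
        labels[seed] = gid
        count, queue = 1, deque([seed])
        while queue:              # breadth-first flood fill of one grain
            x, y, z = queue.popleft()
            for dx, dy, dz in nbrs:
                p = (x + dx, y + dy, z + dz)
                if all(0 <= p[i] < solid.shape[i] for i in range(3)) \
                        and solid[p] and not labels[p]:
                    labels[p] = gid
                    count += 1
                    queue.append(p)
        sizes.append(count)
    return labels, sizes

# Two separate 2x2x2 "grains" in an 8x8x8 sample:
sample = np.zeros((8, 8, 8), dtype=bool)
sample[0:2, 0:2, 0:2] = True
sample[5:7, 5:7, 5:7] = True
labels, sizes = label_grains(sample)
print(len(sizes), sizes)  # 2 [8, 8]
```

Real microstructure analysis additionally has to decide where one grain ends and the next begins from noisy atomic data, which is the boundary problem the unsupervised algorithm described above is designed to solve.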

The team successfully tested the algorithm by comparison with data obtained from analyses of several different metals (aluminum, iron, silicon and titanium) and soft materials (polymers and micelles). These data came from earlier published experiments as well as computer simulations run at two DOE Office of Science User Facilities, the Argonne Leadership Computing Facility and the National Energy Research Scientific Computing Center. Also used in this research were the Laboratory Computing Resource Center at Argonne and the Carbon Cluster in CNM.

“For researchers using our tool, the main advantage is not just the impressive 3D image generated but, more importantly, the detailed characterization data,” said Sankaranarayanan. “They can even quantitatively and visually track the evolution of a microstructure as it changes in real time.”

The machine-learning algorithm is not restricted to solids. The team has extended it to include characterization of the distribution of molecular clusters in fluids with important energy, chemical and biological applications.

This machine-learning tool should prove especially impactful for future real-time analysis of data obtained from large materials characterization facilities, such as the Advanced Photon Source, another DOE Office of Science User Facility at Argonne, and other synchrotrons around the world.

Credit: 
DOE/Argonne National Laboratory

Chilling concussed cells shows promise for full recovery

MADISON, Wis. -- In the future, treating a concussion could be as simple as cooling the brain.

That's according to research conducted by University of Wisconsin-Madison engineers, whose findings support the treatment approach at the cellular level.

"There are currently no effective medical treatments for concussions and other types of traumatic brain injuries," says Christian Franck, the UW-Madison associate professor of mechanical engineering who led the study. "We're very excited about our findings because they could potentially pave the way for treatments we can offer patients."

The process is a bit more finicky than just applying an ice pack to the head.

Conducting experiments on brain cells in a dish, Franck and his team discovered several key parameters that determined the effectiveness of therapeutic cooling for mitigating damage to the injured cells.

"We found that, for this treatment to be successful, there's a sweet spot," he says. "You can't cool too little; you can't cool too much; and you can't wait too long following an injury to start treatment."

And when the researchers identified that sweet spot, the results were striking.

"I was amazed at how well the cooling worked," Franck says. "We actually went back and repeated the experiments multiple times because I didn't believe it at first."

The researchers published their findings in the journal PLOS ONE.

The high occurrence of concussions underscores the pressing need for treatments. Every year in the United States, there are an estimated 1.7 million new cases of traumatic brain injury assessed in emergency rooms, and the incidence of sports-related concussions may approach 3.8 million annually.

A traumatic impact to the brain can turn on biochemical pathways that lead to neurodegeneration, the progressive deterioration and loss of function in brain cells. Neurodegeneration causes long-lasting and potentially devastating health issues for patients.

"These pathways are like flipping on a bad molecular switch in your brain," says Franck.

In their experiments, the researchers looked at two of those biochemical pathways.

First, they created a network of neurons in a dish and delivered a mechanical stimulus that simulates the kind of injury and cell damage that people experience with a concussion.

Then they cooled the injured cells separately to four different temperatures. They found that 33 degrees Celsius (91.4 degrees Fahrenheit) provided the most protective benefit for the cells at 24 and 48 hours post-injury. Notably, cooling to 31 degrees Celsius had a detrimental effect.

"So there's such a thing as cooling too much," Franck says.

Time also is a factor. For the best outcome, the team determined that cooling needed to begin within four hours of the injury and continue for at least six hours, although Franck says cooling for even 30 minutes still showed some benefits.

When they adhered to those parameters, the researchers discovered they could keep the cells' damaging biochemical pathways switched off. In other words, the cells remained healthy and functioning normally--even though they had just suffered a traumatic injury.

After six hours of cooling, the researchers brought the concussed brain cells back up to normal body temperature, curious about whether warming would cause the damaging biochemical pathways to turn on.

"The biggest surprise was that the molecular switches actually stayed off -- permanently -- through the duration of the lab experiment," Franck says. "That was huge."

He and his students compared their results with previous animal studies and randomized human trials that investigated cooling as a treatment for traumatic brain injuries.

"We found really good agreement between the studies when we dialed in to those specific parameters, so that's a very encouraging sign," Franck says. "But this isn't the end of the story. We think this warrants further investigation in animal studies."

Franck says there's more to learn before cooling the brain could be a practical treatment for patients at a clinic. For example, it's not as easy as simply lowering the temperature of a person's whole body, which taxes the heart and can have a strong negative effect on the immune system.

Rather, isolating cooling to the brain is crucial. "We hope our paper will spawn renewed motivation and interest in solving the technical challenges for getting this type of treatment to patients in the future," Franck says. "For a long time, the scientific literature was inconclusive on whether this would be a successful treatment. What we showed in our study was that, yes, as far as the cell biology is concerned, this is effective. And so now it's really worth thinking about how we might implement this in practice."

Credit: 
University of Wisconsin-Madison

Study shows six decades of change in plankton communities

image: The Continuous Plankton Recorder device is towed in surface waters and occupies a similar space to a marine mammal.

Image: 
Marine Biological Association

The UK's plankton population - microscopic algae and animals which support the entire marine food web - has undergone sweeping changes in the past six decades, according to new research published in Global Change Biology.

Involving leading marine scientists from across the UK, led by the University of Plymouth, the research for the first time combines the findings of UK offshore surveys such as the Continuous Plankton Recorder (CPR) and UK inshore long-term time-series.

It then maps those observations against recorded changes in sea surface temperature, to demonstrate the effect of our changing climate on these highly sensitive marine communities.

The study's authors say their findings provide further evidence that increasing direct human pressures on the marine environment - coupled with climate-driven changes - are perturbing marine ecosystems globally.

They also say it is crucial to helping understand broader changes across UK waters, since any shifts in plankton communities have the potential for negative consequences for the marine ecosystem and the services it provides.

Since plankton are the very base of the marine food web, changes in the plankton are likely to result in changes to commercial fish stocks, sea birds, and even the ocean's ability to provide the oxygen we breathe.

The analyses of plankton functional groups showed profound long-term changes, which were coherent across large geographical areas right around the UK coastline.

For example, in CPR samples from the North Sea, the 1998-2017 average abundance of meroplankton, a group of animal plankton including lobsters and crabs whose adults live on the seafloor, was 2.3 times that for 1958-1967, against a backdrop of increasing sea surface temperatures.

This contrasted with a general decrease in plankton that spend their whole lives in the water column, with some offshore species showing population decreases of around 75%.

The study was led by former postdoctoral researcher Dr Jacob Bedford and Dr Abigail McQuatters-Gollop, from the University of Plymouth's Marine Conservation Research Group. It also involved scientists from The Marine Biological Association, Plymouth Marine Laboratory, The Environment Agency, Marine Scotland Science, Centre for Environment Fisheries and Aquaculture Science (Cefas), Agri-Food & Biosciences Institute of Northern Ireland, and the Scottish Association for Marine Science.

Dr McQuatters-Gollop, the lead scientist for pelagic habitats policy for the UK, said: "Plankton are the base of the entire marine food web. But our work is showing that climate change has caused plankton around UK waters to experience a significant reorganisation. These changes in the plankton suggest alterations to the entire marine ecosystem and have consequences for marine biodiversity, climate change (carbon cycling) and food webs including commercial fisheries."

Dr Clare Ostle, of the Marine Biological Association's Continuous Plankton Recorder (CPR) Survey, said: "Changes in plankton communities not only affect many levels of marine ecosystems but also the people that depend on them, notably through the effects on commercial fish stocks. This research is a great example of how different datasets - including CPR data - can be brought together to investigate long-term changes in important plankton groups with increasing temperature. These kind of collaborative studies are important for guiding policy and assessments of our changing environment."

Report co-author Professor Paul Tett, from the Scottish Association for Marine Science (SAMS) in Oban, added: "In this paper, we have tried to turn decades of speculation into evidence. It has long been thought that warming seas impact on plankton, the most important organisms in the marine food web. By bringing together such a large, long-term dataset from around the UK for the first time, we have discovered that the picture is a complex one. We therefore need to build on the success of this collaboration by further supporting the Continuous Plankton Recorder and the inshore plankton observatories."

Credit: 
University of Plymouth

Almond orchard recycling a climate-smart strategy

image: Wood chips from recycled almond trees are spread across an orchard in California's Stanislaus County.

Image: 
Brent Holtz, UCANR

Recycling trees onsite can sequester carbon, save water and increase crop yields, making it a climate-smart practice for California's irrigated almond orchards, finds a study from the University of California, Davis.

Whole orchard recycling is when old orchard trees are ground, chipped and turned back into the soil before new almond trees are planted.

The study, published in the journal PLOS ONE, suggests that whole orchard recycling can help almond orchards be more sustainable and resilient to drought while also increasing carbon storage in the soil.

"To me what was really impressive was the water piece," said corresponding author Amélie Gaudin, an associate professor of agroecology in the UC Davis Department of Plant Sciences. "Water is central to how we think about agriculture in California. This is a clear example of capitalizing on soil health. Here we see some real benefits for water conservation and for growers."

BURN VS. TURN

Drought and high almond prices have encouraged higher rates of orchard turnover in recent years. The previous practice of burning trees that are no longer productive is now restricted under air quality regulations, so whole orchard recycling presents an alternative. But how sustainable and effective is it for the environment and for farmers?

For the study, scientists measured soil health and tree productivity of an almond orchard that turned previous Prunus woody biomass back into the soil through whole orchard recycling and compared it with an orchard that burned its old trees nine years prior.

They also experimentally reduced an orchard's irrigation by 20 percent to quantify its water resilience.

Their results showed that, compared with burn treatments, whole orchard recycling can:

Sequester 5 tons of carbon per hectare

Increase water-use efficiency by 20 percent

Increase crop yields by 19 percent

"This seems to be a practice that can mitigate climate change by building the soil's potential to be a carbon sink, while also building nutrients and water retention," said Gaudin. "That can be especially important as water becomes more limited."

Credit: 
University of California - Davis

Significant global investment could save 11 million children

A significant, sustained global investment in treating children with cancer could save 11 million lives and yield a triple return on investment, according to a Lancet Oncology Commission report published in The Lancet Oncology.

Global childhood cancer rates are on the rise as more children worldwide survive infancy. Today, more than 80% of children with cancer live in low- and middle-income countries, where they lack access to adequate diagnosis and treatment. Most of those children will die from their disease. This is a sharp contrast to developed nations where survival rates for pediatric cancers exceed 80%.

The Commission report, titled Sustainable Care for Children with Cancer, analyzed the potential return of a cumulative, global investment of $594 billion in three simultaneous interventions. These interventions include access to primary care and specialist care, treatment such as chemotherapy and surgery, and supportive services to reduce treatment abandonment. The result could be 11 million lives saved and a 3-to-1 lifetime productivity gain of almost $2 trillion to the global economy.

"Our findings indicate that $20 billion (US) of funding per year over a 30-year period could bring a return of $3 for every dollar spent. This report should reassure policymakers that a sizeable return on investment is realistic and feasible," said St. Jude Global Director Carlos Rodriguez-Galindo, MD, who co-chaired the commission. "Without this investment to halt millions of needless deaths from childhood cancer, we are unlikely to reach the commission's sustainable development goals."
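The figures quoted above are internally consistent; a quick back-of-envelope check (rounding as in the report):

```python
annual = 20e9  # $20 billion (US) of funding per year
years = 30     # over a 30-year period

investment = annual * years           # ~$600B, vs. the $594B cumulative figure
roi = 3                               # $3 returned for every dollar spent
productivity_gain = investment * roi  # ~$1.8T, i.e. "almost $2 trillion"

print(f"${investment / 1e9:.0f}B invested -> ${productivity_gain / 1e12:.1f}T returned")
```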

Credit: 
St. Jude Children's Research Hospital

Evolutionary adaptation helped cave bears hibernate, but may have caused extinction

image: The well-developed sinus system in the now extinct cave bear (top left) is associated with uneven mechanical stress distributions in biting simulations (bottom left) conducted at UB. The much less developed sinus system in living bears, for example the sun or honey bear, (top right) allows mechanical stress to distribute evenly over the forehead region, as seen in the biting simulation (bottom right).

Image: 
Alejandro Pérez-Ramos

BUFFALO, N.Y. -- A study published in Science Advances on April 1 reveals a new hypothesis that may explain why European cave bears went extinct during past climate change periods. The research was motivated by controversy in the scientific literature as to what the animal (Ursus spelaeus) ate and how that affected their demise.

The new hypothesis emerged, in part, from computational analysis and computer biting simulations conducted in the laboratory of Jack Tseng, PhD, assistant professor of pathology and anatomical sciences in the Jacobs School of Medicine and Biomedical Sciences at the University at Buffalo.

Tseng is a co-author on the paper with corresponding authors Borja Figueirido, PhD, and Alejandro Pérez-Ramos, PhD, his doctoral student and first author, both of the Departamento de Ecologia y Geologia of the Universidad de Malaga, Spain.

Dietary dilemma

Cave bears (Ursus spelaeus) lived in Europe and Asia and went extinct about 24,000 years ago. According to Figueirido, researchers have proposed different diets for cave bears, ranging from pure herbivory to carnivory or even scavenging.

"Knowing the feeding behaviour of the cave bear is not a trivial aspect," he said. "Feeding behaviour is intimately related to its decline and extinction."

He noted that two main hypotheses, not necessarily exclusive, have been proposed to explain cave bear extinction: a human-driven decline, either by competition for resources or by direct hunting; or a substantial demise in population sizes as a result of the climatic cooling that occurred during the late Pleistocene which caused vegetation to wane.

Previous research shows that cave bears were primarily herbivorous at least from 100,000 to 20,000 years ago. But even during the cooling periods, when vegetation productivity waned, these bears didn't change their diets. The researchers propose that this dietary inflexibility, combined with competition for cave shelters by humans, is what led to their extinction.

To find out if there were biomechanical explanations behind their inflexible diets, meaning that the bears weren't physically capable of adjusting their diets effectively during times of limited vegetation resources, the researchers analyzed three-dimensional computer simulations of different feeding scenarios.

Critical sinuses

They were especially interested in the sinuses of the bears because large paranasal sinuses allow for greater metabolic control, critical to survival during hibernation.

"Our study proposes that climate cooling probably forced the selection of highly developed sinuses, which in turn led to the appearance of the characteristic domed skull of the cave bear lineage," said Alejandro Pérez-Ramos.

Tseng explained that when the sinus system expands, the act of chewing may cause more or less strain on the skull. In both humans and bears, the sinus system lightens the weight of the face, reducing the amount of bone tissue needed to grow the skull.

"Mechanically speaking, being 'thickheaded' may not be a bad thing because more bone means more structural strength," he said. "However, our findings support the interpretation that requirements for sinus system function in cave bears necessitated a trade-off between sinus development and skull strength."

Tseng and Pérez-Ramos, who spent three months at UB to learn the procedure, used a biomechanical simulation methodology to estimate biting stresses and strains in different bear species, using digital models of their skulls. The skull specimens came from CT scans performed at several European institutions and from the scientific CT repository, also known as the digital morphology library, at the University of Texas at Austin.

They found that the development of paranasal sinuses in cave bears caused the cranial dome to expand upward and backward from the forehead, changing the geometry of the bear's skull.

"This geometrical change generated a mechanically suboptimal cranial shape, with a very low efficiency to dissipate the stress along the skull, particularly when biting with the canines or carnassials, the teeth most often used by predatory mammals," said Pérez-Ramos.

When the sinus system expands, Tseng explained, it results in bone reduction relative to the size of the skull and therefore less structural support to resist the physical forces that chewing generates. Although other mammals with expanded sinuses, such as hyenas, appear to have evolutionarily modified their skull shape to effectively deal with decreased structural support, cave bear skulls showed compromised biomechanical capability compared to living bear species.

"Through the use of new techniques and virtual methods, such as biomechanical simulations across each tooth and the comparative internal anatomical study of the paranasal sinuses, we propose that large sinuses were probably selected in cave bears in order to be able to hibernate for longer periods with very low metabolic costs," said Pérez-Ramos.

Ultimately, though, that trade-off may have resulted in the extinction of the species, a finding that also has relevance to humans, Tseng said.

"Being able to stay alive during the coldest periods would have been equally important to human and bear alike," he said. "The success or demise of prehistoric megafauna, such as cave bears, provides crucial clues as to how humans may have out-competed and out-survived other large mammals during a critical time for the evolution of our own species."

Credit: 
University at Buffalo

New sensors could offer early detection of lung tumors

CAMBRIDGE, MA -- People who are at high risk of developing lung cancer, such as heavy smokers, are routinely screened with computed tomography (CT), which can detect tumors in the lungs. However, this test has an extremely high rate of false positives, as it also picks up benign nodules in the lungs.

Researchers at MIT have now developed a new approach to early diagnosis of lung cancer: a urine test that can detect the presence of proteins linked to the disease. This kind of noninvasive test could reduce the number of false positives and help detect more tumors in the early stages of the disease.

Early detection is very important for lung cancer, as the five-year survival rates are at least six times higher in patients whose tumors are detected before they spread to distant locations in the body.

"If you look at the field of cancer diagnostics and therapeutics, there's a renewed recognition of the importance of early cancer detection and prevention. We really need new technologies that are going to give us the capability to see cancer when we can intercept it and intervene early," says Sangeeta Bhatia, who is the John and Dorothy Wilson Professor of Health Sciences and Technology and Electrical Engineering and Computer Science, and a member of MIT's Koch Institute for Integrative Cancer Research and the Institute for Medical Engineering and Science.

Bhatia and her colleagues found that the new test, which is based on nanoparticles that can be injected or inhaled, could detect tumors as small as 2.8 cubic millimeters in mice.

Bhatia is the senior author of the study, which appears today in Science Translational Medicine. The paper's lead authors are MIT and Harvard University graduate students Jesse Kirkpatrick and Ava Soleimany, and former MIT graduate student Andrew Warren, who is now an associate at Third Rock Ventures.

Targeting lung tumors

For several years, Bhatia's lab has been developing nanoparticles that can detect cancer by interacting with enzymes called proteases. These enzymes help tumor cells to escape their original locations by cutting through proteins of the extracellular matrix.

To detect those proteases, Bhatia created nanoparticles coated with peptides (short protein fragments) that are targeted by cancer-linked proteases. The particles accumulate at tumor sites, where the peptides are cleaved, releasing biomarkers that can then be detected in a urine sample.

Her lab has previously developed sensors for colon and ovarian cancer, and in their new study, the researchers wanted to apply the technology to lung cancer, which kills about 150,000 people in the United States every year. People who receive a CT screen and get a positive result often undergo a biopsy or other invasive test to search for lung cancer. In some cases, this procedure can cause complications, so a noninvasive follow-up test could be useful to determine which patients actually need a biopsy, Bhatia says.

"The CT scan is a good tool that can see a lot of things," she says. "The problem with it is that 95 percent of what it finds is not cancer, and right now you have to biopsy too many patients who test positive."

To customize their sensors for lung cancer, the researchers analyzed a database of cancer-related genes called the Cancer Genome Atlas and identified proteases that are abundant in lung cancer. They created a panel of 14 peptide-coated nanoparticles that could interact with these enzymes.
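The selection step described above amounts to ranking protease genes by how much more abundant they are in tumor tissue than in normal lung. Here is a minimal sketch of that idea; the gene names and expression values are toy placeholders for illustration, not data from the Cancer Genome Atlas, and this is not the authors' actual analysis pipeline.

```python
# Hypothetical sketch: rank protease genes by tumor/normal fold change
# and keep the most tumor-enriched candidates for a sensor panel.
# Toy numbers only -- not real Cancer Genome Atlas data.
import pandas as pd

expression = pd.DataFrame({
    "gene":   ["MMP9", "MMP13", "PRSS3", "CTSB", "KLK5"],
    "tumor":  [48.0, 35.0, 20.0, 15.0, 3.0],   # mean expression in tumors
    "normal": [6.0, 5.0, 4.0, 10.0, 4.0],      # mean expression in normal lung
})
expression["fold_change"] = expression["tumor"] / expression["normal"]

# Keep proteases at least 2x enriched in tumors, most enriched first.
panel = (expression.sort_values("fold_change", ascending=False)
                   .query("fold_change > 2")["gene"]
                   .tolist())
print(panel)
```

Each protease that survives this filter would then be matched with a peptide substrate it is known to cleave, yielding one peptide-coated nanoparticle per protease in the panel.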

The researchers then tested the sensors in two different mouse models of cancer, both of which are engineered with genetic mutations that lead them to naturally develop lung tumors. To help prevent background noise that could come from other organs or the bloodstream, the researchers injected the particles directly into the airway.

Using these sensors, the researchers performed their diagnostic test at three time points: 5 weeks, 7.5 weeks, and 10.5 weeks after tumor growth began. To make the diagnoses more accurate, they used machine learning to train an algorithm to distinguish between data from mice that had tumors and mice that did not.

With this approach, the researchers found that they could accurately detect tumors in one of the mouse models as early as 7.5 weeks, when the tumors were only 2.8 cubic millimeters, on average. In the other strain of mice, tumors could be detected at 5 weeks. The sensors' success rate was also comparable to or better than the success rate of CT scans performed at the same time points.

Reducing false positives

The researchers also found that the sensors have another important ability -- they can distinguish between early-stage cancer and noncancerous inflammation of the lungs. Lung inflammation, common in people who smoke, is one of the reasons that CT scans produce so many false positives.

Bhatia envisions that the nanoparticle sensors could be used as a noninvasive diagnostic for people who get a positive result on a screening test, potentially eliminating the need for a biopsy. For use in humans, her team is working on a form of the particles that could be inhaled as a dry powder or through a nebulizer. Another possible application is using these sensors to monitor how well lung tumors respond to treatment, such as drugs or immunotherapies.

"A great next step would be to take this into patients who have known cancer, and are being treated, to see if they're on the right medicine," Bhatia says.

She is also working on a version of the sensor that could be used to distinguish between viral and bacterial forms of pneumonia, which could help doctors to determine which patients need antibiotics and may even provide complementary information to nucleic acid tests like those being developed for Covid-19. Glympse Bio, a company co-founded by Bhatia, is also working on developing this approach to replace biopsy in the assessment of liver disease.

Credit: 
Massachusetts Institute of Technology