
Researchers develop low-cost, easy-to-use emergency ventilator for COVID-19 patients

video: The UCSD MADVent Mark V, a low-cost, easy-to-use emergency ventilator built around a ventilator bag usually found in ambulances.

Image: 
University of California San Diego

A team of engineers and physicians at the University of California San Diego has developed a low-cost, easy-to-use emergency ventilator for COVID-19 patients that is built around a ventilator bag usually found in ambulances.

The team built an automated system around the bag and brought down the cost of an emergency ventilator to just $500 per unit. By comparison, state-of-the-art ventilators currently cost at least $50,000. The device's components can be rapidly fabricated and the ventilator can be assembled in just 15 minutes. The device's electronics and sensors rely on a robust supply chain from fields not related to healthcare, which are unlikely to be affected by shortages.

The UCSD MADVent Mark V is also the only device offering pressure-controlled ventilation equipped with alarms that can be adjusted to signal when pressure is too low or too high. This is especially important because excessive pressure can cause lung injury in COVID-19 patients, who often experience rapid decreases in lung capacity as the disease progresses.

Most ventilators measure the volume of air that is being pumped into the patient's lungs, which requires expensive airflow sensors. By contrast, the UCSD MADVent Mark V measures pressure and uses that data to deduce and control the airflow to the lungs. This was key to lowering the device's price.

The team from UC San Diego and industry partners will be seeking approval for the device from the Food and Drug Administration. They detail their work in an upcoming issue of Medical Devices and Sensors.

The device's plans and specifications are available at http://MADVent.ucsd.edu/

"The MADVent can safely meet the diverse requirements of COVID-19 patients because it can adjust over the broad ranges of respiration parameters needed to treat acute respiratory distress syndrome," said James Friend, a professor at the UC San Diego Jacobs School of Engineering and one of the paper's two corresponding authors. "The combination of off-the-shelf components and readily machined parts with mechanically driven pressure control makes our design both low cost and rapidly manufacturable."

Researchers also wanted to make sure that the device could be used by healthcare workers with limited experience with ventilators and no experience with this type of system, said Dr. Casper Petersen, co-author of the study and a project scientist in the Department of Anesthesiology at the UC San Diego School of Medicine. As a result, the MADVent Mark V is safe to use, easy to assemble and easy to repair.

"This device could be a great option for use in situations where materials are scarce, such as when the normal supply chain breaks down, or in developing nations and hard-to-reach rural areas," Dr. Casper Petersen said.

The device is not meant as a substitute for the highly complex ventilators used in Intensive Care Units.

"Rather, our low-cost ventilator is meant to bridge an urgent gap in situations of a large surge in patients where we may not have enough life-sustaining equipment," said Dr. Lonnie Petersen, an assistant professor at the Jacobs School of Engineering, adjunct professor at UC San Diego Health and the paper's other corresponding author. "Safety is our main priority; while the MADVent is a low-tech and low-cost device, it actually offers robust and patient-tailored ventilation. This really increases the safety for the patients suffering from the complex pulmonary infection and respiratory distress associated with COVID-19."

The UCSD MADVent Mark V

The UC San Diego team built their device around a ventilator bag usually found in ambulances and designed to be manually squeezed to help patients breathe. In the UCSD MADVent Mark V, a machined paddle squeezes the bag instead. The paddle is controlled by a series of pressure sensors to make sure the patients get the appropriate flow of air into their lungs. The team deliberately integrated as many standard hospital items as possible into the design because those have already undergone rigorous testing for safety, longevity and compatibility.

To estimate airflow without measuring it directly, the researchers developed an algorithm that deduces how much the bag is compressed based on how many turns the device's motor has made, and calculates the volume of air sent into the patient's lungs as a result.
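As a rough illustration of that idea, the sketch below maps cumulative motor turns to delivered volume through a calibration table. This is a hypothetical reconstruction for illustration only: the function name, the calibration values and the interpolation scheme are assumptions, not the published MADVent algorithm.

```python
# Illustrative only: estimate delivered air volume from motor turns via a
# calibration table (turns -> millilitres). Not the published MADVent code.

def delivered_volume_ml(motor_turns, calibration):
    """Interpolate delivered volume (mL) from cumulative motor turns.

    calibration maps whole turns to millilitres delivered, e.g. measured
    on a bench against a reference spirometer (hypothetical values below).
    """
    # Clamp to the calibrated range of the paddle stroke.
    turns = max(min(calibration), min(motor_turns, max(calibration)))
    # Linear interpolation between the two nearest calibrated points.
    lower = max(t for t in calibration if t <= turns)
    upper = min(t for t in calibration if t >= turns)
    if upper == lower:
        return calibration[lower]
    frac = (turns - lower) / (upper - lower)
    return calibration[lower] + frac * (calibration[upper] - calibration[lower])

calibration = {0: 0.0, 1: 120.0, 2: 260.0, 3: 420.0}  # made-up bench data
volume = delivered_volume_ml(2.5, calibration)        # 340.0 mL with this table
```

A lookup table plus interpolation keeps the control loop cheap, which fits the article's point that volume can be inferred without airflow sensors.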

"The elasticity of the lungs changes very quickly, so it's important to be able to sense the feedback from the patient," said Dr. Lonnie Petersen.

Researchers tested their system more than 200 times and for days on end on a lung simulator, adhering to standards from the International Organization for Standardization (ISO) and FDA guidelines to ensure it functioned correctly. The device was also tested on a medical mannequin simulator.

One of the keys for cost savings was developing computer models of the volume of air delivered through the ambulance bag when it is compressed. This allowed researchers to do away with expensive airflow sensors and the complex algorithms that control them.

The materials on the ventilator can be sanitized with conventional disinfectants such as 1.5% hydrogen peroxide and 70% ethanol.

"The system, in its current state of development, can easily accommodate new modules that enable more sophisticated features, such as flow monitoring, which can enable additional ventilation modes and provide healthcare operators more information regarding a patient's breathing," said Aditya Vasan, a Ph.D. student in Friend's research group and the paper's first author.

Collaboration across disciplines

A close collaboration between clinicians and engineers enabled the team to put together a crude prototype in just three days. They then spent countless hours refining and testing the ventilator. A lot of work went into making sure it was safe and could be manufactured with simple parts at a large scale.

Engineers with the UC San Diego Qualcomm Institute Prototyping Lab provided engineering design and fabrication support. Electrical engineer Mark Stambaugh stepped in to work on the microcontroller and help adjust the stroke cycle and control the speed and volume of the compressions to help patients breathe. Mechanical engineer Alex Grant provided design support and guidance.

Credit: 
University of California - San Diego

Newly discovered planet zips around baby star in a week

image: Artist's rendering of AU Mic b.

Image: 
NASA's Goddard Space Flight Center

Understanding how planets form is one of the main challenges scientists face when placing our own and other planetary systems in context. Planets are thought to form from the disk-shaped clouds of gas and dust that surround newborn stars, but this process has never been observed. Astronomers normally only observe planets after they have already formed and have to deduce the pathways that led to their final states.

For more than a decade, astronomers have searched for planets orbiting AU Microscopii, a nearby star still surrounded by a disk of debris left over from its formation. Now scientists using data from NASA's Transiting Exoplanet Survey Satellite, or TESS, and now-retired Spitzer Space Telescope report the discovery of a planet about as large as Neptune that circles the young star in just over a week.

The new planet, AU Mic b, is located 31.9 light-years away in the southern constellation Microscopium and is described in a paper published in Nature. The system, known as AU Mic for short, provides a one-of-a-kind laboratory for studying how planets and their atmospheres form, evolve, and interact with their stars.

"AU Mic is a young, nearby M dwarf star. It's surrounded by a vast debris disk in which moving clumps of dust have been tracked, and now, thanks to TESS and Spitzer, it has a planet with a direct size measurement," said co-author Bryson Cale, a doctoral student at George Mason University in Fairfax, Virginia. "There is no other known system that checks all of these important boxes."

"Finding a 'missing link,' such as the planet orbiting AU Mic, essentially caught in the act of forming, is extremely rare," said co-author Stephen Kane, an associate professor in the Department of Earth and Planetary Sciences at the University of California, Riverside. "What makes this especially rare is that it also transits its star, so we can measure the radius as well as the mass, leading to an estimate of the bulk density of the planet and its likely composition.

"This discovery will form the foundation for many years of observational and theoretical studies into the very earliest stages for planet formation," added Kane, who helped develop the instrument that measured the planet mass and was part of the TESS team that discovered the transit of the planet.

AU Mic is a cool red dwarf star with an age estimated at 20 million to 30 million years, making it a stellar infant compared to our sun, which is at least 150 times older. The planet AU Mic b almost hugs its star, completing an orbit every eight-and-a-half days. It weighs less than 58 times Earth's mass, placing it in the category of Neptune-like worlds.

"We think AU Mic b formed far from the star and migrated inward to its current orbit, something that can happen as planets interact gravitationally with a gas disk or with other planets," said co-author Thomas Barclay, an associate research scientist at the University of Maryland, Baltimore County, and an associate project scientist for TESS at NASA's Goddard Space Flight Center in Greenbelt, Maryland.

When a planet crosses in front of its star from our perspective -- an event called a transit -- its passage causes a distinct dip in the star's brightness. TESS monitors large swaths of the sky, called sectors, for 27 days at a time. During this long stare, the mission's cameras regularly capture snapshots that allow scientists to track changes in stellar brightness.

Regular dips in a star's brightness signal the possibility of a transiting planet. Usually, it takes at least two observed transits to recognize a planet's presence.

"As luck would have it, the second of three TESS transits occurred when the spacecraft was near its closest point to Earth. At such times, TESS is not observing because it is busy downlinking all of the stored data," said co-author Diana Dragomir, a research assistant professor at the University of New Mexico in Albuquerque. "To fill the gap, our team was granted observing time on Spitzer, which caught two additional transits in 2019 and enabled us to confirm the orbital period of AU Mic b."

Because the amount of light blocked by a transit depends on the planet's size and orbital distance, the TESS and Spitzer transits provide a direct measure of AU Mic b's size. Analysis of these measurements shows the planet is about 8% larger than Neptune.
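The relation behind that size measurement is simple: the fractional dip in brightness is approximately the square of the planet-to-star radius ratio. The sketch below applies it with made-up numbers; the stellar radius and transit depth shown are illustrative assumptions, not the published AU Mic values.

```python
import math

# Transit depth ~ (R_planet / R_star)^2: the fraction of starlight blocked.
# The depth and stellar radius below are illustrative, not AU Mic's values.

R_SUN_KM = 696_340      # solar radius
R_NEPTUNE_KM = 24_622   # Neptune's volumetric mean radius

def planet_radius_km(transit_depth, star_radius_km):
    """Planet radius implied by a fractional dip in stellar brightness."""
    return star_radius_km * math.sqrt(transit_depth)

star_radius_km = 0.75 * R_SUN_KM   # a red dwarf somewhat smaller than the Sun
depth = 0.0026                     # an assumed 0.26% dip in brightness
rp = planet_radius_km(depth, star_radius_km)
print(rp / R_NEPTUNE_KM)           # planet size relative to Neptune
```

With these toy inputs the implied planet comes out a few percent larger than Neptune, which is the kind of inference the TESS and Spitzer transit depths allow.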

AU Mic b might not be the only planet orbiting its star.

"There is an additional candidate transit event seen in the TESS data, and TESS will hopefully revisit AU Mic later this year in its extended mission," said lead author Peter Plavchan, an assistant professor of physics and astronomy at George Mason. "We are continuing to monitor the star with precise radial velocity measurements, so stay tuned."

Credit: 
University of California - Riverside

Imaging magnetic instabilities using laser accelerated protons

image: Protons accelerated by laser-plasma interaction in a first target (left) pass through a second target, itself irradiated by another laser beam (middle and framed). The Weibel instability induced there by energetic electrons (blue trajectories) generates magnetic fluctuations that deflect the protons onto a series of sensitive films (right), producing an image of the resulting magnetic structures.

Image: 
David Tordeux

The magnetic structures resulting from a plasma instability predicted by the physicist Erich Weibel about 50 years ago have been observed at surprisingly large scales in a laser-driven plasma. This instability is also expected to operate in astrophysical settings, where it is held responsible for the acceleration of cosmic rays and the emission of gamma photons in the famous “gamma-ray bursts”.

Julien Fuchs, a graduate of the Institut national de la recherche scientifique (INRS) and a researcher at the Laboratoire pour l’utilisation des lasers intenses (LULI) in France, INRS Professor Patrizio Antici, a specialist in laser-driven particle acceleration, and INRS Professor Emeritus Henri Pépin have succeeded in measuring the magnetic fields produced by Weibel instabilities within a laser-driven plasma, an ionized gas. Their results were published on June 1 in the prestigious journal Nature Physics.

The researchers used the proton radiography technique to visualize this extremely fast phenomenon. “Our protons accelerated by laser-plasma interaction are able to take a sequence of images of very fast electromagnetic phenomena, lasting only a few picoseconds and with a resolution of a few microns. This allows us to probe instabilities with precision unmatched by other imaging techniques,” reports Patrizio Antici, who did his thesis under the supervision of Professor Fuchs, himself formerly under the direction of Professor Pépin.

These three generations of researchers recreated a “small-scale model” of astrophysical phenomena in the laboratory by irradiating a target with an intense laser. The magnetic fluctuations generated by the interaction can be probed by protons on a series of sensitive films, producing a sequence of images showing the temporal evolution of the magnetic structures.

The interpretation and modeling of these structures were conducted by Laurent Gremillet and Charles Ruyer, physicists at the Commissariat à l’énergie atomique et aux énergies alternatives (CEA). After several years of hard work, combining theoretical modelling and advanced numerical simulations, they highlighted the growth of two variants of the Weibel instability according to the region of the plasma where they develop.

With more powerful lasers, researchers will be able to reproduce and analyze even more extreme astrophysical phenomena with unrivalled resolution.

Credit: 
Institut national de la recherche scientifique - INRS

In the wild, chimpanzees are more motivated to cooperate than bonobos

image: Chimpanzees arriving later at the snake were better informed and therefore less surprised to see it in this place than bonobos arriving later.

Image: 
Cédric Girard-Buttoz, Taï Chimpanzee Project

We humans have unique cooperative systems allowing us to cooperate in large numbers. Furthermore, we provide help to others, even outside the family unit. How we developed these cooperative abilities and helping behaviour during our evolutionary past remains highly debated. According to one prominent theory, the interdependence hypothesis, the cognitive skills underlying uniquely human cooperative abilities evolved when several individuals needed to coordinate their actions to achieve a common goal, for example when hunting large prey or during conflict with other groups. This hypothesis also predicts that humans who rely more on each other to achieve such goals will be more likely to provide help and support to one another in other situations.

"While we cannot study the behaviour of our human ancestors", explains Roman Wittig, a senior author and head of the Taï Chimpanzee Project, "we can learn how relying on others may influence helping behaviour in our ancestors by studying our closest living relatives, chimpanzees and bonobos". Chimpanzees are more territorial than bonobos and in some populations engage more frequently in group hunts. According to the interdependence hypothesis, chimpanzees should thus have evolved a higher tendency to cooperate and help others in the group.

To test this hypothesis, researchers from the Max Planck Institute for Evolutionary Anthropology, Harvard University and Liverpool John Moores University presented 82 chimpanzees and bonobos from five different communities with a model of a Gaboon viper, a deadly snake. During the experiment the apes could cooperate with each other by producing alarm calls to inform conspecifics about the snake. This is the first experimental study ever conducted in wild bonobos. "This experimental study is a novel and promising approach to probe the bonobos' minds," says Gottfried Hohmann, a senior author on the study and head of the LuiKotale bonobo project. Martin Surbeck, co-author on the paper, adds: "This study should stimulate several more experimental studies on wild bonobo cooperation, cognition, and communication".

In this study, researchers show that both chimpanzees and bonobos can assess what others know, as they stopped calling when all individuals around had seen the snake. However, chimpanzees warned each other more efficiently: individuals arriving later at the snake were less surprised upon seeing it than late arriving bonobos. This suggests chimpanzees were better informed of the snake's presence than bonobos. Indeed, late arriving chimpanzees were more likely to hear a call before reaching the snake than bonobos in the same circumstance, suggesting that the motivation to help and warn others was higher in chimpanzees.

"Our findings support the theory that the extreme reliance on each other in humans, for instance during war and group hunting, may have promoted the evolution of some forms of help and support to others, even sometimes to complete strangers," says first author Cédric Girard-Buttoz. The authors confirm that chimpanzees may have some awareness of others' knowledge and demonstrate for the first time this ability in wild bonobos.

"How chimpanzees and bonobos apparently keep track of others' knowledge, and which specific cognitive skills allow them to do this, is not clear," adds Catherine Crockford, last author of the study. "We face a major challenge to understand which cognitive skills are unique to humans and which are shared with other apes".

Credit: 
Max Planck Institute for Evolutionary Anthropology

Jellyfish contain no calories, so why do they still attract predators?

They contain no carbohydrates. No fats. No proteins. Not much else but water. Still, the moon jelly (Aurelia aurita) is eaten by predators in the sea: fish, crustaceans, sea anemones and even corals and turtles.

Now a new study may explain why these predators bother to eat the gelatinous creatures. The study is based on moon jelly samples from a German fjord.

- The jellyfish in our study turned out to contain some fatty acids that are very valuable for their predators. Fatty acids are vital components of cell membranes and play a crucial role in processes like growth and reproduction, says marine biologist and jellyfish expert Jamileh Javidpour from the University of Southern Denmark.

Two years of fishing jellyfish

She is Principal Investigator and co-author of the study, published in the Journal of Plankton Research. Co-authors are Vanessa Stenvers from the University of Groningen and Chi Xupeng from the Chinese Academy of Sciences.

The researchers collected moon jellies from the Kiel Fjord in northern Germany every two weeks for two years. Their content of fatty acids varied with the seasons, and variations linked to developmental stages were also found: mature individuals with reproductive tissues had the highest content.

- Jellyfish are likely to be more than just opportunistic prey for many organisms. It is true that a predator does not get much from eating a single jellyfish, but if it eats many, it will make a difference and provide the predator with valuable fatty acids, she says.

In other words: low food quality can be offset by high food quantity. As an example, researchers have observed salmon eating a jellyfish 20 times faster than it took to eat a shrimp.

So, if the predator doesn't have to spend much energy eating loads of jellyfish, this preying strategy begins to make sense, she explains:

- Jellyfish often come in shoals, and they move slowly through the water. They can't really swim away when predators start eating them.

More jellyfish = more food

On a global scale, marine environments are changing, and an increasing abundance of jellyfish is thought to be replacing other prey items in the oceans.

- As we see an increase in jellyfish, I suspect that we will also come to see a change in predator populations - especially in areas where the abundance of usual prey items might be threatened by a changing environment, says Jamileh Javidpour.

Several essential fatty acids were found in the German moon jellies. Among them are the polyunsaturated fatty acids arachidonic acid, eicosapentaenoic acid and docosahexaenoic acid.

Credit: 
University of Southern Denmark

Dynamical and allosteric regulation of photoprotection in light harvesting complex II

image: Thermal or ΔpH driven dynamical and allosteric regulation mechanism for light-harvesting and photoprotection of LHCII trimer as a molecular switch.

Image: 
©Science China Press

The photosynthetic systems of green plants have developed a dual function of efficient light collection under low light intensity and photoprotection under intense sunshine to prevent the oxidative damage of reaction centers.

While it is widely accepted that this dual function is mainly realized through the major light-harvesting complex of photosystem II (LHCII) in response to the photo-induced pH change at the lumenal side, a long-standing puzzle remains: how does this photosynthetic protein switch between two opposite functions via fast structural change in adaptation to changing environmental conditions? Prof. Weng of the Laboratory of Soft Matter Physics, Institute of Physics, CAS, said "an answer to this question would be instructive to engineering plants with higher efficiency in photosynthesis, and a greater productivity."

"Our study probes protein dynamics and allosteric structure changes of LHCII by integrating time-resolved spectroscopy and atomistic molecular dynamics simulations," Prof. Weng said. "In collaboration with the Minnesota groups headed by Professors Jiali Gao and Gianluigi Veglia (Department of Chemistry and Department of Biochemistry, Molecular Biology and Biophysics, University of Minnesota, USA), molecular dynamics simulations carried out by Dr. Yingjie Wang, now an Associate Research Investigator at Shenzhen Bay Laboratory, revealed that the LHCII trimer itself can act as a molecular machine in response to environmental conditions, including increased temperature, acidity, or both cooperatively. The atomistic simulation results are fully consistent with, and helped to interpret, the experimental observations."

The net effect of the mechanical scissoring motions is to bring the chromophores (molecules that absorb and transfer photo-energy) closer together, such that the "bright" state of chlorophyll transmits its excitation energy into the "dark" state of lutein. The lowest excited state of lutein does not emit or absorb photons directly (thus a dark state) and eventually dissipates the excess energy as heat through thermal vibrations. This mechanism links environmental factors, such as rapid fluctuations in solar radiation due to cloud movement, to changes in physiological conditions across the membrane of the photosynthetic machinery, and ultimately to structural changes of the light-harvesting antenna proteins at the atomistic level. The research also showed that aggregation of LHCII promotes energy dissipation both at high temperature and under increased acidity, a process well known in the community.

"The allosteric motions can be illustrated in the figure below," Weng told us. "An increase in temperature or in acidity, or both, induces local helix formation, which triggers a global transmembrane protein-conformation change, leading to close proximity between the embedded chromophores. This process is reversed when temperature and acidity are lowered as clouds move in, restoring the high-efficiency mode of energy transfer for light harvesting."

Credit: 
Science China Press

'Infant' planet discovered by UH astronomers, Maunakea telescope

image: Illustration of AU Mic b orbiting its parent star, AU Mic.

Image: 
NASA's Goddard Space Flight Center/Chris Smith (USRA)

Astronomers study stars and planets much younger than the Sun to learn about past events that shaped the Solar System and Earth. Most of these stars are far enough away to make observations challenging, even with the largest telescopes. But now this is changing.

University of Hawai'i at Mānoa astronomers are part of an international team that recently discovered an infant planet around a nearby young star. The discovery was reported Wednesday in the international journal Nature.

The planet is about the size of Neptune, but, unlike Neptune, it is much closer to its star, taking only eight and a half days to complete one orbit. It is named "AU Mic b" after its host star, AU Microscopii, or "AU Mic" for short. The planet was discovered using the NASA TESS planet-finding satellite, as it periodically passed in front of AU Mic, blocking a small fraction of its light. The signal was confirmed by observations with another NASA satellite, the Spitzer Space Telescope, and with the NASA Infrared Telescope Facility (IRTF) on Maunakea. The observations on Hawai'i Island used a new instrument called iSHELL that can make very precise measurements of the motion of a star like AU Mic. These measurements revealed a slight wobble of the star, as it moves in response to the gravitational pull of the planet. It confirmed that AU Mic b was a planet and not a companion star, which would cause a much larger motion.

Discovery on Maunakea sets foundation

AU Mic and its planet are about 25 million years young, and in their infancy, astronomically speaking. AU Mic is also the second closest young star to Earth. It is so young that dust and debris left over from its formation still orbit around it. The debris collides and breaks into smaller dust particles, which orbit the star in a thin disk. This disk was detected in 2003 with the UH 88-inch telescope on Maunakea. The newly-discovered planet orbits within a cleared-out region inside the disk.

"This is an exciting discovery, especially as the planet is in one of the most well-known young star systems, and the second-closest to Earth. In addition to the debris disk, there is always the possibility of additional planets around this star. AU Mic could be the gift that keeps on giving," said Michael Bottom, an Assistant Astronomer at the UH Institute for Astronomy.

"Planets, like people, change as they mature. For planets this means that their orbits can move and the compositions of their atmospheres can change. Some planets form hot and cool down, and unlike people, they would become smaller over time. But we need observations to test these ideas and planets like AU Mic b are an exceptional opportunity," said Astronomer Eric Gaidos, a professor in the Department of Earth Sciences at UH Mānoa.

Clues to the origin of Earth-like planets

AU Mic is not only much younger than the Sun, it is considerably smaller, dimmer and redder. It is a "red dwarf," the most numerous type of star in the galaxy. The TESS satellite is also discovering Earth-sized and possibly habitable planets around older red dwarfs, and what astronomers learn from AU Mic and AU Mic b can be applied to understand the history of those planets.

"AU Mic b, and any kindred planets that are discovered in the future, will be intensely studied to understand how planets form and evolve. Fortuitously, this star and its planet are on our cosmic doorstep. We do not have to venture very far to see the show," Gaidos explained. He is a co-author on another five forthcoming scientific publications that have used other telescopes, including several on Maunakea, to learn more about AU Mic and its planet.

AU Mic appears low in the summer skies of Hawai'i but you'll need binoculars to see it. Despite its proximity, the fact that it is a dim red star means it is too faint to be seen with the unaided eye.

Credit: 
University of Hawaii at Manoa

Study confirms "classic" symptoms of COVID-19

A persistent cough and fever have been confirmed as the most prevalent symptoms associated with COVID-19, according to a major review of the scientific literature.

Other major symptoms include fatigue, loss of the sense of smell and difficulty breathing.

The study ratifies the list of symptoms listed by the World Health Organisation at the start of the pandemic. 

The researchers - from five universities including the University of Leeds in the UK - combined data from 148 separate studies to identify the common symptoms experienced by more than 24,000 patients from nine countries, including the UK, China and the US. 

The study - published in the online journal PLoS ONE - is one of the biggest reviews ever conducted into COVID-19 symptoms. The researchers also acknowledge there is likely to be a large proportion of people who had the virus but did not display symptoms.

Of the 24,410 cases, the study found:

78 percent had a fever, although this tended to vary across countries: 72 percent of patients in Singapore reported a fever, compared with 32 percent in Korea.

57 percent reported a cough. Again, this varied across countries, with 76 percent of patients reporting a cough in the Netherlands compared to 18 percent in Korea. 

31 percent said they had suffered fatigue. 

25 percent lost the ability to smell. 

23 percent reported difficulty breathing.

The researchers believe the variation in the prevalence of symptoms between countries is due, in part, to the way data was collected.

Of those patients who needed hospital treatment, 17 percent needed non-invasive help with their breathing; 19 percent had to be looked after in an intensive care unit, nine percent required invasive ventilation and two percent needed extra-corporeal membrane oxygenation, an artificial lung. 

Ryckie Wade, a surgeon and Clinical Research Fellow at the Leeds Institute of Medical Research, supervised the research. He said: "This analysis confirms that a cough and fever were the most common symptoms in people who tested positive for COVID-19.

"This is important because it ensures that people who are symptomatic can be quarantined, so they are not infecting others.

"The study gives confidence to the fact that we have been right in identifying the main symptoms and it can help determine who should get tested."

Credit: 
University of Leeds

Economic alien plants more likely to go wild

image: Oxalis pes-caprae, or Bermuda buttercup, is native to South Africa and has been introduced elsewhere as a bee plant (for honey production) and for ornamental purposes. It is now widely naturalized, as here on Crete (Greece).

Image: 
Mark van Kleunen

Humans have cultivated plants outside their native ranges for thousands of years. But as the world became increasingly interconnected over the past five hundred years, the scale of cultivation of non-native plants for economic value - for example as food, ornamentation or for medicinal purposes - has intensified. For the first time, a team of researchers led by University of Konstanz ecologist Mark van Kleunen has carried out scientific analyses to assess how economic use of non-native plants relates to their naturalization success (i.e. their establishment in the wild) around the world.

Cultivation a major driver of the introduction of alien plants

The international team of biologists from the University of Konstanz, Taizhou University and Fudan University (both in China), the University of Vienna, the Czech Academy of Sciences, Durham University and Georg August University of Göttingen analysed a global dataset on 11,685 economic plant species (World Economic Plants database) in combination with a global dataset on 12,013 naturalized plant species (Global Naturalized Alien Flora database).

The results, which were published in Nature Communications this week, suggest that cultivation for economic use is the major pathway for the introduction of naturalized alien plants in regions across the globe.

Economic plants are more likely to naturalize

"As an ecologist, I'm mainly interested in what determines the success of a plant species, particularly alien plant species", says Mark van Kleunen, lead author on the study. "Many contemporary studies look into their spread, trying to understand why these aliens are able to establish themselves in areas well beyond their native ranges. What these studies tend not to take into account is how and why they were introduced in the first place".

The results of the study confirm that there is a direct link between cultivation for economic purposes and naturalization: Plants with an economic use were 18 times more likely to naturalize than species without any known economic use, and plants with multiple economic uses were the most likely to naturalize. More than 50 percent of the plant species used as ornamental garden plants or for the production of animal food, which are among the most widely cultivated plants, have become naturalized somewhere in the world.
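The "18 times more likely" figure is a relative risk: the naturalization rate among economic plants divided by the rate among plants with no known economic use. A minimal sketch of the calculation, using made-up counts chosen purely for illustration (the study's actual species counts are in the Nature Communications paper):

```python
def relative_risk(events_a, total_a, events_b, total_b):
    """Ratio of event rates between two groups (risk ratio)."""
    return (events_a / total_a) / (events_b / total_b)

# Hypothetical counts for illustration only -- not the study's data.
rr = relative_risk(events_a=3600, total_a=10000,   # economic species that naturalized
                   events_b=200,  total_b=10000)   # non-economic species that naturalized
print(round(rr, 1))  # 18.0
```

With these illustrative counts, economic species naturalize at 36 percent versus 2 percent, giving the same 18-fold ratio reported in the study.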

Plants from Northern Hemisphere among the most successful

Previous studies have shown that Northern Hemisphere continents, especially Europe, are the most prolific when it comes to donating naturalized species. "Our research suggests that this is because more plants from the Northern Hemisphere have been cultivated for economic use elsewhere, and not because they are in some way superior or have an innate ability to naturalize outside their native environments", says Dr Trevor Fristoe, another University of Konstanz author on the study. Economic plants of Asian origin, however, were shown to have the greatest naturalization success.

Cultivation bias drives phylogenetic patterns in naturalization

The study further shows that phylogenetic patterns in the naturalized flora are partly due to which plants we cultivate. Naturalized species have been shown to be far more frequent in some families of the world's global seed plant flora than in others. While these patterns have been attributed to shared traits among closely related species that promote naturalization success, the new insights generated by van Kleunen et al. raise the possibility that these patterns are caused by a phylogenetic bias in the species selected and cultivated for their economic value.

Facts:

- Pioneering global study led by University of Konstanz ecologist Mark van Kleunen on the naturalization of plants shows that the economic use of plants plays a crucial role in driving global plant naturalization patterns.

- Analyses of a global dataset on 11,685 economic plant species in combination with a global dataset on 12,013 naturalized plant species show that plants with an economic use were 18 times more likely to naturalize than species without any known economic use.

- Original publication: Mark van Kleunen, Xinyi Xu, Qiang Yang, Noëlie Maurel, Zhijie Zhang, Wayne Dawson, Franz Essl, Holger Kreft, Jan Pergl, Petr Pyšek, Patrick Weigelt, Dietmar Moser, Bernd Lenzner and Trevor S. Fristoe, Economic use of plants is key to their naturalization success, Nature Communications, 24 June 2020. URL: https://doi.org/10.1038/s41467-020-16982-3

- Plants with multiple economic uses were the most likely to naturalize, while economic plants of Asian origin showed the greatest naturalization success.

- The phylogenetic distribution of naturalized plants is caused in part by a phylogenetic bias among plants selected for economic use.

Credit: 
University of Konstanz

Treating leukaemia more effectively

In the current issue of Communications Biology, Professor Jindrich Cinatl from the Institute for Medical Virology at Goethe University and Professor Martin Michaelis from the School of Biosciences at the University of Kent report on their investigations with nelarabine in different cell lines of acute lymphoblastic leukaemia (ALL). "Nelarabine is a prodrug - a precursor of the active drug - that does not become effective until it is combined with three phosphate groups in the leukaemia cell," explains Professor Cinatl. "In studies of various ALL cell lines and leukaemia cells from ALL patients, we have been able to demonstrate that the enzyme SAMHD1 splits off the phosphate groups, so that the medicine loses its effect." Because B-ALL cells contain more SAMHD1 than T-ALL cells, nelarabine is less effective against B-ALL.

These results could improve the treatment of ALL in the future. In rare cases, B-ALL cells contain very little SAMHD1, so that treatment with nelarabine would be possible. Conversely, there are also rare cases of T-ALL exhibiting a lot of SAMHD1; in such cases, the otherwise effective nelarabine would not be the right medication. Professor Michaelis observes: "SAMHD1 is thus a biomarker that allows us to better adapt treatment with nelarabine to the individual situation of ALL patients."

Tamara Rothenburger, whose doctoral dissertation was funded by the association "Hilfe für krebskranke Kinder Frankfurt e.V.", is satisfied when she looks back at her research. "I hope that many children with leukaemia will benefit from the results." The research was also supported by the Frankfurt Stiftung für krebskranke Kinder, and the research group also includes members from Ludwig-Maximilians-Universität Munich and University College London.

Credit: 
Goethe University Frankfurt

Genomes front and center of rare disease diagnosis

Cambridge UK, 24 June 2020: A research programme pioneering the use of whole genome sequencing in the NHS has diagnosed hundreds of patients and discovered new genetic causes of disease. Whole genome sequencing is the technology used by the 100,000 Genomes Project, a government-established service that aims to introduce routine genetic diagnostic testing in the NHS.

The present study, led by researchers at the National Institute for Health Research BioResource together with Genomics England, demonstrates that sequencing the whole genomes of large numbers of individuals in a standardised way can improve the diagnosis and treatment of patients with rare diseases.

The researchers studied the genomes of groups of patients with similar symptoms, affecting different tissues, such as the brain, eyes, blood or the immune system. They identified a genetic diagnosis for 60% of individuals in one group of patients with early loss of vision.

The programme, the results of which were published today in two articles in the journal Nature, offered whole-genome sequencing as a diagnostic test to patients with rare diseases across an integrated health system, a world first in clinical genomics. The integration of genetic research with NHS diagnostic systems increases the likelihood that a patient will receive a diagnosis and the chance that a diagnosis will be provided within weeks rather than months.

"Around 40,000 children are born each year with a rare inherited disease in the UK alone. Sadly, it takes more than two years, on average, for them to be diagnosed," says Willem Ouwehand, Professor of Experimental Haematology at the University of Cambridge, the National Institute for Health Research BioResource and NHS Blood and Transplant Principal Investigator. "We felt it was vital to shorten this odyssey for patients and parents."

"This research shows that quicker and better genetic diagnosis will be possible for more NHS patients."

In the study, funded principally by the National Institute for Health Research, the entire genomes of almost 10,000 NHS patients with rare diseases were sequenced and searched for genetic causes of their conditions. Previously unobserved genetic differences causing known rare diseases were identified, in addition to genetic differences causing completely new genetic diseases.

The team identified more than 172 million genetic differences in the genomes of the patients, many of which were previously unknown. Most of these genetic differences have no effect on human health, so the researchers used new statistical methods and powerful supercomputers to search for the differences which cause disease - a few hundred 'needles in the haystack'.

In one study from the programme, published as a standalone article in Nature, researchers examined 886 patients with primary immunodeficiency - a condition that affects the ability of the immune system to fight infections by microbes - and identified four novel associated genes.

"Providing the best treatment and the most appropriate care for patients with inherited immune disorders depends absolutely on a conclusive molecular diagnosis," says Professor Adrian Thrasher of the UCL Great Ormond Street Institute of Child Health (ICH) in London. "Our study demonstrates the value of whole-genome sequencing in this context and provides a suite of new diagnostic tools, some of which have already led to improved patient care."

Using a new analysis method developed specifically for the project, the team identified 95 genes in which rare genetic differences are statistically very likely to be the cause of rare diseases. Genetic differences in at least 79 of these genes have been shown definitively to cause disease.

The team searched for rare genetic differences in almost all of the 3.2 billion DNA letters that make up the genome of each patient. This contrasts with current clinical genomics tests, which usually examine a small fraction of the letters, where genetic differences are thought most likely to cause disease. By searching the entire genome researchers were able to explore the 'switches and dimmers' of the genome - the regulatory elements in DNA that control the activity of the thousands of genes.

The team showed that rare differences in these switches and dimmers, rather than disrupting the gene itself, affect whether or not the gene can be switched on at the correct intensity. Identifying genetic changes in regulatory elements that cause rare disease is not possible with the clinical genomics tests currently used by health services worldwide. It is only possible if the whole of the genetic code is analysed for each patient.

"We have shown that sequencing the whole genomes of patients with rare diseases routinely within a health system provides a more rapid and sensitive diagnostic service to patients than the previous fragmentary approach, and, simultaneously, it enhances genetics research for the future benefit of patients still waiting for a diagnosis," says Dr Ernest Turro from the University of Cambridge and the NIHR BioResource.

"Thanks to the contributions of hundreds of physicians and researchers across the UK and abroad, we were able to study patients in sufficient numbers to identify the causes of even very rare diseases."

Although individual rare diseases affect a very small proportion of the population, there exist thousands of rare diseases and, together, they affect more than three million people in the UK. To tackle this challenge, the NIHR BioResource created a network of 57 NHS hospitals which focus on the care of patients with rare diseases. Nearly 1,000 doctors and nurses working at these hospitals made the project possible by asking their patients and, in some cases, the parents of affected children to join the NIHR BioResource.

"In setting up the NIHR BioResource Project, we were taking uncharted steps in a determined effort to improve diagnosis and treatment for patients in the NHS and further afield," says Dr Louise Wood, Director of Science, Research and Evidence at the Department of Health and Social Care, who together with the Chief Medical Officer, Professor Chris Whitty, has overall responsibility for the National Institute for Health Research.

"The NIHR-funded researchers on this scientific report were part of those earliest discussions as we sought to ensure we could deliver the science and transform it into clinical practice across the NHS. This research has demonstrated that patients, their families and the health service can all benefit from placing genomic sequencing at the forefront of clinical care in appropriate settings.

"The pioneering work undertaken by the NHS in partnership with Genomics England and academic researchers across the UK has laid the foundation for applying the same genome test to patients with COVID-19, with the hope of finding clues why some patients experience such a severe form of this new disease."

Based on the emerging data from the present NIHR BioResource study and other studies by Genomics England, the UK government announced in October 2018 that the NHS will offer whole-genome sequencing analysis for all seriously ill children with a suspected genetic disorder, including those with cancer. The sequencing of whole genomes will expand to one million genomes per year by 2024.

Whole-genome sequencing will be phased in nationally for the diagnosis of rare diseases as the 'standard of care', ensuring equivalent care across the country.

The benefits include faster diagnosis for patients, reduced costs for health services, a better understanding for patients and their carers of the reasons behind their disease, and improved provision of treatment.

Credit: 
Don Powell Associates Ltd

Turning alcohol into key ingredients for new medicines

Chemists have found a way to turn alcohol into amino acids, the building blocks of life.

In a study published Monday in the journal Nature Chemistry, researchers explained the transformation, which involves selectively identifying and replacing molecular bonds with unprecedented precision. The finding may make it easier to create some medications by expanding the types of new amino acids that can be made to more quickly build those medicines.

"One of the coolest applications of this research is that we found a new way to make unnatural amino acids - sometimes used in medicines to target diseases while avoiding natural metabolism," said David Nagib, a professor of chemistry at The Ohio State University and senior author of the paper. "And we may be able to use these unnatural amino acids to build new complex molecules that target various diseases."

Amino acids, which make up our proteins, are also sometimes used as building blocks in medicines, but creating new, artificial ones with correct three-dimensional geometry in a laboratory for pharmaceutical purposes can be an expensive and lengthy process.

Alcohol, though, is plentiful and cheap.

To transform alcohol into amino acids, researchers played with alcohol at the atomic level. An alcohol molecule is made of three different elements - hydrogen, carbon and oxygen. The researchers found a way to break the bonds between specific carbon and hydrogen atoms and introduce a nitrogen atom - another of the most common elements found in nature and in medicines - a type of laboratory wizardry called "selective C-H functionalization."

"Carbon-hydrogen is the most ubiquitous bond - think of a field of grass in a park. Each piece of grass is a carbon-hydrogen bond, and the challenge of C-H functionalization is how do you pick the exact blade of grass you want to turn into a rose and ignore all the rest?" Nagib said. "How do you be selective about which bond you're transforming?"

Being able to choose the right bond is important. When chemists build new medications, they use molecules carefully assembled in a specific way, to target only a disease and not other biologically important machinery. Think of the molecules in humans, bacteria or viruses as individual locks, and medicines as a key: A good medicine, or key, fits only in the right lock.

"In alcohol, there are pairs of equal carbon-hydrogen bonds, but those bonds are not equal in their spatial arrangement on the molecule," Nagib said. "And now we can grab one of them over the others to make amines with various three-dimensional shapes, which will allow construction of new chemical structures to make drugs that may serve as a better key."

Credit: 
Ohio State University

Order out of disorder in ice

image: An illustration shows structural evolution of ice VII as a function of time at constant P-T conditions.

Image: 
Chuanlong Lin

The glass structure of a material is often believed to mimic its corresponding liquid. Polyamorphism between ices has been used as a guide to elucidate the properties of liquid water. But how many forms of amorphous ice are there? Do we understand how metastable high-pressure crystalline ice evolves towards the thermally stable low-density form? An international research team led by Chuanlong Lin and Wenge Yang from HPSTAR and John S. Tse from the University of Saskatchewan has revealed a multiple-step transformation mechanism using state-of-the-art time-resolved in situ synchrotron x-ray diffraction. A temperature/time-dependent kinetic pathway with three distinctive transitions was identified in the structural evolution from metastable crystalline ice (ice VII or ice VIII) to the thermodynamically stable ice I. These intermediate processes compete against each other, and the end result reflects their interplay. The work is published in PNAS.

Water plays a vital role in the origin of life on Earth. In the liquid phase, it exhibits many unusual properties. In the solid phase, ordinary ice also displays diverse phase transitions at high pressure. Many theoretical and experimental studies have been devoted to understanding the underlying inter-conversion mechanisms. So far, most experiments have been ex situ measurements on recovered samples and lack detailed information on the structural evolution accompanying the transformation. Previous studies have been hindered by technical difficulties in monitoring the rapid structural change over a broad pressure and temperature range.

In 2017, Lin and his colleagues overcame the experimental challenge. A series of studies was conducted to investigate ice transitions by combining in situ time-resolved x-ray diffraction, and remote pressure control with different ramp rates within a low-temperature cryostat. This capability allowed the suppression of thermally-driven crystalline-crystalline transitions [PNAS 115, 2010-2015(2018)]. Important insights into the complexity of the poly-amorphous transformations were obtained, such as the kinetically-controlled two-step amorphization in ice Ih [Phys. Rev. Lett. 119, 135701(2017)] and the successful venture into the no man's land [Phys. Rev. Lett. 121, 225703(2018)].

Now, the team seeks to answer the question: what exactly is the nature of the amorphous-amorphous transformation process? Using the newly developed techniques, they explored the "mirror" process, i.e., the reverse transformation from a metastable high-density crystalline ice (ice VII or ice VIII) to the ambient-stable ice I. They identified the temperature/time-dependent kinetic pathways and characterized the interplay and competition between the high-density amorphous (HDA) to low-density amorphous (LDA) transition and recrystallization. Contrary to previously reported ice VII (or ice VIII) -- LDA -- ice I transformation sequences, time-resolved measurements show a three-step process: initial transformation of ice VII to HDA, followed by an HDA -- LDA transition, and then crystallization of LDA into ice I. Both the amorphization of ice VII and the HDA to LDA transition show distinctive thermal activation mechanisms. Significantly, both processes exhibit Arrhenius behavior with a temperature-dependent duration time (τ) and a 'transition' temperature at around 110-115 K.
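Arrhenius behavior means the characteristic time τ of a thermally activated process depends exponentially on temperature, τ(T) = τ0·exp(Ea/(R·T)), so plotting ln τ against 1/T gives a straight line whose slope yields the activation energy Ea. A minimal sketch of extracting Ea by a least-squares fit; all numbers here are synthetic and purely illustrative, not the paper's measurements:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def fit_arrhenius(temps_K, taus):
    """Least-squares fit of ln(tau) = ln(tau0) + (Ea/R) * (1/T);
    returns (tau0, Ea in J/mol)."""
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(t) for t in taus]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
            / sum((x - xbar) ** 2 for x in xs)
    intercept = ybar - slope * xbar
    return math.exp(intercept), slope * R

# Synthetic durations generated from Ea = 30 kJ/mol, tau0 = 1e-10 s
# (illustrative values only).
Ea_true, tau0_true = 30_000.0, 1e-10
temps = [100.0, 105.0, 110.0, 115.0, 120.0]
taus = [tau0_true * math.exp(Ea_true / (R * T)) for T in temps]

tau0, Ea = fit_arrhenius(temps, taus)
print(f"Ea ≈ {Ea / 1000:.1f} kJ/mol")  # recovers ~30 kJ/mol
```

Because the synthetic data lie exactly on the Arrhenius line, the fit recovers the input activation energy; with real time-resolved data the scatter of ln τ about the line indicates how well a single activation mechanism describes the transition.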

Large-scale molecular-dynamics calculations also support the experimental findings. Furthermore, they show that the HDA to LDA transformation is continuous, despite the large density difference, and involves substantial displacements of water molecules at the nanoscale. This study presents a new perspective on the metastability and complexity of ice-transition kinetic pathways.

Credit: 
Center for High Pressure Science & Technology Advanced Research

Digital breast cancer detection technology does not improve outcomes

A new study in JNCI: Journal of the National Cancer Institute finds that breast cancer screening using digital mammography technology is not associated with improved health outcomes when compared to older film detection technology.

In 2000, the US FDA approved digital mammography technology. Studies suggested the new technology was potentially more specific in its findings. Proponents of the technology believed it would reduce the number of callbacks for positive findings, find more disease, and lead to fewer cancers diagnosed in between screenings (interval cancers).

Researchers conducted a systematic review and searched seven databases for publications that compared film to digital mammography within the same population of asymptomatic women. Researchers looked for evidence of improved health outcomes in the newer digital technology, by analyzing detection rates, recall rates (patients contacted for further testing), and cancers diagnosed in between scheduled screenings.

The meta-analysis included 24 studies with 16,583,743 screening examinations (10,968,843 film and 5,614,900 digital). After the transition from film to digital mammography, the cancer detection rate increased by 0.51 per 1,000 screens and the recall rate by 6.95 per 1,000 screens.
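A rate difference like "0.51 per 1,000 screens" is simply each group's event count scaled to a common denominator, then subtracted. A minimal sketch; the screening totals below are the ones reported above, but the detected-cancer counts are hypothetical, chosen only to illustrate the arithmetic:

```python
def rate_per_1000(events, screens):
    """Events per 1,000 screening examinations."""
    return 1000.0 * events / screens

# Screening totals from the meta-analysis; detection counts are
# hypothetical, for illustration only.
film_detected, film_screens = 49_500, 10_968_843
digital_detected, digital_screens = 28_200, 5_614_900

diff = rate_per_1000(digital_detected, digital_screens) \
     - rate_per_1000(film_detected, film_screens)
print(f"detection rate difference: {diff:+.2f} per 1,000 screens")
```

With these illustrative counts the difference comes out at about +0.51 per 1,000 screens, the same magnitude the study reports; the actual pooled estimates come from combining the 24 studies, not from raw totals like this.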

The researchers found that the small increase in cancer detection following the switch to digital mammography did not translate into a reduction in cancers diagnosed in between scheduled screenings.

The researchers conclude that while digital mammography is beneficial for medical facilities due to easier storage and handling of images, these results suggest the transition from film to digital mammography has not resulted in health benefits for screened women.

"While the transition from film to digital may have been beneficial for technological reasons and for efficiencies in service screening, our research shows the increase in cancer detection was largely attributable to more detection of DCIS (ductal carcinoma in situ), with little difference in invasive cancer detection," said the study's lead author, Rachel Farber. "At a time when new mammography and other imaging technologies are proposed for adoption in population screening, it is critical to carefully consider and evaluate the effect this could have on health outcomes."

Credit: 
Oxford University Press USA

Supply constraint from earthquakes in Japan in input-output analysis

image: Direct damage of a disaster and indirect damage among sectors in a supply chain of five sectors (as an example)

Image: 
Copyright © 2020, John Wiley and Sons

Many people can recall shocking news images of Japan sustaining earthquake damage. Between 1996 and September 2018, there were 155 earthquakes in Japan that resulted in human injuries. In 20 of these earthquakes, people were killed or went missing. In the Hyogo-ken Nanbu earthquake of 1995, 6,434 people were killed and three went missing. In 99 of the 155 earthquakes, damage to houses, school buildings, windows, and water and sewage pipes, as well as landslides, was recorded. Tsunamis occurred as a result of 18 of the 155 earthquakes.

Natural disasters cause damage to human life and also great disruption to economic activities. Among the economic activities affected by natural disasters are supply chains. As supply chains have grown increasingly complex, the risks of supply chain disruption have also become complex, highly entangled and harder to assess. Past disasters demonstrate the importance of forecasting economic damage from supply chain disruptions more accurately in order to structure risk-management schemes and minimize losses.

A study led by Senior Assistant Professor Michiyuki Yagi of Shinshu University used input-output analysis (IOA) to quantify economic damage associated with natural disasters, in particular earthquakes. IOA is effective in evaluating economic impact at the regional/sectoral level. The researchers concentrated on the exogenous (flow) damage so as to work with monthly or quarterly production statistics. They chose the Leontief price model, building on Ji Young Park (2007), which considered the supply constraint in the Ghosh price model and introduced the price elasticity of demand. This study modified Park's (2007) approach by using the Leontief price model instead of the Ghosh price model, and by using the loss of social surplus, rather than the change in production, as the measure of damage.

The loss of social surplus was used instead of the change in production because production (sales) is less informative as a damage index than profit (margin): sales can take any value irrespective of profit. Production also does not identify how much damage is passed on to each supplier (upstream sector) and buyer (downstream sector).
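In Leontief input-output analysis, each sector's total output x must cover both inter-sector inputs and final demand, x = Ax + d, where A holds the input coefficients; solving gives x = (I - A)⁻¹ d, the Leontief inverse applied to demand. A two-sector sketch of the quantity model (the coefficients below are made up for illustration; the study itself works with the Leontief price model and surplus-based damage, not this basic form):

```python
def solve_2x2(a, b):
    """Solve a 2x2 linear system a @ x = b by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - b[1] * a[0][1]) / det,
            (a[0][0] * b[1] - a[1][0] * b[0]) / det]

# Hypothetical technical coefficients: sector j needs A[i][j] units of
# sector i's output per unit it produces.
A = [[0.2, 0.3],
     [0.1, 0.4]]
demand = [100.0, 200.0]  # final demand per sector (illustrative)

# Leontief quantity model: x = (I - A)^-1 d
I_minus_A = [[1 - A[0][0], -A[0][1]],
             [-A[1][0], 1 - A[1][1]]]
output = solve_2x2(I_minus_A, demand)
print([round(v, 1) for v in output])  # [266.7, 377.8]
```

Note that total required output exceeds final demand because each unit produced consumes other sectors' output; a supply constraint in one sector therefore propagates damage through the same inter-sector structure.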

The researchers found that previous studies' estimates of indirect damage were similar to those of this study. The largest earthquakes in Japan tend to require economic assistance of 0.2 to 0.3 months of initial production immediately after the disaster within a damaged prefecture, and more than 0.5 months, or 50% of initial production, in total until the first temporary recovery, which occurs by the eighth month at the latest.

The Great East Japan Earthquake required twice as much (and twice as fast) economic assistance in Fukushima, Iwate, Miyagi, Ibaraki and Chiba prefectures. Cumulatively, these five prefectures suffered 25 months of production damage until the temporary recovery in the 37th month.

Credit: 
Shinshu University