Tech

UMD releases comprehensive review of the future of CRISPR technology in crops

image: CRISPR is often thought of as 'molecular scissors' used for precision breeding to cut DNA so that a certain trait can be removed, replaced, or edited, but Yiping Qi, assistant professor in Plant Science & Landscape Architecture at the University of Maryland, is looking far beyond these traditional applications in his latest publication in Nature Plants.

Image: 
National Institutes of Health, public domain

CRISPR is often thought of as "molecular scissors" used for precision breeding to cut DNA so that a certain trait can be removed, replaced, or edited, but Yiping Qi, assistant professor in Plant Science & Landscape Architecture at the University of Maryland, is looking far beyond these traditional applications in his latest publication in Nature Plants. In this comprehensive review, Qi and coauthors in his lab explore the current state of CRISPR in crops, and how scientists can use CRISPR to enhance traditional breeding techniques in nontraditional ways, with the goal of ensuring global food and nutritional security and feeding a growing population in the face of climate change, diseases, and pests.

With this new paper, Qi highlights recent achievements in applying CRISPR to crop breeding and ways in which these tools have been combined with other breeding methods to achieve goals that may not have been possible in the past. He aims to give a glimpse of what CRISPR holds for the future, beyond the scope of basic gene editing.

"When people think of CRISPR, they think of genome editing, but in fact CRISPR is really a versatile system that allows you to home in on a lot of things to target, recruit, or promote certain aspects already in the DNA," says Qi. "You can regulate activation or suppression of certain genes by using CRISPR not as a cutting tool, but instead as a binding tool to attract activators or repressors to induce traits."

Additionally, Qi discusses the prospect of recruiting proteins that can help to visualize DNA sequences, and the potential for grouping desirable traits together in the genome. "I call this gene shuffling," says Qi. "This is designed to move very important trait genes close to each other to physically and genetically link them so they always stick together in traditional crossbreeding, making it much easier to select for crops with all the traits you want."

These are just some of the examples of future directions Qi hopes to cultivate and draw more attention to with this paper. "I hope this review [in Nature Plants] will open eyes to show that there is a lot to be offered by CRISPR, going beyond the current status of genome editing, but also outside of just editing to see where the whole field can lead down the road."

This includes the process of taking CRISPR applications in animals and humans and applying them to crops in ways that haven't been done before. For example, CRISPR technology has already enhanced screenings for genes and traits in human health by using a library of tens of thousands of guide molecules that are tailored for targeting selected gene sets at the genome scale. This system could potentially be used in plants to screen for traits that contribute to disease and pest resistance, resiliency, and crop yield. "This not only helps with breeding, but also helps categorize gene functioning much more easily," says Qi. "Mostly, these studies have been done in human cells, and crops are lagging behind. I see that as one future aspect of where plant science can harness some of these different applications, and my lab has already been doing some of this work."
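
To make the idea of a genome-scale guide library concrete, the sketch below shows, with entirely hypothetical guide counts and gene names, how read counts from a pooled CRISPR screen are typically summarized per gene; it is not code from Qi's lab, and it omits real-world steps such as sequencing-depth normalization and statistical testing.

```python
# Minimal sketch of scoring a pooled CRISPR screen (hypothetical data, not from the study).
# Guides targeting the same gene are counted before and after selection; genes whose guides
# are consistently enriched in the selected population are candidate trait genes.
import math
from collections import defaultdict
from statistics import median

# guide_id -> (target gene, reads in the control population, reads in the selected population)
guide_counts = {
    "g1": ("DiseaseR1", 520, 2100),
    "g2": ("DiseaseR1", 480, 1800),
    "g3": ("YieldQTL7", 610, 640),
    "g4": ("YieldQTL7", 590, 550),
}

def log2_fold_change(control, selected, pseudocount=1.0):
    """Per-guide enrichment in the selected population (the pseudocount avoids log of zero)."""
    return math.log2((selected + pseudocount) / (control + pseudocount))

per_gene = defaultdict(list)
for gene, ctrl, sel in guide_counts.values():
    per_gene[gene].append(log2_fold_change(ctrl, sel))

# Score each gene by the median enrichment of its guides and rank the candidates.
for gene, lfcs in sorted(per_gene.items(), key=lambda kv: median(kv[1]), reverse=True):
    print(f"{gene}: median log2 fold change = {median(lfcs):.2f}")
```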

Qi's lab has published multiple original research papers this year that highlight some of the differences between CRISPR applications in human and plant cells. Earlier this year in Molecular Plant, Qi, his graduate student, interns, and collaborators published findings testing the targeting scope and specificity of multiple CRISPR-Cas9 variants. Qi's team sought to test, in rice, claims about the fidelity and specificity of these tools that had originally been made in human cells. "Not all claims that are made for CRISPR functionality in humans and animals are going to be true or applicable in plants, so we are looking at what works and what we can do to optimize these tools for crops."

Another recent paper in BMC Biology, part of a collaborative research effort, investigated temperature as a way to improve the efficiency of CRISPR-Cas12a genome editing in rice, maize, and Arabidopsis; the system was found to need higher-than-ambient temperatures to boost editing efficiency. "Human cells are always maintained at a higher temperature, which may be optimal for CRISPR to work, but is pretty hot for plants," says Qi. "We have to explore how that temperature plays a role for CRISPR applications in other species."

Qi also published the first ever book dedicated entirely to Plant Genome Editing with CRISPR Systems, highlighting cutting-edge methods and protocols for working with CRISPR in a variety of crops.

"This book is really gathering together specific applications for many different plant systems, such as rice, maize, soybeans, tomatoes, potatoes, lettuce, carrots - you name it - so that people working in their own plant of interest may find some chapters quite relevant. It is designed as a protocol book for use in the lab, so that anybody new to the field should be able to figure out how to work with CRISPR in their particular plant." Qi was contacted by the publisher in the United Kingdom, Humana Press, to produce and edit the book. It was released earlier this year as part of the Methods in Molecular Biology book series, a prominent and respected series in the field.

"How to feed the world down the road - that's what motivates me every day to come to work," says Qi. "We will have 10 billion people by 2050, and how can we sustain crop improvement to feed more people sustainably with climate change and less land? I really think that technology should play a big role in that."

Credit: 
University of Maryland

Coupled exploration of light and matter

image: White-light reflectivity spectra recorded around a filling factor of 2/3, revealing clear signatures of optical coupling to the quantum Hall state.

Image: 
ETH Zurich/D-PHYS Patrick Knüppel

The concept of 'quasiparticles' is a highly successful framework for the description of complex phenomena that emerge in many-body systems. One species of quasiparticles that in particular has attracted interest in recent years are polaritons in semiconductor materials. These are created by shining light onto a semiconductor, where the photons excite electronic polarization waves, called excitons. The creation process is followed by a period during which the dynamics of the system can be described as that of a particle-like entity that is neither light nor matter, but a superposition of the two. Only once those mixed light-matter quasiparticles decay -- typically on the timescale of picoseconds -- do the photons gain back their individual identity. Writing in the journal Nature, Patrick Knüppel and colleagues from the group of Professor Ataç Imamoglu in the Department of Physics at ETH Zurich now describe experiments in which the released photons reveal unique information about the semiconductor they have just left; at the same time the photons have been modified in ways that would not have been possible without interacting with the semiconductor material.

Teaching photons new tricks

Much of the recent interest in polaritons comes from the prospect that they open up intriguing new capabilities in photonics. Specifically, polaritons provide a means to let photons do something that photons cannot do on their own: interact with one another. Rays of light normally simply pass through each other. By contrast, photons that are bound in polaritons can interact through the matter part of the latter. Once that interaction can be made sufficiently strong, the properties of photons can be harnessed in new ways, for example for quantum information processing or in novel optical quantum materials. However, achieving interactions strong enough for such applications is no mean feat.

It starts with creating polaritons in the first place. The semiconductor material hosting the electronic system has to be placed in an optical cavity, to facilitate strong coupling between matter and light. Creating such structures is something Imamoglu's group has perfected over the years, in collaboration with others, in particular with the group of Professor Werner Wegscheider, also at the Department of Physics of ETH Zurich. A separate challenge is to make the interaction between polaritons strong enough that they have a sizeable effect during the short lifetime of the quasiparticles. How to achieve such strong polariton-polariton interaction is currently a major open problem in the field, hindering progress towards practical applications. And here Knüppel et al. have now made a substantial contribution with their latest work.

Hall-marks of strong interaction

The ETH physicists have found an unexpected way to enhance the interaction between polaritons, namely by suitably preparing the electrons with which the photons are about to interact. Specifically, they started with the electrons being initially in the so-called fractional quantum Hall regime, where electrons are confined to two dimensions and exposed to a high magnetic field, to form highly correlated states entirely driven by electron-electron interactions. For particular values of the applied magnetic field -- which determines the so-called filling factor characterising the quantum Hall state -- they observed that photons shone onto and reflected from the sample showed clear signatures of optical coupling to quantum Hall states (see the figure).
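
For background, the filling factor mentioned here has a standard textbook definition that is not specific to this paper: it counts how many electrons the two-dimensional system contains per quantum of magnetic flux threading it.

```latex
% Landau-level filling factor (general background): the ratio of the 2D electron density n_e
% to the density of magnetic flux quanta, with the flux quantum \Phi_0 = h/e.
\nu = \frac{n_e}{B/\Phi_0} = \frac{n_e\,h}{e\,B}
```

At fractional values such as 2/3, the filling factor probed in the reflectivity measurements shown in the figure, the electrons form the strongly correlated states described above.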

Importantly, the dependence of the optical signal on the filling factor of the electron system also appeared in the nonlinear part of the signal, a strong indicator that the polaritons have interacted with one another. In the fractional quantum Hall regime, the polariton-polariton interactions were up to a factor of ten stronger than in experiments with the electrons outside that regime. That enhancement by one order of magnitude is a significant advance relative to current capabilities, and might be enough to enable key demonstrations of 'polaritonics' (such as strong polariton blockade), not least because in the experiments of Knüppel et al. the increase in interactions does not come at the expense of the polariton lifetime, in contrast to many previous attempts.

The power, and challenges, of nonlinear optics

Beyond the implications for manipulating light, these experiments also take the optical characterisation of many-body states of two-dimensional electron systems to a new level. They establish how to separate the weak nonlinear contribution to the signal from the dominant linear one. This has been made possible through a new type of experiment that the ETH researchers have developed. A major challenge was to deal with the requirement of having to illuminate the sample with relatively high-power light, to tease out the weak nonlinear signal. To ensure that the photons impinging on the semiconductor do not cause unwanted modifications to the electron system -- in particular, ionization of trapped charges -- the Imamoglu-Wegscheider team designed a sample structure that has reduced sensitivity to light, and they performed experiments with pulsed rather than continuous excitation, to minimize exposure to light.
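
The article does not spell out how the two contributions are separated, but the generic principle can be written in one line: if the measured signal is expanded in powers of the incident power, the dominant linear term can be cancelled by combining measurements taken at different powers. The expression below is an illustrative scheme under that assumption, not necessarily the exact procedure used by the ETH team.

```latex
% Generic power-series separation of a weak nonlinear response (illustrative only):
S(P) \approx \chi_1 P + \chi_2 P^2
\quad\Longrightarrow\quad
2\,S(P) - S(2P) \approx -2\,\chi_2 P^2 ,
% so the combination on the left is insensitive to the dominant linear term \chi_1 P.
```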

The toolset now developed to measure the nonlinear optical response of quantum Hall states should enable novel insight beyond what is possible with linear optical measurements or in the traditionally used transport experiments. This is welcome news for those studying the interplay between photonic excitations and two-dimensional electron systems -- a field in which there is no lack of open scientific problems.

Credit: 
ETH Zurich Department of Physics

Genetic study reveals metabolic origins of anorexia

A global study, led by researchers at King's College London and University of North Carolina at Chapel Hill, suggests that anorexia nervosa is at least partly a metabolic disorder, and not purely psychiatric as previously thought. The research was published in Nature Genetics today.

The large-scale genome-wide association study, undertaken by over 100 academics worldwide, identified eight genetic variants linked to anorexia nervosa. The results suggest that the genetic origins of the disorder are both metabolic and psychiatric.

Anorexia nervosa is a serious and potentially life-threatening illness. Symptoms of anorexia can include a dangerously low body weight, an intense fear of gaining weight, and a distorted body image. Anorexia nervosa affects 1-2% of women and 0.2-0.4% of men and has the highest mortality rate of any psychiatric illness.

The researchers combined data collected by the Anorexia Nervosa Genetics Initiative and the Eating Disorders Working Group of the Psychiatric Genomics Consortium. The resulting dataset included 16,992 cases of anorexia nervosa and 55,525 controls, from 17 countries across North America, Europe, and Australasia.

The key findings of the study are:

The genetic basis of anorexia nervosa overlaps with metabolic (including glycemic), lipid (fats), and anthropometric (body measurement) traits, and the study shows that this is independent of genetic effects that influence body mass index (BMI).

The genetic basis of anorexia nervosa overlaps with other psychiatric disorders such as obsessive-compulsive disorder, depression, anxiety, and schizophrenia.

Genetic factors associated with anorexia nervosa also influence physical activity, which could explain the tendency for people with anorexia nervosa to be highly active.

Dr Gerome Breen, from the National Institute for Health Research (NIHR) Maudsley Biomedical Research Centre and the Institute of Psychiatry, Psychology & Neuroscience, at King's College London, who co-led the study, commented: "Metabolic abnormalities seen in patients with anorexia nervosa are most often attributed to starvation, but our study shows metabolic differences may also contribute to the development of the disorder. Furthermore, our analyses indicate that the metabolic factors may play nearly or just as strong a role as purely psychiatric effects."

Professor Janet Treasure, also from the Institute of Psychiatry, Psychology & Neuroscience, King's College London, said: "Over time there has been uncertainty about the framing of anorexia nervosa because of the mixture of physical and psychiatric features. Our results confirm this duality and suggest that integrating metabolic information may help clinicians to develop better ways to treat eating disorders."

Professor Cynthia Bulik, from the University of North Carolina, said: "Our findings strongly encourage us to shine the torch on the role of metabolism to help understand why some individuals with anorexia nervosa drop back to dangerously low weights, even after hospital-based refeeding."

The study concludes that anorexia nervosa may need to be thought of as a hybrid 'metabo-psychiatric disorder' and that it will be important to consider both metabolic and psychological risk factors when exploring new avenues for treating this potentially lethal illness.

Andrew Radford, Chief Executive of Beat, the eating disorder charity, said: "This is ground-breaking research that significantly increases our understanding of the genetic origins of this serious illness. We strongly encourage researchers to examine the results of this study and consider how it can contribute to the development of new treatments so we can end the pain and suffering of eating disorders."

Credit: 
King's College London

NASA's Aqua satellite documents the brief life of Tropical Depression 4E

image: On July 14, 2019, NASA's Aqua satellite captured the short-lived Tropical Depression 04E weakening in the Eastern Pacific Ocean.

Image: 
NASA Worldview, Earth Observing System Data and Information System (EOSDIS)

The Eastern Pacific Ocean generated the fourth tropical cyclone of the hurricane season on July 13 and by the next day, it had already weakened into a remnant low pressure area.

Tropical Depression 4E formed around 11 a.m. EDT (1500 UTC) on Saturday, July 13. At that time the center of Tropical Depression Four-E was located near latitude 17.3 degrees north and longitude 111.0 degrees west. That's about 395 miles (635 km) south of the southern tip of Baja California, Mexico. Maximum sustained winds were near 35 mph (55 kph).

On Sunday, July 14 at 5 a.m. EDT (0900 UTC), the center of Tropical Depression Four-E was located near latitude 18.2 degrees north and longitude 114.8 degrees west. At that time the depression was moving toward the west-northwest near 14 mph (22 kph), with a turn to the west expected later that day or night. Maximum sustained winds were near 30 mph (45 kph) with higher gusts, and the estimated minimum central pressure was 1007 millibars (29.74 inches). The depression weakened to a remnant low pressure area later in the day.

NASA's Aqua satellite passed over the Eastern Pacific Ocean later in the day and the Moderate Resolution Imaging Spectroradiometer or MODIS instrument aboard provided forecasters with a visible image of the depression. The image revealed an eastern quadrant devoid of clouds and rainfall, and the bulk of clouds were pushed northeast of the center. That's because the storm encountered southeasterly vertical wind shear (outside winds blowing at different speeds at various levels in the atmosphere) that helped weaken the storm.

On July 15, the remnant low pressure area of Tropical Depression Four-E was centered near 19 degrees north latitude and 118 degrees west longitude. The National Hurricane Center noted, "This remnant low will dissipate today."

Credit: 
NASA/Goddard Space Flight Center

High-performance sodium ion batteries using copper sulfide

image: This is a schematic model demonstrating the formation of grain boundaries and phase interfaces.

Image: 
KAIST

Researchers presented a new strategy for extending sodium ion batteries' cyclability using copper sulfide as the electrode material. This strategy has led to high-performance conversion reactions and is expected to advance the commercialization of sodium ion batteries as they emerge as an alternative to lithium ion batteries.

Professor Jong Min Yuk's team confirmed the stable sodium storage mechanism using copper sulfide, a superior electrode material that is pulverization-tolerant and induces capacity recovery. Their findings suggest that when employing copper sulfide, sodium ion batteries will have a lifetime of more than five years with one charge per day. Even better, copper sulfide is composed of abundant natural elements such as copper and sulfur, making it more cost-competitive than the lithium and cobalt used in lithium ion batteries.
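
As a quick sanity check on what that lifetime claim means in battery terms, the arithmetic below converts "more than five years at one charge per day" into an equivalent cycle count; this is an illustration of the stated assumption only, and the actual cycling data are reported in the team's paper.

```python
# Convert the stated lifetime claim into an equivalent number of charge-discharge cycles.
# (Illustrative arithmetic only; assumes exactly one full cycle per day.)
years = 5
cycles_per_day = 1
days_per_year = 365

equivalent_cycles = years * days_per_year * cycles_per_day
print(f"{years} years at {cycles_per_day} cycle/day is about {equivalent_cycles} cycles")  # ~1825 cycles
```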

Intercalation-type materials such as graphite, which serve as commercialized anode materials in lithium ion batteries, have not been viable for high-capacity sodium storage due to their insufficient interlayer spacing. Thus, conversion- and alloying-type materials have been explored to achieve higher capacity at the anode. However, those materials generally undergo large volume expansion and abrupt crystallographic changes, which lead to severe capacity degradation.

The team confirmed that semi-coherent phase interfaces and grain boundaries in conversion reactions played key roles in enabling pulverization-tolerant conversion reactions and capacity recovery, respectively.

Most conversion- and alloying-type battery materials experience severe capacity degradation because their crystal structures and volumes change drastically over the course of the reactions. Copper sulfide, however, underwent a gradual crystallographic change that produced semi-coherent interfaces, which ultimately prevented the pulverization of particles. Based on this unique mechanism, the team confirmed that copper sulfide exhibits a high capacity and high cycling stability regardless of its size and morphology.

Professor Yuk said, "Employing copper sulfide can advance sodium ion batteries, which could contribute to the development of low-cost energy storage systems and address the micro-dust issue."

Credit: 
The Korea Advanced Institute of Science and Technology (KAIST)

Fluorine speeds up two-dimensional materials growth

image: These are SEM images of graphene domains growing. They showed that 2 seconds was enough for a domain to grow to ~400 μm and that ~1 mm domains were formed after 5 seconds. The statistical growth rate is more than three orders of magnitude faster than typical graphene growth and three times faster than the previous record realized with a continuous oxygen supply.

Image: 
IBS

Back in 2004, the physics community was just beginning to recognize the existence of a truly two-dimensional (2D) material, graphene. Fast forward to 2019, and scientists are exploring a breadth of different 2D materials, expecting to uncover more of their fundamental properties. The frenzy behind these new 2D materials lies in their fascinating properties: materials thinned down to only a few atoms work very differently from their 3D counterparts. Electrons packed into these thinnest-ever layers show characteristics quite distinct from those of electrons moving through a looser, three-dimensional network. Being flexible as well, 2D materials could feature distinctive electrical properties, opening up new applications for next-generation technologies, such as bendable and wearable devices.

Then, what is the catch? Many parameters such as temperature, pressure, precursor type, and flow rate need to be factored into the chemical vapor deposition (CVD) synthesis of 2D materials. With multiple reactions involved, it is extremely difficult to optimize all these factors during the reactions and find their best combinations. As a result, 2D material synthesis is hard to control. Scientists have tried to accelerate the growth of 2D materials by adopting different substrates, feedstocks, or temperatures. Still, only a few types of 2D materials can be synthesized into large-area, high-quality films.

Scientists from the Center for Multidimensional Carbon Materials (CMCM), within the Institute for Basic Science (IBS) at the Ulsan National Institute of Science and Technology (UNIST), in cooperation with researchers at Peking University (PKU) and the University of Electronic Science and Technology of China (UESTC), demonstrated that fluorine, the element with the strongest tendency to attract electrons (i.e. the highest electronegativity), can speed up the chemical reactions used to grow three representative 2D materials: graphene, h-BN, and WS2. Fluorine needs only one additional electron to attain a stable configuration, and with seven electrons already in its outermost shell it holds its valence electrons closer to the nucleus, and more tightly, than any other element. This is what makes fluorine the most active element in the periodic table.

In fact, active gases such as hydrogen or oxygen are broadly used to tune the growth of graphene and other 2D materials. "Why not then the most active element, fluorine? The highest electronegativity allows fluorine to form bonds with nearly all the atoms in the periodic table, so it is expected to change the reaction routes of many chemical processes," said Professor Feng Ding, the corresponding author of this study.

Experimentally, it is not desirable to introduce fluorine gas directly during a material's growth, as it is highly toxic inside the reactor. To resolve the problem, instead of using fluorine gas directly, the scientists spatially confined the fluorine supply so that only a minimal amount of fluorine is consumed. They placed a metal fluoride substrate (MF2) below a Cu foil with a very narrow gap in between. At high temperature, fluorine radicals are released from the fluoride surface and spatially trapped in the narrow gap between the Cu foil and the metal fluoride substrate. Surprisingly, such a simple change leads to a record graphene growth rate of 12 mm per minute. To put this rate in perspective, the new approach reduces the time needed to grow a 10 cm2 graphene film from 10 minutes with previous methods to only 3 minutes.
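
The figures quoted above and in the image caption can be cross-checked with a few lines of unit conversion; the sketch below uses only the domain sizes and growth times already given.

```python
# Unit-conversion check of the graphene growth rate quoted in the article.
# Each entry is (domain size in metres, growth time in seconds), taken from the image caption.
observations = [(400e-6, 2.0), (1e-3, 5.0)]

for size_m, time_s in observations:
    rate_mm_per_min = (size_m / time_s) * 1000 * 60
    print(f"{size_m * 1e6:.0f} um in {time_s:.0f} s -> {rate_mm_per_min:.0f} mm/min")

# Both observations give ~12 mm/min; if typical CVD graphene grows at micrometres per minute,
# that is roughly the "three orders of magnitude" speed-up mentioned in the image caption.
```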

The introduction of local fluorine entirely changes the methane decomposition route. As the fluorine released from the metal fluoride surface easily reacts with methane gas, there will be a sufficient amount of CH3F or CH2F2 molecules in the gap between Cu and BaF2 substrates. These molecules could decompose on a Cu surface much more easily than CH4 does. In other words, they feed the graphene growth better by supplying more active carbon radicals (i.e. CH3, CH2, CH and C).

Further experimental studies showed that the local fluorine supply strategy could greatly accelerate the growth of other 2D materials, such as h-BN and WS2, as well. The scientists then investigated how spatially confined fluorine is able to accelerate the growth of 2D materials. Theoretical studies revealed that fluorine, being highly reactive, readily interacts with methane molecules. The presence of fluorine leads to the formation of CH3F or CH2F2 molecules. These highly active molecules can then be decomposed on the Cu foil surface much more easily, which greatly accelerates the carbon supply for fast graphene growth.

Although the detailed mechanism by which fluorine boosts the growth of h-BN and WS2 is not yet clear, the authors are confident that the presence of fluorine can significantly modify the reactions involved in 2D materials' growth. "We envision that this local fluorine supply will readily facilitate fast growth of a broad range of 2D materials, or enable the growth of new 2D materials that are very difficult to realize by other methods," noted Professor Feng Ding. In addition to fluorides, there are abundant kinds of substrates, such as sulphides, selenides, chlorides, or bromides, that might be used as local supply sources of different active species, providing a wide platform for modulating the growth of a broad range of 2D materials.

Credit: 
Institute for Basic Science

Maternal secrets of our earliest ancestors unlocked

image: Australopithecus africanus impression by Jose Garcia and Renaud Joannes-Boyau, Southern Cross University.

Image: 
Jose Garcia and Renaud Joannes-Boyau, Southern Cross University.

New research brings to light for the first time the evolution of maternal roles and parenting responsibilities in one of our oldest evolutionary ancestors

Australopithecus africanus mothers breastfed their infants for the first 12 months after birth, and continued to supplement their diets with breastmilk during periods of food shortage

Tooth chemistry analyses enable scientists to 'read' more than two-million-year-old teeth

Finding demonstrates why early human ancestors had fewer offspring and extended parenting role

Extended parental care is considered one of the hallmarks of human evolution. A stunning new research result published today in Nature reveals for the first time the parenting habits of one of our earliest extinct ancestors.

Analysis of more than two-million-year-old teeth from Australopithecus africanus fossils found in South Africa has revealed that infants were breastfed continuously from birth to about one year of age. Nursing then appears to have continued in a cyclical pattern through the infants' early years; seasonal changes and food shortages led mothers to supplement gathered foods with breastmilk. An international research team led by Dr Renaud Joannes-Boyau of Southern Cross University, and by Dr Luca Fiorenza and Dr Justin W. Adams from Monash University, published the details of their research into the species in Nature today.

"For the first time, we gained new insight into the way our ancestors raised their young, and how mothers had to supplement solid food intake with breastmilk when resources were scarce," said geochemist Dr Joannes-Boyau from the Geoarchaeology and Archaeometry Research Group (GARG) at Southern Cross University.

"These finds suggest for the first time the existence of a long-lasting mother-infant bond in Australopithecus. This makes us rethink the social organisation of our earliest ancestors," said Dr Fiorenza, who is an expert in the evolution of human diet at the Monash Biomedicine Discovery Institute (BDI).

"Fundamentally, our discovery of a reliance by Australopithecus africanus mothers to provide nutritional supplementation for their offspring and use of fallback resources highlights the survival challenges that populations of early human ancestors faced in the past environments of South Africa," said Dr Adams, an expert in hominin palaeoecology and South African sites at the Monash BDI.

For decades there has been speculation about how early ancestors raised their offspring. With this study, the research team has opened a new window into our enigmatic evolutionary history.

Australopithecus africanus lived from about two to three million years ago during a period of major climatic and ecological change in South Africa, and the species was characterised by a combination of human-like and retained ape-like traits. While the first fossils of Australopithecus were found almost a century ago, scientists have only now been able to unlock the secrets of how they raised their young, using specialised laser sampling techniques to vaporise microscopic portions on the surface of the tooth. The gas containing the sample is then analysed for chemical signatures with a mass spectrometer, enabling researchers to develop microscopic geochemical maps which can tell the story of the diet and health of an individual over time. Dr Joannes-Boyau conducted the analyses at the Geoarchaeology and Archaeometry Research Group at Southern Cross University in Lismore NSW and at the Icahn School of Medicine at Mount Sinai in New York.

Teeth grow similarly to trees; they form by adding layer after layer of enamel and dentine tissues every day. Thus, teeth are particularly valuable for reconstructing the biological events occurring during the early period of life of an individual, simply because they preserve precise temporal changes and chemical records of key elements incorporated in the food we eat.

By developing these micro geochemical maps, the researchers are able to 'read' successive bands of daily signal in teeth, which provide insights into food consumption and stages of life. Previously, the team had revealed the nursing behaviour of our closest evolutionary relatives, the Neanderthals. With this latest study, the international team has analysed teeth that are more than ten times older than those of Neanderthals.

"We can tell from the repetitive bands that appear as the tooth developed that the fallback food was high in lithium, which is believed to be a mechanism to reduce protein deficiency in infants who are more prone to adverse effects during growth periods," Dr Joannes-Boyau said.

"This likely reduced the potential number of offspring, because of the length of time infants relied on a supply of breastmilk. The strong bond between mothers and offspring for a number of years has implications for group dynamics, the social structure of the species, relationships between mother and infant and the priority that had to be placed on maintaining access to reliable food supplies," he said.

"This finding underscores the diversity, variability and flexibility in habitats and adaptive strategies these australopiths used to obtain food, avoid predators, and raise their offspring," Dr Adams emphasised.

"This is the first direct proof of maternal roles of one of our earliest ancestors and contributes to our understanding of the history of family dynamics and childhood," concluded Dr Fiorenza.

The team will now work on species that have evolved since, to develop the first comprehensive record of how infants were raised throughout history.

Credit: 
Monash University

Mastering a prickly problem in ferrofluids

video: KAUST researchers Dominik Michels and Libo Huang have performed computer simulations to accurately capture the behavior of ferrofluids.

To see the full simulation video produced by the group visit https://www.youtube.com/watch?v=GSaipfkGpVs.

© 2019 KAUST

Image: 
© 2019 KAUST

Ferrofluids, with their mesmeric display of shape-shifting spikes, are a favorite exhibit in science shows. These eye-catching examples of magnetic fields in action could become even more dramatic through computational work that captures their motion.

A KAUST research team has now developed a computer model of ferrofluid motion that could be used to design even grander ferrofluid displays. The work is a stepping stone to using simulation to inform the use of ferrofluids in a broad range of practical applications, such as medicine, acoustics, radar-absorbing materials and nanoelectronics.

Ferrofluids were developed by NASA in the 1960s as a way to pump fuels in low gravity. They comprise nanoscale magnetic particles of iron-laden compounds suspended in a liquid. In the absence of a magnetic field, ferrofluids possess a perfectly smooth surface. But when a magnet is brought close to the ferrofluid, the particles rapidly align with the magnetic field, forming the characteristic spiky appearance. If a magnetic object is placed in the ferrofluid, the spikes will even climb the object before cascading back down.

Because ferrofluid behavior can be counter-intuitive, simulation is the ideal way to understand their complex motion. Until now, however, the models have had several limitations, says Libo Huang, a Ph.D. student in Dominik Michels's Computational Sciences Group within KAUST's Visual Computing Center.

The first challenge was to eliminate singularities in the magnetic field of existing models, Huang says. Previous models typically handled magnetic-field simulation using magnets that are infinitely small. The closer two magnets are brought together, the stronger the magnetic attraction--thus, if a magnet is infinitely small, the magnetic field strength can become infinitely large. "The center of an infinitely small magnet is called its singularity," Huang says. Not only is the magnetic field difficult to measure at the magnet's center, but if two singularities come close together, the forces become so large that the simulation can crash. "We derived formulas to eliminate the singularities and create much more robust numeric schemes," Huang says.
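
The regularized formulas derived by the KAUST team are not reproduced in the article, but the underlying problem is easy to demonstrate: the force between two idealized point dipoles diverges as their separation shrinks. The sketch below evaluates the textbook coaxial dipole-dipole force and caps it with a generic smoothing length; the smoothing scheme is an illustrative assumption, not the formulation in the paper.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def coaxial_dipole_force(m1, m2, r, smoothing=0.0):
    """Attractive force between two aligned, head-to-tail point dipoles separated by r.

    The exact point-dipole result is F = 3*mu0*m1*m2 / (2*pi*r**4), which diverges as r -> 0
    (the 'singularity' described in the text). A nonzero smoothing length caps the force;
    this generic regularization is used here purely for illustration.
    """
    return 3 * MU0 * m1 * m2 / (2 * np.pi * (r**2 + smoothing**2) ** 2)

for r in (1e-2, 1e-3, 1e-4, 1e-5):  # separations in metres
    print(f"r = {r:.0e} m: singular = {coaxial_dipole_force(1, 1, r):.3e} N, "
          f"smoothed = {coaxial_dipole_force(1, 1, r, smoothing=1e-3):.3e} N")
```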

The team also found ways to increase computational efficiency by reducing the algorithmic complexity, allowing larger simulations to be run. When the team compared their model with wet lab experiments, it reproduced the true dynamic behavior of the ferrofluid, giving a good qualitative representation that will be useful for ferrofluid sculpture design. "This opens the door for further quantitative analysis," Huang says. Increasing the model's accuracy further would provide new insights into fundamental ferrofluid behavior and lead to many new uses, from electronic sensors and switches to deformable mirrors for advanced telescopes.

Credit: 
King Abdullah University of Science & Technology (KAUST)

NASA creates a flood proxy map of areas affected by Tropical Storm Barry

image: NASA's Advanced Rapid Imaging and Analysis (ARIA) team created this Flood Proxy Map (FPM) depicting areas of Louisiana that are likely flooded as a result of heavy rain and Tropical Storm Barry, shown by light blue pixels. The map covers an area of 220 by 236 miles (355 by 380 kilometers), shown by the large red polygon.

Image: 
NASA JPL, Sang-Ho Yun and Jungkyo Jung

Even before Tropical Storm Barry made landfall in Louisiana on Saturday, July 13, it had already dropped a lot of rain on the state. Using satellite data, NASA created a map that shows areas that are likely flooded.

"The Advanced Rapid Imaging and Analysis (ARIA) team at NASA's Jet Propulsion Laboratory (JPL) in Pasadena, Calif., created an ARIA Flood Proxy Map (FPM) on July 13, depicting areas of Louisiana that are likely flooded as a result of heavy rain and Tropical Storm Barry," said Judy Lai, project manager for ARIA at NASA's JPL. Those areas appear in light blue pixels on the map.

The map was created from synthetic aperture radar (SAR) data acquired on July 13, 2019 by the ALOS-2 satellite operated by the Japan Aerospace Exploration Agency (JAXA). It covers an area of 220 by 236 miles (355 by 380 kilometers), and each pixel measures about 27 yards (25 meters) across. The flood proxy map can be used as guidance to identify areas that are likely flooded, but it may be less reliable over urban and vegetated areas.
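
ARIA's production processing chain is not described in the article, but the general idea behind a SAR-derived flood proxy can be illustrated in a few lines: smooth open water reflects the radar pulse away from the sensor, so pixels whose backscatter drops sharply between pre-event and post-event images are flagged as likely flooded. The sketch below uses synthetic arrays and an arbitrary threshold as stand-ins for the real ALOS-2 processing.

```python
import numpy as np

# Toy SAR change-detection flood proxy (synthetic data; not ARIA's actual algorithm).
rng = np.random.default_rng(0)
pre_event = rng.uniform(0.2, 1.0, size=(100, 100))   # pre-event backscatter amplitude
post_event = pre_event.copy()
post_event[40:60, 20:80] *= 0.1                      # a patch where standing water flattens the scene

# Flooding shows up as a strong drop in backscatter; flag such pixels with a log-ratio threshold.
change_db = 10 * np.log10(post_event / pre_event)
likely_flooded = change_db < -6.0                    # threshold chosen only for this toy example

print(f"{likely_flooded.mean() * 100:.1f}% of pixels flagged as likely flooded")
```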

On July 15, the National Weather Service of New Orleans noted, "Heavy rain bands associated with Tropical Depression Barry will continue to impact the area today and tonight. Additional rainfall amounts of 1 to 3 inches will be possible in these heavy rain bands, especially north of the Interstate 10 and 12 corridors. Potential impacts include flash flooding due to excessive rainfall and rises on area rivers. If you live in a flood-prone area, take action to protect your property ahead of time. As always, have multiple ways to receive warnings and know where you will go if you need to move to higher ground."

At 11:14 a.m. EDT on July 15, there were many Flash Flood Warnings still in effect. They include:

Southeastern Rapides Parish in central Louisiana...

Northern Jefferson Davis Parish in southwestern Louisiana...

Beauregard Parish in southwestern Louisiana...

Evangeline Parish in central Louisiana...

Northwestern Acadia Parish in southwestern Louisiana...

Central Calcasieu Parish in southwestern Louisiana...

Avoyelles Parish in central Louisiana...

Northern St. Landry Parish in central Louisiana...

Allen Parish in southwestern Louisiana...

At 1:30 p.m. EDT on July 15, the flood warning continues for the Intracoastal Waterway at Bayou Sorrel Lock affecting Iberville Parish, Louisiana.

The original data used to create the map was provided by JAXA. The analysis was then carried out at JPL by the NASA-JPL/Caltech ARIA team, funded by NASA's Disasters Program.

At 11 a.m. EDT on July 15, Barry was located about 70 miles (115 km) west-northwest of Little Rock, Arkansas. Barry continues to track to the north near 12 mph (19 kph) and this motion is expected to become northeasterly on Tuesday and easterly on Wednesday.

Credit: 
NASA/Goddard Space Flight Center

Effectiveness of using natural enemies to combat pests depends on surroundings

image: A spined soldier bug nymph eats a cabbage looper larva on a cabbage plant.

Image: 
Ricardo Perez-Alvarez

ITHACA, N.Y. - When cabbage looper moth larvae infest a field, sustainable growers will often try to control the pests by releasing large numbers of predators, such as ladybugs. That way they can avoid spraying expensive and environmentally harmful insecticides.

Still, farmers have mixed results when they supplement their fields with beetles or other predators.

A new study of cabbage crops in New York - a state industry worth close to $60 million in 2017, according to the USDA - reports for the first time that the effectiveness of releasing natural enemies to combat pests depends on the landscape surrounding the field.

"The landscape context can inform how to better use this strategy in field conditions," said Ricardo Perez-Alvarez, the paper's first author and a graduate student in the lab of co-author Katja Poveda, associate professor of entomology. Brian Nault, an entomology professor at Cornell AgriTech, is also a co-author.

The paper, "Effectiveness of Augmentative Biological Control Depends on Landscape Context," was published in the journal Scientific Reports. It showed that releasing pest predators led to fewer pests, less plant damage and increased crop biomass on farms surrounded by more forest and natural areas and less agricultural land. But on farms predominantly surrounded by other farms, the reverse was true, with more pests and plant damage and reduced crop biomass in spite of added predators.

The reasons behind this phenomenon are complex, and depend on interactions between local predators and those that are added, which can vary on a case-by-case basis.

"Landscape composition influences how predator species interact with one another and thereby mediates the potential consequences for biological pest control," Perez-Alvarez said.

The study focused on cabbage crops and three cabbage pests (the larvae of the cabbage white butterfly, the diamondback moth and the cabbage looper moth), and their natural enemies. In central New York, there are 156 native predator species and seven parasitoid wasps that prey on these pests. Among these, two generalist predators are commonly used to augment fields with additional pest enemies: the spined soldier bug and the convergent ladybird beetle. These two generally complement each other well because soldier bugs feed on larvae and ladybugs feed on eggs.

In the study, the researchers set up experimental plots on 11 cabbage farms in central New York, which together represented a range of surrounding landscapes from agricultural lands to natural areas.

Each farm had two cabbage plots: one that was left alone so it was exposed to the naturally occurring predators, and another where soldier bugs and ladybugs were added. The researchers then collected a wide range of data that included surveys of pest and predator abundances, plant damage and final crop yields. They also conducted lab experiments to better understand the relationships between predators and how those interactions impact pest control.

Given how complex these predator-predator and predator-pest interactions and their relationships to pest control can be, more study is needed to make specific recommendations to growers. Still, the paper is a first step toward understanding how landscapes influence the effects of augmenting farms with predators for pest control.

Credit: 
Cornell University

New analysis reveals challenges for drought management in Oregon's Willamette River Basin

CORVALLIS, Ore. - In Oregon's fertile Willamette River Basin, where two-thirds of the state's population lives, managing water scarcity would be more effective if conservation measures were introduced in advance and upstream from the locations where droughts are likely to cause shortages, according to a new study.

The study, published today in the journal Nature Sustainability, illustrates how ineffective conservation measures may be when they can only be implemented in the wrong months or downstream of where the shortage is occurring, said the study's lead author, William Jaeger, an economist in Oregon State University's College of Agricultural Sciences.

The findings can be applied to other river basins and help policymakers make decisions about mitigating drought, Jaeger said.

"The results indicate that while the policies are effective at conserving water, they have limited ability to mitigate the shortages because timing and location of conservation responses do not match the timing and location of the shortages," he said. "It's a case of a mismatch in terms of where and when."

The study, funded by the National Science Foundation and the National Oceanic and Atmospheric Administration, was based on results derived from a computer model called Willamette Envision, which represents the fine-grained interactions between the Willamette River Basin's natural water supply and the water demands of its human and natural systems: farms, people, fish, and forests.

Models of this kind may help policymakers recognize when and where water scarcity and drought will arise, and to better understand the kinds of policy interventions that are most likely to mitigate drought, Jaeger said.

"This is a study that has relevance beyond the Willamette River Basin," Jaeger said. "There will be similarities in other basins. Being able to mitigate drought is going to depend on understanding these same factors and relationships."

The main stem of the Willamette River flows north for 187 miles between the Oregon Coast Range and Cascade Range. Northwestern Oregon's three largest cities - Portland, Salem and Eugene - are located in the basin.

The analysis focuses on one simulated near-term drought year characterized by early season low flows that was similar to what occurred in 2015, Jaeger said. Under the scenario, it is impossible to meet water demands for cities and farms.

The critical shortage of water in the study's drought scenario manifests as insufficient water to fill the federal reservoirs, which in turn makes it impossible for federal reservoir managers to release sufficient water to meet the minimum instream flows mandated by the Endangered Species Act.

Policy actions - in the abstract - can be effective in conserving water in the basin, Jaeger said. Incentives to farmers and urban consumers, as well as reasonable adjustments to federal dam operations are examples.

But when these policies were introduced into the Willamette Envision model, water conservation did not occur in the right places and at the right times to offset the shortages -- most importantly, it could not make up the minimum instream flows needed to protect threatened fish in the main stem of the river, Jaeger said.

"At some point the dams can't release any more water. So those main stem flows in the spring and summer drop below the minimum set by the Endangered Species Act," he said. "Those critical biological flows are in April, May and June. Most agricultural irrigation occurs in June and July. So conserving water in July doesn't help offset a shortfall in April. If you're going to offset a shortfall, you have to do it in the same place and at the same time where the shortfall occurs."

One of the study's conclusions was that while behavioral responsiveness by individual actors is important, one should not overlook institutional responses -- including institutions with built-in response contingencies for drought -- that can have a larger mitigation impact.

But the problem, the study authors point out, is in some sense circular: The federal reservoirs have led to the threatened status of salmon under the Endangered Species Act (ESA). Salmon are anadromous: they migrate from home streams to the ocean as juveniles and return a few years later as adults to spawn, so damming rivers can delay downstream migrations by juvenile salmon for months or years.

Since 1900, more than 15 large dams and many smaller ones have been built in the Willamette's drainage basin, 13 of which are operated by the U.S. Army Corps of Engineers.

"Were it not for the dams - built primarily for flood control but employed to store water and manage flows - the ESA-mandated minimum instream flows may not exist, having been put in place as a response to the impact of the dams on anadromous fish abundance," the study authors wrote.

Credit: 
Oregon State University

SwRI, UTSA researchers create innovative model for sCO2 power generation

SAN ANTONIO -- July 15, 2019 -- Southwest Research Institute and The University of Texas at San Antonio are collaborating to acquire data for a computational model for supercritical carbon dioxide (sCO2) energy generation. The work, led by Jacob Delimont of SwRI's Mechanical Engineering Division and Christopher Combs of UTSA's College of Engineering, is supported by a $125,000 grant from the Connecting through Research Partnerships (Connect) Program.

sCO2 is carbon dioxide held above its critical temperature and pressure, which causes it to act like a gas while having the density of a liquid. It's also nontoxic and nonflammable, and its supercritical state makes sCO2 a highly efficient fluid for generating power because small changes in temperature or pressure cause significant shifts in its density. Current power plants typically use water as the thermal medium in their power cycles. Replacing water with sCO2 increases efficiency by as much as 10 percent.
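
For reference, the critical point of CO2 lies near 304 K (about 31 degrees Celsius) and 7.4 megapascals; the short sketch below simply encodes that threshold check, using illustrative operating conditions rather than SwRI's actual design values.

```python
# CO2 critical point (standard reference values): about 304.13 K and 7.38 MPa.
CO2_CRITICAL_T_K = 304.13
CO2_CRITICAL_P_MPA = 7.38

def is_supercritical_co2(temperature_k: float, pressure_mpa: float) -> bool:
    """True if CO2 at these conditions is above both its critical temperature and pressure."""
    return temperature_k > CO2_CRITICAL_T_K and pressure_mpa > CO2_CRITICAL_P_MPA

# Illustrative conditions only (not SwRI's design values).
print(is_supercritical_co2(973.0, 25.0))  # True: hot, high-pressure turbine-like conditions
print(is_supercritical_co2(298.0, 0.1))   # False: room conditions, ordinary CO2 gas
```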

Because of the efficiency of sCO2 as a thermal medium, power plant turbomachinery can be one-tenth the size of conventional power plant components, providing the potential to shrink the environmental footprint as well as the construction cost of any new facilities.

Delimont and Combs plan to work with a direct-fired sCO2 cycle, which involves adding fuel and oxygen directly into the CO2 stream, causing it to combust, release heat, and create sCO2. This new type of power cycle allows for higher efficiency and lower greenhouse gas emissions.

"This power cycle allows for the capture of 100 percent of the CO2 emissions that would otherwise end up in our atmosphere," Delimont said. "The captured CO2 has many potential uses, including several applications in the oil and gas industry and even the carbonation in everyday soft drinks."

The challenge the team faces is that direct-fired sCO2 power generation is such a new technology that very little is known about the combustion process. To accomplish their goal, Delimont and Combs will collaborate on collecting data to validate a computational model for an sCO2 combustor.

"The data for the model doesn't exist, so first we're going to acquire it," Delimont said.

To visualize the burning of the sCO2 fuel, UTSA will supply optical lenses and laser systems as well as Combs' expertise in the optical techniques needed to visualize the flame in the direct-fire combustor.

"Once we can visualize the combustion process, we can use computational models to design the necessary combustion equipment to make this power generation process a reality," Delimont said.

Credit: 
Southwest Research Institute

NASA examines Tropical Storm Barry post-landfall

image: At 3:55 a.m. EDT (0755 UTC) on July 14, the MODIS instrument aboard NASA's Aqua satellite looked at Tropical Storm Barry in infrared light. MODIS found coldest cloud tops (light green) had temperatures near minus 80 degrees Fahrenheit (minus 62.2 degrees Celsius) around the center of the tropical storm which was offshore from south central Louisiana.

Image: 
NASA/NRL

Tropical Storm Barry made landfall mid-day on July 13, but infrared satellite imagery from NASA early on July 14 continued to show the heaviest rainmaking storms were still off-shore. NASA's Aqua satellite analyzed cloud top temperatures in the storm which gave an indication of the storm's strength.

Barry reached Category 1 hurricane strength for about three hours in the late morning and early afternoon on July 13. Barry made landfall around 2 p.m. EDT as a strong tropical storm about 5 miles (10 km) northeast of Intracoastal City, La. After making landfall, Barry weakened back to a tropical storm.

At 3:55 a.m. EDT (0755 UTC) on July 14, the MODIS or Moderate Resolution Imaging Spectroradiometer instrument aboard NASA's Aqua satellite looked at Tropical Storm Barry in infrared light. MODIS found the coldest cloud tops had temperatures near minus 80 degrees Fahrenheit (minus 62.2 degrees Celsius), still off-shore from south central Louisiana. Those storms were north of the center of the tropical storm. Storms with cloud top temperatures that cold are indicative of strong storms and have been shown to have the capability to generate heavy rainfall.

The heavy rainfall, exacerbated by the storm's slow movement, is creating flooding dangers. Life-threatening flash flooding and significant river flooding are still expected along Barry's path inland from Louisiana up through the lower Mississippi Valley, through at least Monday. Widespread rainfall of 4 inches or more is expected, with embedded areas of significantly heavier rain that will lead to rapid water rises.

Barry's Status on July 14, 2019

At 8 a.m. EDT (1200 UTC) on Sunday, July 14, 2019 the National Hurricane Center or NHC said the center of Tropical Storm Barry was located near latitude 31.4 degrees north and longitude 93.4 degrees west. That puts Barry's center just 5 miles (10 km) west of Peason Ridge, Louisiana. Barry is moving toward the north near 6 mph (10 km/h), and this general motion is expected to continue through Monday.

NOAA Doppler weather radar data and surface observations indicate that maximum sustained winds remain near 45 mph (75 kph) with higher gusts. These winds are occurring near the coast to the southeast of the center. Tropical-storm-force winds extend outward up to 175 miles (280 km) mainly over water to the southeast of the center.

The estimated minimum central pressure based on surface observations is 1005 mb (29.68 inches).

Warnings and Watches in Effect

The NHC posted many warnings and watches for Barry on July 14. A Tropical Storm Warning is in effect from Morgan City to Cameron, Louisiana and a Storm Surge Warning is in effect from Intracoastal City to the mouth of the Atchafalaya River. A Tropical Storm Warning means that tropical storm conditions are expected somewhere within the warning area.

Moving Further Inland

Weakening is expected as the center moves farther inland, and Barry is forecast to weaken to a tropical depression later today. Two computer models, the GFS and ECMWF, suggest that Barry should lose much of its deep convection, become a remnant low pressure area in 36 to 48 hours, and dissipate entirely shortly after that over the Middle Mississippi Valley.

On the forecast track, the National Hurricane Center said the center of Barry will move across the western and northern portions of Louisiana today, and over Arkansas tonight and Monday.

Credit: 
NASA/Goddard Space Flight Center

More farmers, more problems: How smallholder agriculture is threatening the western Amazon

image: Jacob Socolar, lead author of the study, conducts a point-count in a smallholder farmer's banana cultivation.

Image: 
Jacob Socolar, Princeton University

A verdant, nearly roadless place, the Western Amazon in South America may be the most biologically diverse place in the world. There, many people live in near isolation, with goods coming in either by river or air. Turning to crops for profit or sustenance, farmers operate small family plots to make a living.

Unfortunately, these farmers and their smallholder agriculture operations pose serious threats to biodiversity in northeastern Peru, according to a team of researchers led by Princeton University.

After conducting a large-scale study of birds and trees, the researchers found that human activities are destroying this tropical forest wilderness -- and the problem will likely only get worse. Their findings were recently published in the journal Conservation Biology.

Many scientists have assumed the impacts of small-scale farming are not too harmful to wildlife, at least not compared with the wholesale clearing of forests for pastures and soybean fields, which is happening in the Eastern Amazon. But this study shows how the far less intrusive actions of small-scale farmers are nonetheless deadly to biodiversity.

"Smallholder agriculture turns out to be a very serious threat to biodiversity, closer in impact to clearing the forests for cattle pastures than we had imagined," said David Wilcove, a professor of ecology and evolutionary biology, public affairs, and the Princeton Environmental Institute. "What's worse is that smallholder agriculture is the dominant form of land-use change in Western Amazonia, and it is likely to get more widespread in the coming decades."

"We wanted to know how tropical biodiversity responds to smallholder agriculture across the wide range of different forest habitats that typify tropical forest landscapes, and the Western Amazon is a good place to ask these questions," said lead author Jacob Socolar, a 2016 graduate alumnus who conducted the work as a Ph.D. student in Wilcove's lab. He is now a postdoctoral researcher with the Norwegian University of Life Sciences and the University of Sheffield.

"We called the paper, 'overlooked biodiversity loss' because the situation at the landscape scale is worse than we would've guessed by studying one habitat at a time," he said.

Socolar and Wilcove teamed up with botanist Elvis Valderrama Sandoval from the Universidad Nacional de la Amazonía Peruana.

The team conducted their fieldwork in the Amazonian lowlands of Loreto Department in Peru. There, they focused on four habitats -- upland forests, floodplain forests, white-sand forests and river islands -- where slash-and-burn agriculture is taking place. They also looked at relatively untouched areas of forest as a basis for comparison.

They sampled birds and trees, two groups they felt would be complementary in how they would respond to changes across the land. In addition, birds are Socolar's expertise, and Sandoval is a skilled botanist.

Sampling birds can be difficult, Socolar said, especially in this region of Peru, which harbors the greatest number of bird species per acre of anywhere on earth. In 10-minute increments, Socolar recorded all of the bird species in the area based on sight or sound. His final count was 455 bird species, making it among the richest single-observer point-count datasets ever assembled.

Trees are just as tricky, given there are well over 1,000 species in the Peruvian landscape. In their field experiment, the team was able to identify 751 tree species on their study sites.

Different patterns emerged for the birds and trees, creating a seeming contradiction that became a feature of the study, Socolar explained.

In the slashed-and-burned areas, the team found many species of birds. In fact, the farmed sites sometimes had more species than the comparable intact sites. When all sites were tallied, however, the intact sites turned out to have significantly more bird species than the disturbed ones, because all disturbed sites shared a limited pool of species, while intact sites varied in their species composition across different forest habitats.
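
A tiny worked example makes this pattern concrete. Using hypothetical species lists rather than the study's data, the sketch below shows how farmed sites can each be species-rich yet draw on one shared pool, while intact sites that differ from one another add up to a larger landscape-wide total.

```python
# Toy illustration (hypothetical species lists, not the study's data) of the pattern described
# above: disturbed sites share a limited species pool, while intact forest habitats differ from
# one another, so the intact landscape holds more species overall.
farmed_sites = [
    {"ani", "kiskadee", "seedeater", "tanager_a"},
    {"ani", "kiskadee", "seedeater", "tanager_b"},
    {"ani", "kiskadee", "seedeater", "tanager_a"},
]
intact_sites = [
    {"antbird_1", "antbird_2", "woodcreeper_1"},       # upland forest
    {"antshrike_1", "kingfisher_1", "woodcreeper_2"},  # floodplain forest
    {"gnatcatcher_1", "flycatcher_1", "antbird_3"},    # white-sand forest
]

def mean_local_richness(sites):  # average number of species per site
    return sum(len(site) for site in sites) / len(sites)

def pooled_richness(sites):      # number of species across all sites combined
    return len(set().union(*sites))

for name, sites in [("farmed", farmed_sites), ("intact", intact_sites)]:
    print(f"{name}: {mean_local_richness(sites):.1f} species per site, "
          f"{pooled_richness(sites)} species in total")
# farmed: 4.0 species per site but only 5 in total; intact: 3.0 per site but 9 in total.
```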

Trees, on the other hand, exhibited a far less subtle pattern: there were simply far fewer tree species persisting on the cleared land than in the intact forests, and this held even after the scientists used statistical tests to account for the fact that disturbed sites have fewer individual trees than intact forests. With the reduction in the number of tree species in the disturbed sites, the scientists predict there will be fewer insects and other small animals as well, which could have major impacts to the ecosystem.

The results have significant conservation policy implications. First, this area of the Amazon will probably not remain relatively roadless forever, Socolar said, and with more roads will come more farmers. This means it's important to carefully manage remote, protected areas to ensure that ongoing infrastructure development does not cause them to be overrun by smallholders. Given the majority of these farmers are poor, there could also be opportunities to link conservation with efforts to improve rural development and decrease poverty.

"Even though smallholder agriculture supports high biodiversity at small spatial scales, we cannot lose vigilance regarding the overall threat posed by smallholder expansion," Socolar said. "If we do, we pay a price in extinctions. We want this study to serve as a larger warning. It's probably not just a fluke of the Amazon -- these findings could extend to other habitats. We're lucky to work in a place where there is still plenty of land to go around for both farming and conservation. Being proactive is possible, moral and reasonable."

Credit: 
Princeton School of Public and International Affairs

Turbo chip for drug development

image: Array of microdroplets with various reactants on the chemBIOS chip-based synthesis platform.

Image: 
Maximilian Benz, KIT

In spite of increasing demand, the number of newly developed drugs has decreased continuously over the past decades. The search for new active substances, their production, characterization, and screening for biological effectiveness are very complex and costly. One of the reasons is that all three steps have so far been carried out separately. Scientists of Karlsruhe Institute of Technology (KIT) have now succeeded in combining these processes on a chip and, hence, facilitating and accelerating the procedures needed to produce promising substances. Thanks to miniaturization, costs can also be reduced significantly. The results are now published in Nature Communications (DOI 10.1038/s41467-019-10685-0).

Drug development is based on high-throughput screening of large compound libraries. However, the lack of miniaturized and parallelized methodologies for high-throughput chemical synthesis in the liquid phase and the incompatibility of synthesizing bioactive compounds and screening for their biological effect have led to a strict separation of these steps so far. This makes the process expensive and inefficient. "For this reason, we have developed a platform that combines synthesis of compound libraries with biological high-throughput screening on a single chip," says Maximilian Benz of KIT's Institute of Toxicology and Genetics (ITG). This so-called chemBIOS platform is compatible with both organic solvents for synthesis and aqueous solutions for biological screenings. "We use the chemBIOS platform to perform 75 parallel three-component reactions for synthesis of a library of lipids, i.e. fats, followed by characterization using mass spectrometry, on-chip formation of lipoplexes, and biological cell screening," Benz adds. Lipoplexes are nucleic acid-lipid complexes that can be taken up by eukaryotic, i.e. human and animal, cells. "The entire process from library synthesis to cell screening takes only three days and about 1 ml of total solution, demonstrating the potential of the chemBIOS technology to increase efficiency and accelerate screenings and drug development," Benz points out. Usually, such processes need several liters of reactants, solvents, and cell suspensions.
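
The article reports 75 parallel three-component reactions but does not list the building blocks, so the sketch below uses hypothetical component counts (5 x 5 x 3 = 75) purely to show how such a combinatorial library can be enumerated and mapped onto an array of droplet spots.

```python
from itertools import product

# Hypothetical building-block lists; the actual chemBIOS components are not named in the article.
component_a = [f"amine_{i}" for i in range(1, 6)]
component_b = [f"aldehyde_{i}" for i in range(1, 6)]
component_c = [f"lipid_tail_{i}" for i in range(1, 4)]

library = list(product(component_a, component_b, component_c))
print(f"{len(library)} three-component combinations")  # 75

# Map each combination onto a grid of droplet spots (a 5 x 15 layout is assumed here).
COLUMNS = 15
for index, combo in enumerate(library[:3]):  # show the first few assignments
    row, col = divmod(index, COLUMNS)
    print(f"spot ({row}, {col}): {' + '.join(combo)}")
```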

Recently, the chemBIOS technology was nominated for one of the first three places of KIT's NEULAND Innovation Prize 2019 by a jury of representatives of research and industry.

Drug Development: Big Effort and Few Hits

Due to the immense time expenditure and spatial and methodological separation of the synthesis of compounds, screening, and clinical studies, development of new drugs often takes more than 20 years and costs between two and four billion dollars.

The early phase of drug development is traditionally based on three areas of science: chemists synthesize a big library of various molecules. All compounds are produced, isolated, and characterized separately. Then, biologists analyze the molecule library for biological activity. Highly active compounds, so-called hits, are returned to chemistry. Based on this pre-selection, chemists synthesize further variations of these compounds. These secondary molecule libraries then contain optimized compounds. After this cycle has been repeated several times, a few promising compound candidates are transferred to the medical part of drug development, in which they are tested in clinical studies. Of the tens of thousands of compounds subjected to the first screenings, only one, or sometimes none, reaches the last step of drug development: approval of the new drug. This process is time-consuming and requires a large range of materials and solvents. This makes development more expensive and slower and also limits the number of substances screened to a feasible number.

Credit: 
Karlsruher Institut für Technologie (KIT)