Tech

Embracing bioinformatics in gene banks

image: Scientists from the IPK have explored, in a perspective paper, the upcoming challenges and opportunities facing gene banks. They emphasise that advancing gene banks into bio-digital resource centres, which collate both the germplasm and the molecular data of the samples, would benefit scientists, plant breeders and society alike.

Image: 
Regina Devrient / IPK Gatersleben

The preservation of plant biodiversity is the task of the roughly 1,750 gene banks distributed around the world. Between them, they store plant samples, sometimes accompanied by phenotypic or genetic information, for around 7.4 million accessions of plant species in total. With ever easier access to improved, quicker and cheaper sequencing and other omics technologies, the number of well-characterised accessions, and the amount of detailed information that needs to be stored alongside the biological material, is expected to grow rapidly and continuously. A team of scientists from the Leibniz Institute of Plant Genetics and Crop Plant Research (IPK) in Gatersleben has now looked ahead at the upcoming challenges and possibilities facing gene banks in a perspective paper published in Nature Genetics.

In the early-to-mid twentieth century, it became increasingly apparent that crop landraces were being replaced by modern crop varieties and were in danger of disappearing. To prevent this loss of genetic diversity and biodiversity, the first gene banks were established with the mission of preserving these plant genetic resources. Today, gene banks function as biorepositories and safeguards of plant biodiversity, but most importantly as libraries that turn plant material and its genetic information into a freely accessible yet valuable resource. Scientists, plant breeders and indeed anyone around the world can request and use the material and data held in the more than 1,750 gene banks worldwide for research or plant-breeding purposes.

The Gene Bank of the Leibniz Institute of Plant Genetics and Crop Plant Research (IPK) in Gatersleben currently holds one of the world's most comprehensive collections of crop plants and their wild relatives, collating a total of 151,002 accessions from 2,933 species and 776 genera. The majority of the plant germplasm samples are stored as dry seed at -18°C, whilst accessions that are propagated vegetatively are permanently cultivated in the field (ex situ) or preserved in liquid nitrogen at -196°C. The online portal of the IPK gene bank allows users to view and sift through the stored plant accessions and their corresponding "passport data", as well as to request plant material on a non-commercial scale. A new perspective paper authored by Dr. Martin Mascher and colleagues at the IPK now examines the current and upcoming challenges for gene banks, as well as the opportunities for their further advancement.

The scientists identified three major challenges for gene banks which will need attention. Two are caused by the basic demands of managing tens of thousands of seed lots, namely the tracking of the identity of accessions, and the need to avoid unnecessary duplications within and between gene banks. The third challenge is that of maintaining the genetic integrity of accessions, due to the inherent drawbacks of using ex situ conservation, such as differential survival, drift and genetic erosion in storage and regeneration.

However, the authors suggest that a more strongly genomics-driven approach could help gene banks take on these challenges. For example, the "passport data" of gene bank material traditionally describe the taxonomy and provenance of accessions. By adding single-nucleotide polymorphisms (SNPs) as defining characteristics of an accession, this genotypic information could serve as molecular passport data that complements and corrects traditional passport records, helps detect and prevent duplicates, and improves the quality and integrity of the collections.
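
As a rough illustration of how such molecular passport data could flag duplicates, the sketch below compares hypothetical SNP profiles between accessions and reports pairs that are nearly identical. The accession names, genotype coding and threshold are invented for the example and are not taken from the paper.

    import numpy as np

    # Hypothetical SNP profiles, coded as allele dosages (0/1/2) at six markers.
    genotypes = {
        "ACC-001": np.array([0, 2, 1, 0, 2, 2]),
        "ACC-002": np.array([0, 2, 1, 0, 2, 2]),   # suspiciously similar to ACC-001
        "ACC-003": np.array([2, 0, 1, 1, 0, 2]),
    }

    def identity(a, b):
        # Fraction of SNP calls shared by two accessions.
        return float(np.mean(a == b))

    THRESHOLD = 0.99  # arbitrary cut-off for flagging candidate duplicates
    names = list(genotypes)
    for i, first in enumerate(names):
        for second in names[i + 1:]:
            if identity(genotypes[first], genotypes[second]) >= THRESHOLD:
                print(f"{first} and {second} look like duplicates; check their passport records")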

By embracing the shift towards bioinformatics and big-data analytics in the plant sciences, traditional gene banks, which focus on preserving germplasm collections, will be able to transform into bio-digital resource centres that combine the storage and valorisation of plant material with its genomic and molecular characterisation.

Current funding scenarios do not yet allow gene banks to systematically generate molecular passport data for every submitted plant sample. However, first steps towards high-throughput genotyping of entire collections have already been taken. This was showcased by an international research consortium led by the IPK, which characterised a world collection of more than 22,000 barley varieties at the molecular level through genotyping-by-sequencing. Some of the authors of the perspective paper were involved in this case study and contributed to the resulting web information portal BRIDGE. BRIDGE, short for "Biodiversity informatics to bridge the gap from genome information to educated utilization of genetic diversity hosted in gene banks", is a data repository for the barley genomic information obtained, which links to the phenotypic information collated at the IPK-hosted Federal Ex situ Gene Bank for Agricultural and Horticultural Crop Species.

Whilst BRIDGE is already paving the way towards evolving the Gatersleben Gene Bank into a "one stop shop for facilitated and informed utilisation of crop plant biodiversity", international collaborations such as DivSeek are building the framework that will enable gene banks, plant breeders and researchers worldwide to process and mobilise plant genetic diversity more efficiently, thereby starting to bridge the gaps between bioinformaticians, geneticists and gene bank curators. A worldwide network of bio-digital resource centres that share data freely, and thus help to foster research progress in plant science and plant breeding, may therefore become a reality in the near future.

Credit: 
Leibniz Institute of Plant Genetics and Crop Plant Research

Low-cost retinal scanner could help prevent blindness worldwide

video: Adam Wax, professor of biomedical engineering at Duke University, explains and demonstrates his new optical coherence tomography (OCT) scanner that is 15 times lighter and smaller than current commercial systems and is made from parts costing less than a tenth the retail price of commercial systems.

Image: 
Lumedica

DURHAM, N.C. -- Biomedical engineers at Duke University have developed a low-cost, portable optical coherence tomography (OCT) scanner that promises to bring the vision-saving technology to underserved regions throughout the United States and abroad.

Thanks to a redesigned, 3D-printed spectrometer, the scanner is 15 times lighter and smaller than current commercial systems and is made from parts costing less than a tenth the retail price of commercial systems -- all without sacrificing imaging quality.

In its first clinical trial, the new OCT scanner produced images of 120 retinas that were 95 percent as sharp as those taken by current commercial systems, which was sufficient for accurate clinical diagnosis.

The results appear online on June 28 in Translational Vision Science & Technology, an ARVO journal.

In use since the 1990s, OCT imaging has become the standard of care for the diagnosis of many retinal diseases including macular degeneration and diabetic retinopathy, as well as for glaucoma. However, OCT is rarely included as part of a standard screening exam since machines can cost more than $100,000 -- meaning that usually only larger eye centers have them.

"Once you have lost vision, it's very difficult to get it back, so the key to preventing blindness is early detection," said Adam Wax, professor of biomedical engineering at Duke. "Our goal is to make OCT drastically less expensive so more clinics can afford the devices, especially in global health settings."

OCT is the optical analogue of ultrasound, which works by sending sound waves into tissues and measuring how long they take to come back. But because light is so much faster than sound, measuring time is more difficult. To time the light waves bouncing back from the tissue being scanned, OCT devices use a spectrometer to determine how much their phase has shifted compared to identical light waves that have travelled the same distance but have not interacted with tissue.
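
In spectrometer-based (spectral-domain) OCT, the recorded interference spectrum is typically converted into a depth profile with a Fourier transform. The sketch below illustrates that generic step; it is not the Duke device's actual processing code, and the numbers are arbitrary.

    import numpy as np

    def a_scan(spectrum, background):
        # Subtract the source spectrum, window the fringes, and Fourier-transform
        # over wavenumber to obtain reflectivity versus depth (one A-scan).
        fringes = (spectrum - background) * np.hanning(len(spectrum))
        return np.abs(np.fft.rfft(fringes))

    # Toy example: a single reflector produces a cosine fringe across the spectrum.
    k = np.linspace(0.0, 1.0, 2048)            # normalised wavenumber axis
    background = np.ones_like(k)               # source spectrum without a sample
    spectrum = background + 0.1 * np.cos(2 * np.pi * 150 * k)
    print(a_scan(spectrum, background).argmax())   # peak near depth bin 150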

The primary technology enabling the smaller, less expensive OCT device is a new type of spectrometer designed by Wax and his former graduate student Sanghoon Kim. Traditional spectrometers are made mostly of precisely cut metal components and direct light through a series of lenses, mirrors and diffraction slits shaped like a W. While this setup provides a high degree of accuracy, slight mechanical shifts caused by bumps or even temperature changes can create misalignments.

Wax's design, however, takes the light on a circular path within a housing made mostly from 3D-printed plastic. Because the spectrometer light path is circular, any expansions or contractions due to temperature changes occur symmetrically, balancing themselves out to keep the optical elements aligned. The device also uses a larger detector at the end of the light's journey to make misalignments less likely.

Traditional OCT machines weigh more than 60 pounds, take up an entire desk and cost anywhere between $50,000 and $120,000. The new OCT device weighs four pounds, is about the size of a lunch box and, Wax expects, will be sold for less than $15,000.

"Right now OCT devices sit in their own room and require a PhD scientist to tweak them to get everything working just right," said Wax. "Ours can just sit on a shelf in the office and be taken down, used and put back without problems. We've scanned people in a Starbucks with it."

In the new study, J. Niklas Ulrich, retina surgeon and associate professor of ophthalmology at the University of North Carolina School of Medicine, put the new OCT scanner to the test against a commercial instrument produced by Heidelberg Engineering. He performed clinical imaging on both eyes of 60 participants, half of them healthy volunteers and half patients with some form of retinal disease.

To compare the images produced by both machines, the researchers used contrast-to-noise ratio -- a measure often used to determine image quality in medical imaging. The results showed that, by this metric, the low-cost, portable OCT scanner provided useful contrast that was only 5.6 percent less than that of the commercial machine, which is still good enough to allow for clinical diagnostics.
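
Contrast-to-noise ratio is commonly computed as the difference between the mean intensity of a signal region and a background region, divided by the background noise. The sketch below shows one common form of the calculation on a synthetic image; it illustrates the metric and is not the analysis code used in the study.

    import numpy as np

    def contrast_to_noise(image, signal_mask, background_mask):
        # Difference of mean intensities, normalised by the background standard deviation.
        signal = image[signal_mask].mean()
        background = image[background_mask].mean()
        return (signal - background) / image[background_mask].std()

    # Example: a bright band (stand-in for a retinal layer) on a noisy background.
    rng = np.random.default_rng(0)
    img = rng.normal(10.0, 2.0, (64, 64))
    img[20:30, :] += 15.0                              # "signal" band
    sig = np.zeros_like(img, dtype=bool); sig[20:30, :] = True
    bkg = np.zeros_like(img, dtype=bool); bkg[40:60, :] = True
    print(round(contrast_to_noise(img, sig, bkg), 1))  # roughly 15 / 2, i.e. about 7.5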

"I have been very impressed by the quality of images from the low-cost device -- it is absolutely comparable to our standard commercial machines," said Ulrich. "It obviously lacks some bells and whistles of our $100k+ OCT scanners, but allows for accurate diagnosis of structural retinal disease as well as monitoring of treatment success. The setup is quick and easy with a small footprint, allowing the device to perform well in smaller satellite offices. I hope that the development of this low-cost OCT will improve patients' access to OCT technology and contribute to saving sight in North Carolina as well as nationally and worldwide."

Wax is commercializing the device through a startup company called Lumedica, which is already producing and selling first-generation instruments for research applications. The company hopes to secure venture backing in the near future and is also negotiating potential licensing deals with outside companies.

"There's a lot of interest from people who want to take OCT to new parts of the globe as well as to underserved populations right here in the U.S.," said Wax. "With the growing number of cases of diabetic retinopathy in places like the United States, India and China, we hope we can save a lot of people's sight by drastically increasing access to this technology."

Credit: 
Duke University

Confirmation of old theory leads to new breakthrough in superconductor science

image: Graphic showing a van der Waals BSCCO device. (a) Optical image of a Hall bar device. (b) Cross-sectional view of a typical device in scanning TEM; columns of atoms are visible as dark spots, with black arrows pointing to bismuth oxide layers (the darkest spots) and gray arrows showing their extrapolated positions. (c) Resistivity as a function of temperature for devices of different thicknesses.

Image: 
Argonne National Laboratory

Phase transitions occur when a substance changes from a solid, liquid or gaseous state to a different state — like ice melting or vapor condensing. During these phase transitions, there is a point at which the system can display properties of both states of matter simultaneously. A similar effect occurs when normal metals transition into superconductors — characteristics fluctuate and properties expected to belong to one state carry into the other.

Scientists at Harvard have developed a bismuth-based, two-dimensional superconductor that is only one nanometer thick. By studying fluctuations in this ultra-thin material as it transitions into superconductivity, the scientists gained insight into the processes that drive superconductivity more generally. Because they can carry electric currents with near-zero resistance, superconducting materials will, as they are improved, find applications in virtually any technology that uses electricity.

The Harvard scientists used the new technology to experimentally confirm a 23-year-old theory of superconductors developed by scientist Valerii Vinokur from the U.S. Department of Energy’s (DOE) Argonne National Laboratory.

“Sometimes you discover something new and exotic, but sometimes you just confirm that you do, after all, understand the behavior of the every-day thing that is right in front of you.” — Valerii Vinokur, Argonne Distinguished Fellow, Materials Science division.

One phenomenon of interest to scientists is the complete reversal of the well-studied Hall effect when materials transition into superconductors. When a normal, non-superconducting material carries an applied current and is subjected to a magnetic field, a voltage is induced across the material. This normal Hall effect has the voltage pointing in a specific direction depending on the orientation of the field and current.

Interestingly, when materials become superconductors, the sign of the Hall voltage reverses. The “positive” end of the material becomes the “negative.” This is a well-known phenomenon. But while the Hall effect has long been a major tool that scientists use to study the types of electronic properties that make a material a good superconductor, the cause of this reverse Hall effect has remained mysterious to scientists for decades, especially in regard to high-temperature superconductors for which the effect is stronger.
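
For reference, the textbook Hall relation for a normal conductor shows why the sign of the voltage is tied to the sign of the charge carriers (this is standard solid-state physics, not a result from the paper):

    V_H = \frac{I\,B}{n\,q\,t}

where I is the applied current, B the magnetic field, t the sample thickness, n the carrier density and q the signed charge of the carriers; anything that flips the effective sign of q flips the sign of V_H. According to the article, it is the vortex physics described below, rather than the carrier type, that sets this sign in the superconducting regime.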

In 1996, theorist Vinokur, an Argonne Distinguished Fellow, and his colleagues presented a comprehensive description of this effect (and more) in high-temperature superconductors. The theory took into account all of the driving forces involved, and it included so many variables that testing it experimentally seemed unrealistic — until now.

“We believed we had really solved these problems,” said Vinokur, “but the formulas felt useless at the time, because they included many parameters that were difficult to compare with experiments using the technology that existed then.”

Scientists knew that the reverse Hall effect results from magnetic vortices that crop up in the superconducting material placed in the magnetic field. Vortices are points of singularity in the liquid of superconducting electrons — Cooper pairs — around which Cooper pairs flow, creating circulating superconducting micro-currents that bring novel features in the physics of the Hall effect in the material.

Normally, distribution of electrons in the material causes the Hall voltage, but in superconductors, vortices move under the applied current, which creates electronic pressure differences that are mathematically similar to those that keep an airplane in flight. These pressure differences change the course of the applied current like the wings of an airplane change the course of the air passing by, uplifting the plane. The vortex motion redistributes electrons differently, changing the direction of the Hall voltage to the opposite of the usual purely electronic Hall voltage.

The 1996 theory quantitatively described the effects of these vortices, which had only been qualitatively understood. Now, with a novel material that took Harvard scientists five years to develop, the theory was tested and confirmed.

The bismuth-based thin material is virtually only one atomic layer thick, making it essentially two-dimensional. It is one of very few of its kind, a thin-film high-temperature superconductor; production of the material alone is a technological breakthrough in superconductor science.

“By reducing the dimensions from three to two, the fluctuations of the properties in the material become much more apparent and easier to study,” said Philip Kim, a lead scientist in the Harvard group. “We created an extreme form of the material that allowed us to quantitatively address the 1996 theory.”

One prediction of the theory was that the anomalous reverse Hall effect could persist outside the temperature range in which the material is a superconductor. This study offered a quantitative description of the effect that perfectly matched the theoretical predictions.

“Before we were sure of the role vortices play in the reverse Hall effect, we couldn’t use it reliably as a measuring tool,” said Vinokur. “Now that we know we were correct, we can use the theory to study other fluctuations in the transition phase, ultimately leading to better understanding of superconductors.”

Although the material in this study is two-dimensional, the scientists believe that the theory applies to all superconductors. Future research will include deeper study of the materials — the behavior of the vortices even has application in mathematical research.

Vortices are examples of topological objects, or objects with unique geometrical properties. They are currently a popular topic in mathematics because of the ways they form and deform and how they change the properties of a material. The 1996 theories used topology to describe the behavior of the vortices, and topological properties of matter could carry a lot of new physics.

“Sometimes you discover something new and exotic,” said Vinokur about the research, “but sometimes you just confirm that you do, after all, understand the behavior of the every-day thing that is right in front of you.”

Credit: 
DOE/Argonne National Laboratory

Nonnative pear trees are showing up in US forests

image: University of Cincinnati biology professor Theresa Culley is finding more Callery pear trees growing wild in Ohio forests. Some states like Ohio are phasing out the sale of the trees.

Image: 
Joseph Fuqua II/UC Creative Services

A popular imported tree that became a neighborhood favorite in the 1990s now threatens to crowd out native trees in some Eastern forests.

University of Cincinnati biologist Theresa Culley warns that for some parts of Ohio, it might be too late to stop the spread of the Callery pear. But she is urging other states to be vigilant before the invasive trees begin taking over their forests, too.

Culley presented her findings to botanists from around the world during the Society of Economic Botany conference this year hosted at UC's McMicken College of Arts and Sciences.

Like a lot of invasive species, the Callery pear, also known as the Bradford pear, was a problem of our own making, she said. Named for China historian Joseph-Marie Callery, who sent samples back to Europe in the 1800s, the tree captured the interest of American botanists a century later. 

"At one point it was named urban street tree of the year," she said. "In the mid-1990s they started to show up in the wild. Now people have recognized they're starting to spread and it's a problem."

The original strain of pear tree was "self-incompatible," meaning it could not pollinate itself or the genetically identical trees of the same cultivar. The trees became a best-seller because they grew quickly, regardless of climate or soil conditions. They bloom pretty white flowers in the spring and have leaves that turn a vibrant purple in the autumn.

For years, the pears were a welcome addition to tree-lined streets. 

"I call them the lollipop tree because they have this perfect shape," Culley said.

But homeowners soon learned the mature trees had weak trunks that split easily and toppled in heavy snow or strong winds.

"The Bradford pear was a beautiful tree with a service life of about 15 years," said David Listerman, an Ohio landscaping broker and consultant. "But the canopy would get so heavy that in a windstorm, it would break in half."

The solution for growers was to introduce sturdier varieties. But besides their sturdy trunks, the replacement trees had something else: genetic diversity. Suddenly, the old pear trees could now cross-pollinate with the new varieties. Their fertilized flowers began producing fruit that birds carried off into the forest. A monster was born.

Today, Listerman says the wild newcomers represent an existential threat to native trees. Their fallen leaves leach chemicals into the ground that can kill native rivals.

"Most invasives tend to have advantages over native plants. The Callery pear holds its leaves late into the fall -- right up to Christmas. It grows another month to two months longer,  outcompeting native species that go dormant in October," he said. "They encroach along interstates where they need to have visual clearance. So highway departments wind up having to do more mowing or treating."

And while some birds will eat their fruit, pear trees don't provide as much benefit to wildlife as many native species, he said.

Listerman serves with UC's Culley on the Ohio Invasive Plants Council, which helps the Ohio Department of Agriculture identify potentially harmful invasive species.

Listerman said pear trees aren't just popping up along roadsides. Forest surveys are finding them in unexpected places far from the nearest subdivision. The wild pears grow in thick stands with unpleasant spur-like thorns.

On a recent field trip, Culley picked her way through the woods at the Harris Benedict Nature Preserve to show how resilient pear trees have become. UC owns the forested preserve about 15 miles north of the Uptown campus and uses it for some of its biological sciences programs and research. It's adjacent to the Johnson Preserve in Montgomery, Ohio.

The trail is lined with several memorials to victims of a 1999 F4 tornado that killed four people and destroyed nearly 130 nearby homes and businesses. The storm cut a 10-mile long swath of destruction that toppled hundreds of trees across two counties.

Where many of the stately older trees fell during the storm, Callery pears now thrive. Culley stopped under the canopy of their leafy, green foliage. The understory was covered in Amur honeysuckle, another invasive plant from Asia that has exploded in Eastern forests.

UC conducts plant surveys in the preserve every three years. The ongoing study suggests the pear trees may be replacing many of the ash trees that were killed by emerald ash borers, another invasive species.

"In the past, pears would be found along forest edges like along highways. We're now seeing pears invading the center of forests," Culley said. "Once they're established, it's really difficult to remove them. Their roots are deep so you have to cut them and spray them with glyphosate. A lot of land managers simply can't afford to get rid of them."

Invasive species are a huge problem around the world. The United States spends more than $260 million fighting aquatic invaders alone, according to the Government Accountability Office. Nearly 1 in 3 species protected under the federal Endangered Species Act are at risk to invasive species, according to the U.S. Fish and Wildlife Service. Competition and predation from invasives is a leading cause of species extinction.

"A lot of states are concerned about losing the economic benefit of the Callery pear," Culley said. "So how do we reconcile the economic value of the Callery pear and its expense as an invasive?"

Ohio's Department of Agriculture is working with nursery growers to phase out the Callery pear in the Buckeye State. Ohio lawmakers in 2018 passed a bill banning the sale or distribution of Callery pears by 2023 to give growers a chance to plant alternatives.

Ohio grower Kyle Natorp of Natorp's Nursery in Mason said homeowners have many alternatives to the pear. Fewer customers are asking for pear trees these days, he said.

"There are dogwoods, crabapples and cherry trees. As a business, we try to grow what the customer wants rather than try to build a market from scratch," Natorp said.

His nursery encourages planting a diversity of trees instead of the same kind, which some builders prefer for consistency.

"They're all the same shape and color and flower. It looks nice but it's not a good horticultural practice," he said. "One disease can take them all out."

Culley said foresters should be vigilant about the proliferation of pear trees before they become established.

"We're warning people in the northern part of the state that they're spreading," she said. "These trees were introduced with the best of intentions. They've sort of gone crazy now and we need to deal with it and learn from our mistakes."

Credit: 
University of Cincinnati

X-ray imaging provides clues to fracture in solid-state batteries

image: Matthew McDowell, an assistant professor in the George W. Woodruff School of Mechanical Engineering and the School of Materials Science and Engineering, examines batteries in a cycling station.

Image: 
Rob Felt

Solid-state batteries - a new battery design that uses all solid components - have gained attention in recent years because of their potential to hold much more energy while simultaneously avoiding the safety challenges of their liquid-based counterparts.

But building a long-lasting solid-state battery is easier said than done. Now, researchers at the Georgia Institute of Technology have used X-ray computed tomography (CT) to visualize in real time how cracks form near the edges of the interfaces between materials in the batteries. The findings could help researchers find ways to improve the energy storage devices.

"Solid-state batteries could be safer than lithium-ion batteries and potentially hold more energy, which would be ideal for electric vehicles and even electric aircraft," said Matthew McDowell, an assistant professor in the George W. Woodruff School of Mechanical Engineering and the School of Materials Science and Engineering. "Technologically, it's a very fast moving field, and there are a lot of companies interested in this."

In a typical lithium-ion battery, energy is released during the transfer of lithium ions between two electrodes - a cathode and an anode - through a liquid electrolyte.

For the study, which was published June 4 in the journal ACS Energy Letters and was sponsored by the National Science Foundation, the research team built a solid-state battery in which a solid ceramic disc was sandwiched between two pieces of solid lithium. The ceramic disc replaced the typical liquid electrolyte.

"Figuring out how to make these solid pieces fit together and behave well over long periods of time is the challenge," McDowell said. "We're working on how to engineer these interfaces between these solid pieces to make them last as long as possible."

In collaboration with Christopher Saldana, an assistant professor in the George W. Woodruff School of Mechanical Engineering at Georgia Tech and an expert in X-ray imaging, the researchers placed the battery under an X-ray microscope and charged and discharged it, looking for physical changes indicative of degradation. Slowly over the course of several days, a web-like pattern of cracks formed throughout the disc.

Those cracks are the problem and occur alongside the growth of an interphase layer between the lithium metal and solid electrolyte. The researchers found that this fracture during cycling causes resistance to the flow of ions.

"These are unwanted chemical reactions that occur at the interfaces," McDowell said. "People have generally assumed that these chemical reactions are the cause the degradation of the cell. But what we learned by doing this imaging is that in this particular material, it's not the chemical reactions themselves that are bad - they don't affect the performance of the battery. What's bad is that the cell fractures, and that destroys the performance of the cell."

Solving the fracturing problem could be one of the first steps to unlocking the potential of solid state batteries, including their high energy density. The deterioration observed is likely to affect other types of solid-state batteries, the researchers noted, so the findings could lead to the design of more durable interfaces.

"In normal lithium-ion batteries, the materials we use define how much energy we can store," McDowell said. "Pure lithium can hold the most, but it doesn't work well with liquid electrolyte. But if you could use solid lithium with a solid electrolyte, that would be the holy grail of energy density."

Credit: 
Georgia Institute of Technology

Ultrafast magnetism: Electron-phonon interactions examined at BESSY II

image: When illuminated by the synchrotron light, nickel itself emits X-rays due to the decay of valence electrons. The number of emitted photons decreases as the temperature increases from room temperature (left) to 900°C (right).

Image: 
HZB

Interactions between electrons and phonons are regarded as the microscopic driving force behind ultrafast magnetization or demagnetization processes (spin-flips). However, it was not possible until now to observe such ultrafast processes in detail due to the absence of suitable methods.

Now, a team headed by Prof. Alexander Föhlisch has developed an original method to determine experimentally, for the first time, the electron-phonon driven spin-flip scattering rate in two model systems: ferromagnetic nickel and nonmagnetic copper.

They used X-ray emission spectroscopy (XES) at BESSY II to do this. X-rays excited core electrons in the samples (Ni or Cu), creating so-called core holes, which were then filled by the decay of valence electrons. This decay results in the emission of light, which can be detected and analysed. The samples were measured at different temperatures to observe the effect of increasing lattice vibrations (phonons) as the temperature rose from room temperature to 900 degrees Celsius.

As the temperature increased, ferromagnetic nickel showed a strong decrease in emission. This observation fits well with theoretical simulations of the processes in the electronic band structure of nickel after excitation: increasing the temperature, and thus the phonon population, increases the rate of scattering between electrons and phonons. Scattered electrons are no longer available for the decay, which results in a weakening of the light emission. As expected, in the case of diamagnetic copper, the lattice vibrations had hardly any influence on the measured emission.

"We believe that our article is of high interest not only to specialists in the fields of magnetism, electronic properties of solids and X-ray emission spectroscopy, but also to a broader readership curious about the latest developments in this dynamic field of research," says Dr. Régis Decker, first author and postdoctoral scientist in the Föhlisch team. The method can also be used for the analysis of ultrafast spin flip processes in novel quantum materials such as graphene, superconductors or topological insulators.

Credit: 
Helmholtz-Zentrum Berlin für Materialien und Energie

Take two E. coli and call me in the morning

Millions of people take capsules of probiotics with the goal of improving their digestion, but what if those bacteria were also able to detect diseases in the gut and indicate when something is awry? New research from the Wyss Institute at Harvard University and Harvard Medical School (HMS) has created an effective, non-invasive way to quickly identify new bacterial biosensors that can recognize and report the presence of various disease triggers in the gut, helping set the stage for a new frontier of digestive health monitoring and treatment. The paper is published in mSystems.

"Our understanding of how the human gut microbiome behaves is still in its early stages, which has hindered large-scale research into creating biosensors out of living bacteria," said David Riglar, Ph.D., a former postdoc at the Wyss Institute and HMS who now leads a research group as a Sir Henry Dale Fellow at Imperial College London. "This work provides a high-throughput platform for identifying genetic elements in bacteria that respond to different signals in the gut, putting us one step closer to engineering complex signaling pathways in bacteria that allow them to detect and even treat diseases long-term."

The new platform builds on previous work from the lab of Wyss Founding Core Faculty member Pamela Silver, Ph.D. that designed a genetic circuit consisting of a "memory element" derived from a virus and a synthetic "trigger element" that together can detect and record the presence of a given stimulus - originally, a deactivated version of the antibiotic tetracycline. The synthetic circuit was integrated into the genomes of E. coli bacteria, which were introduced into live mice that were then given tetracycline. The antibiotic caused the trigger element in the bacterial circuit to activate the memory element, which "flipped" like a switch that remained "on" for up to a week so that the bacteria "remembered" the presence of the tetracycline. The "on" signal was then easily read by non-invasively analyzing the animals' excrement.

The team next demonstrated that the circuit could be tweaked to detect and report tetrathionate (a naturally occurring molecule that indicates the presence of inflammation) in the intestine of living mice for up to six months after being introduced into the animals, showing that their system could be used to monitor signals that would be useful for diagnosing disease states in the gut long-term.

But tetrathionate is just one molecule; in order to develop new bacteria-based diagnostics, the researchers needed a way to rapidly test different potential trigger elements to see if they could respond to more disease signals.

First, they modified the genetic circuit by adding an antibiotic-resistance gene that is activated when the memory element is flipped into its "on" state, allowing bacteria that "remember" a trigger to survive exposure to the antibiotic spectinomycin. To test their updated circuit against a wide variety of molecular signals, they created a library of different strains of E. coli that each contained the memory element and a unique trigger element in its genome. This library of bacterial strains was then introduced into the guts of live mice to see if any of the trigger elements were activated by substances in the mice's intestines. When they grew bacteria from mouse fecal samples in a medium laced with spectinomycin, they found that a number of strains grew, indicating that their memory elements had been turned on during passage through the mice. Two of the strains in particular showed consistent activation, even when given to mice in isolation, indicating that they were activated by conditions inside the mice's gut and could serve as sensors of gut-specific signals.
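
To make the screening logic concrete, the toy model below mimics the selection step in software: each strain carries a trigger with some chance of firing during gut transit, firing flips a one-way memory switch that also confers spectinomycin resistance, and only flipped strains survive plating. The strain names and probabilities are invented for illustration; the real system is a genetic circuit in E. coli, not code.

    import random

    class Strain:
        def __init__(self, name, p_trigger):
            self.name = name
            self.p_trigger = p_trigger   # chance the trigger element fires in the gut
            self.memory_on = False       # one-way "memory" switch (off until triggered)

        def pass_through_gut(self):
            if random.random() < self.p_trigger:
                self.memory_on = True    # the flip is permanent once it happens

    # A hypothetical mini-library: one dead trigger, one weak, one strong.
    library = [Strain("trigger-A", 0.0), Strain("trigger-B", 0.05), Strain("trigger-C", 0.9)]
    for strain in library:
        strain.pass_through_gut()

    # Plating on spectinomycin keeps only strains whose memory element flipped "on".
    survivors = [s.name for s in library if s.memory_on]
    print("Strains that 'remembered' a gut signal:", survivors)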

The researchers repeated the experiment using a smaller library of E. coli strains whose trigger elements were genetic sequences thought to be associated with inflammation, ten of which were activated during transit through the mice. When this library was administered to mice that had intestinal inflammation, one particular strain displayed a stronger memory response in mice with inflammation compared to healthy mice, confirming that it was able to successfully record the presence of inflammatory biomolecules in the mouse gut and thus could serve as a living monitor of gastrointestinal health.

"The beauty of this method is that it allows us to identify biosensors that already exist in nature that we wouldn't be able to design ourselves, because so much of the function and regulation of bacterial genomes is still unknown," said first author Alexander Naydich, who recently completed his Ph.D. in the Silver lab. "We're really taking advantage of the incredible genetic diversity of the microbiome to home in on potential solutions quickly and effectively."

Additional features of the system include the ability to record signals that occur either chronically or transiently in the gut, as well as adjustable sensitivity in the form of synthetic ribosome binding site (RBS) sequences engineered into the trigger elements that can control the rate at which the promoters can induce an "on" memory state in response to a signal. These capabilities allow for the fine-tuning of bacterial biosensors to detect specific conditions within the gut over a long time span.

"We have been able to advance this technology from a tool that tests for one thing to a tool that can test for multiple things concurrently, which is not only useful for identifying new potential biosensors, but could one day be developed into a probiotic-like pill containing sophisticated collections of bacteria that sense and record several signals at once, allowing clinicians to 'fingerprint' a disease and have greater confidence in making a diagnosis," said Pamela Silver, who is also the Elliot T. and Onie H. Adams Professor of Biochemistry and Systems Biology at HMS.

"The continuing advances made by the Silver team in the development of living cellular devices based on the genetic reengineering of microbiome represents an entirely new approach to low-cost diagnostics. This approach has the potential to radically transform how we interact with and control biological systems, including our own bodies," said Wyss Founding Director Donald Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at HMS and the Vascular Biology Program at Boston Children's Hospital, as well as Professor of Bioengineering at Harvard's John A. Paulson School of Engineering and Applied Sciences.

Credit: 
Wyss Institute for Biologically Inspired Engineering at Harvard

Botox cousin can reduce malaria in an environmentally friendly way

image: The newly discovered neurotoxin that specifically targets malaria mosquitoes. The structure of the part of PMP1 that 'recognizes' the malaria mosquito.

Image: 
Geoffrey Masuyer/Stockholm University

Researchers at the universities in Stockholm and Lund, in collaboration with researchers from the University of California, have found a new toxin that selectively targets mosquitoes. This could lead to innovative and environmentally friendly approaches to reducing malaria. The results are presented in an article published in Nature Communications.

Botox (Botulinum neurotoxins) and the toxin causing tetanus belong to the same family of proteins and are among the most toxic substances known. Previously this family of toxins has been believed to only target vertebrates such as humans, mice and birds. But now, researchers have found a toxin which targets the group of mosquitoes that are responsible for transmitting malaria.

"We have discovered a neurotoxin, PMP1, that selectively targets malaria mosquitos, demonstrating that this family of toxins have a much broader host spectrum than previously believed", says Pål Stenmark of Stockholm University and Lund University. He leads the joint research group from the two Swedish universities that has discovered the new neurotoxin in close collaboration with Sarjeet Gill's research group at the University of California.

"PMP1 makes it possible to reduce the prevalence of malaria in a new and environmentally friendly way. Because these toxins are proteins, they do not leave any artificial residues as they decompose. PMP1 may also be developed into biological insecticides designed to target other selected disease vectors or pests", Pål Stenmark says.

Today, insecticides and mosquito nets treated with insecticides are the main means of combating the spread of malaria, but new methods of combating malaria mosquitoes must be developed constantly as mosquitoes become resistant to most toxins over time.

"We found PMP1 in a bacterium from two threatened habitats: a mangrove swamp in Malaysia and the forest floor in Brazil. It shows just how important it is to protect these treasure chests of biological diversity", Pål Stenmark says.

Credit: 
Stockholm University

Toxic substances found in the glass and decoration of alcoholic beverage bottles

Bottles of beer, wine and spirits contain potentially harmful levels of toxic elements, such as lead and cadmium, in their enamelled decorations, a new study shows.

Researchers at the University of Plymouth analysed both the glass and enamelled decorations on a variety of clear and coloured bottles readily available in shops and supermarkets.

They showed that cadmium, lead and chromium were all present in the glass, but at concentrations where their environmental and health risks were deemed to be of low significance.

However, the enamels were of greater concern, with cadmium concentrations of up to 20,000 parts per million in the decorated regions on a range of spirits, beer and wine bottles, and lead concentrations of up to 80,000 ppm in the décor of various wine bottles. The limit for lead in consumer paints is 90 ppm.
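
Set against the 90 ppm paint limit quoted above, a quick back-of-the-envelope comparison (based only on the figures already given, and purely illustrative since the limit applies to paints rather than enamels) gives:

    \frac{80{,}000\ \text{ppm}}{90\ \text{ppm}} \approx 890, \qquad 20{,}000\ \text{ppm} = 2\%\ \text{by mass}

In other words, the highest lead levels measured in the décor were on the order of nine hundred times the consumer-paint limit, and the highest cadmium levels amounted to roughly 2% of the enamel by mass.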

The study also showed the elements had the potential to leach from enamelled glass fragments; when subjected to a standard test that simulates rainfall in a landfill site, several fragments exceeded the limits of the US Model Toxics in Packaging Legislation and could be defined as "hazardous".

Published in Environmental Science and Technology, the research was carried out by Associate Professor (Reader) in Aquatic Geochemistry and Pollution Science, Dr Andrew Turner.

He has previously shown that the paint or enamel on a wide variety of items - including playground equipment, second hand toys and drinking glasses - can feature levels of toxic substances that are potentially harmful to human health.

Dr Turner said: "It has always been a surprise to see such high levels of toxic elements in the products we use on a daily basis. This is just another example of that, and further evidence of harmful elements being unnecessarily used where there are alternatives available. The added potential for these substances to leach into other items during the waste and recycling process is an obvious and additional cause for concern."

For the current research, bottles of beer, wine and spirits were purchased from local and national retail outlets between September 2017 and August 2018, with the sizes ranging from 50 ml to 750 ml.

They were either clear, frosted, green, ultraviolet-absorbing green (UVAG) or brown with several being enamelled over part of the exterior surface with images, patterns, logos, text and/or barcodes of a single colour or multiple colours.

Of the 89 bottles and fragments whose glass was analysed using X-ray fluorescence (XRF) spectrometry, 76 were positive for low levels of lead and 55 for cadmium. Chromium was detected in all green and UVAG bottles, but in only 40% of brown glass and never in clear glass.

Meanwhile, the enamels of 12 products out of 24 enamelled products tested were based wholly or partly on compounds of either or both lead and cadmium.

Dr Turner added: "Governments across the world have clear legislation in place to restrict the use of harmful substances on everyday consumer products. But when we contacted suppliers, many of them said the bottles they use are imported or manufactured in a different country than that producing the beverage. This poses obvious challenges for the glass industry and for glass recycling and is perhaps something that needs to be factored in to future legislation covering this area."

Credit: 
University of Plymouth

Sea slugs use algae's bacterial 'weapons factory' in three-way symbiotic relationship

video: Researchers have found a surprising twist in the story of a sea slug that acquires defensive chemicals from eating algae. The toxic chemicals, kahalalides, are actually made by bacteria that live inside the algae. The slug, algae and bacteria form a three-way symbiotic relationship.

Image: 
Mohamed Donia, Princeton University

Delicate yet voracious, the sea slug Elysia rufescens grazes cow-like on bright green tufts of algae, rooting around to find the choicest bits.

But this inch-long marine mollusk gains not only a tasty meal -- it also slurps up the algae's defensive chemicals, which the slug can then deploy against its own predators.

In a new study, a Princeton-led team has discovered that these toxic chemicals originate from a newly identified species of bacteria living inside the algae. The team found that the bacteria have become so dependent on their algal home that they cannot survive on their own. In turn, the bacteria devote at least a fifth of their metabolic efforts to making poisonous molecules for their host.

These three intertwined characters -- the sea slug E. rufescens, marine algae of the genus Bryopsis, and the newly identified bacteria -- form a three-way symbiotic relationship. A symbiotic relationship is one in which several organisms closely interact. In this example, the slug gets food and defensive chemicals, the algae get chemicals, and the bacteria get a home and free meals for life in the form of nutrients from their algal host.

"It's a complicated system and a very unique relationship among these three organisms," said Mohamed Donia, assistant professor of molecular biology at Princeton University and senior author on the study. "The implications are big for our understanding of how bacteria, plants and animals form mechanistic dependencies, where biologically active molecules transcend the original producer and end up reaching and benefitting a network of interacting partners."

Researchers at Princeton and the University of Maryland Center for Environmental Science's Institute of Marine and Environmental Technology unwound this tale using powerful genomic techniques to decipher who does what in the relationship. They sequenced the collective genomic information of the slugs, algae and their microbiomes, which are the bacteria that live inside these organisms. Then they used computer algorithms to figure out which genes belonged to which organism. Through this method they identified the new bacterial species and linked it to the production of the toxins.

The team found that the bacterial species, which they named Candidatus Endobryopsis kahalalidefaciens, produces some 15 different toxins, known as kahalalides. These chemicals are known to act as a deterrent to surrounding fish and other marine animals. At least one of the kahalalides has been evaluated as a potential cancer drug because of its potent toxicity.

The researchers also discovered that the bacteria have permanently sacrificed their independence for a life of security, as they no longer possess the genes required for survival outside the algae. Instead, about a fifth of the bacteria's genome is directed toward pumping out toxic molecules that stop predators from eating the bacterium's home.

One predator that can eat the toxins is the slug E. rufescens. The slug stores them, building up a chemical arsenal that is ten times more concentrated than the toxins in the algae.

One of the questions the team asked was whether the slug acquires not just the chemicals but also the factory -- the bacteria -- itself. But they found that the slug doesn't retain the ingested bacteria but rather digests them as food, keeping just the chemicals.

Elysia rufescens, named for its reddish hue, lives in warm shallow waters in various locations including Hawaii, where the researchers collected the slugs. Elysia belongs to a family of "solar-powered slugs," so named because they sequester, along with the defensive chemicals, the algae's energy-making photosynthetic machinery, making them some of the few animals in the world that create their own nutrients from sunlight.

Donia became interested in how algae make chemical defenses because several other marine organisms -- such as sponges and tunicates -- use bacterial symbionts to make toxins. When he examined the chemical structures of the kahalalides, he found that they suggested the molecules were made by bacteria or fungi.

For assistance he turned to Russell Hill, professor at the University of Maryland Center for Environmental Science and a world expert in marine ecology, including this system. Hill and his then-graduate student Jeanette Davis assisted Donia and Princeton postdoctoral researchers Jindong Zan, Zhiyuan Li and Maria Diarey Tianero in collecting the algae and slugs in Hawaii. Zan and Li share co-first authorship on the study.

"Our collaboration, building on the work of colleagues and under the leadership of Mohamed, has finally solved the long-standing mystery of the true producer of the kahalalide compounds," Hill said. "It is so satisfying to now understand the remarkable bacterium and its pathways that synthesize these complex compounds."

The team compared the bacteria to a factory because the organism consumes raw materials in the form of amino acids supplied from the algae and releases a finished product in the form of toxic chemicals.

This theme of specialized bacterial symbionts that have evolved to perform one function -- to make defensive molecules for the host in exchange for a protected living space -- appears to be surprisingly common in the marine environment, from algae to tunicates to sponges, Donia said.

This is the second such relationship the team has identified. Their previous study, published April 1 in the journal Nature Microbiology, identified a bacterium that lives in symbiosis with marine sponges and produces toxins that protect the sponge from predation.

"The weirdest thing is that the sponge has actually evolved a specialized type of cells, which we called 'chemobacteriocytes,' dedicated entirely to housing and maintaining a culture of this bacterium," Donia said. "This is very strange, given the small number of specialized sponge cells in general. Again, the bacterium cannot produce the substrates and cannot live on its own."

Credit: 
Princeton University

NJIT conducts the largest-ever simulation of the Deepwater Horizon spill

image: A team of New Jersey Institute of Technology (NJIT) researchers is conducting oil dispersion experiments at the 600-ft.-long salt water wave tank at the US Department of the Interior's Ohmsett facility on the Jersey Shore.

Image: 
NJIT

In a 600-ft.-long saltwater wave tank on the coast of New Jersey, a team of New Jersey Institute of Technology (NJIT) researchers is conducting the largest-ever simulation of the Deepwater Horizon spill to determine more precisely where hundreds of thousands of gallons of oil dispersed following the drilling rig's explosion in the Gulf of Mexico in 2010.

Led by Michel Boufadel, director of NJIT's Center for Natural Resources (CNR), the initial phase of the experiment involved releasing several thousand gallons of oil from a one-inch pipe dragged along the bottom of the tank in order to reproduce ocean current conditions.

"The facility at Ohmsett allows us to simulate as closely as possible the conditions at sea, and to thus observe how droplets of oil formed and the direction and distance they traveled," Boufadel said.

Later this summer, his team will conduct the second phase of the experiment, when they will apply dispersants to the oil as it shoots into the tank to observe the effects on droplet formation and trajectory.

His team's research, conducted at the U.S. Department of the Interior's Ohmsett facility at Naval Weapons Station Earle in Leonardo, N.J., was detailed in a recent article, "The perplexing physics of oil dispersants," in the Proceedings of the National Academy of Sciences (PNAS).

"These experiments are the largest ever conducted by a university in terms of the volume of oil released and the scale," he noted. "The data we obtained, which has not been published yet, is being used by other researchers to calibrate their models."

The team expects to come away from these experiments with insights they can apply to a variety of ocean-based oil releases.

"Rather than limiting ourselves to a forensic investigation of the Deepwater Horizon release, we are using that spill to explore spill scenarios more generally," Boufadel said. "Our goal is not to prepare for the previous spill, but to broaden the horizons to explore various scenarios."

More than nine years after the Deepwater Horizon drilling rig exploded, sending up to 900,000 tons of oil and natural gas into the Gulf of Mexico, there are, however, lingering questions about the safety and effectiveness of a key element of the emergency response: injecting chemicals a mile below the ocean surface to break up oil spewing from the ruptured sub-sea wellhead to prevent it from reaching environmentally sensitive regions.

To date, spill cleanups have focused primarily on removing or dispersing oil on the ocean surface and shoreline, habitats deemed more important ecologically. Knowledge of the deep ocean is in general far murkier, and at the time of the accident, BP's drilling operation was the deepest in the world.

Two years ago, Boufadel and collaborators from the Woods Hole Oceanographic Institution, NJIT, Texas A&M University and the Swiss Federal Institute of Aquatic Science and Technology pooled their scientific and technical expertise to provide some of the first answers to these controversial policy questions.

The team began by developing physical models and computer simulations to determine the course the oil and gas took following the eruption, including the fraction of larger, more buoyant droplets that floated to the surface and the amount of smaller droplets entrapped deep below it due to sea stratification and currents. Boufadel and Lin Zhao, a postdoctoral researcher in the CNR, developed a model that predicted the size of droplets and gas bubbles emanating from the wellhead during the sub-surface blowout; they then factored in water pressure, temperature and oil properties into the model, and employed it to analyze the effects of the injected dispersants on this stream.

"Among other tests of our model, we studied the hydrodynamics of various plumes of oil jetting into different wave tanks," Zhao noted. Researchers at Texas A&M in turn created a model to study the movement of pollutants away from the wellhead.

The researchers determined that the use of dispersants had a substantial impact on air quality in the region of the spill by reducing the amount of toxic compounds such as benzene that reached the surface of the ocean, thus protecting emergency workers on the scene from the full brunt of the pollution. Their study was published in PNAS.

"Government and industry responders were faced with an oil spill of unprecedented size and sea depth, pitting them in a high-stakes battle against big unknowns," Christopher Reddy, a senior scientist at Woods Hole Oceanographic Institution, and Samuel Arey, a senior researcher at the Swiss Federal Institute of Aquatic Science and Technology, wrote in Oceanus magazine.

"Environmental risks posed by deep-sea petroleum releases are difficult to predict and assess due to the lack of prior investigations," Boufadel noted. "There is also a larger debate about the impact of chemical dispersants. There is a school of thought that says all of the oil should be removed mechanically."

Boufadel added that the water-soluble and volatile compounds that did not reach the surface were entrapped in a water mass that formed a stable intrusion at 900 to 1,300 meters below the surface.

"These predictions depend on local weather conditions that can vary from day to day. However, we predict that clean-up delays would have been much more frequent if subsurface dispersant injection had not been applied," Reddy and Arey said, adding, "But this is not the final say on the usage of dispersants."

The current experiment is an attempt to provide more definitive answers.

Credit: 
New Jersey Institute of Technology

G20 leaders: Achieving universal health coverage should top your agenda

SEATTLE - G20 leaders meeting in Japan this week should focus on fulfilling their obligations to improve and expand their nations' health care systems.

In a commentary published today, 20 health data, financing, and policy experts contend that funding for low- and middle-income nations must be increased to address the growing impacts of climate change, wars and conflicts, and a global political trend toward nationalism. They also argue that increased domestic funding is needed to achieve the United Nations' Sustainable Development Goals (SDGs), including universal health coverage.

"Achieving universal health coverage should be at the top of the agenda for this meeting of world leaders," said Dr. Christopher Murray, Director of the Institute for Health Metrics and Evaluation at the University of Washington's School of Medicine. "The G20 leaders should assess how to encourage channeling resources to improve primary health care, as well as prevention and treatment of non-communicable diseases and to strengthen and support leadership, governance and accountability across all levels of health systems. We've witnessed a decade of plateaued funding and with the deadline to meet the SDGs just 11 years away, the world is watching."

Dr. Murray and other authors examined trends in spending for international development between 2012 and 2017, and are urging the G20 leaders to address three questions:

How do you allocate funds to deliver equitable health improvement in people's lives?

How do you deliver those funds to strengthen health systems?

How do you support domestic spending in poor countries and create more effective partnerships to deliver universal health coverage?

"The landscape of development assistance for health is evolving, and therefore ripe for any desired realignment," the authors write in the commentary, which was published in the international medical journal The Lancet. "Reductions in child poverty and fertility throughout the world mean that many countries are undergoing demographic and epidemiological transitions, with their populations living longer and enduring a more diverse set of ailments."

The commentary notes that from 2000 to 2010, development assistance for health grew at a rate of 10% annually, but that since 2010 growth has plateaued at 1.3% annually. In 2018, $38.9 billion (USD) was provided, with 65.2% coming from G20 members. This $38.9 billion represents 0.05% of the G20 nations' combined economies.

In addition, the commentary calls out the G20 nations for their levels of funding, the annual rate of change in funding provided between 2012 and 2017, and the health sectors for which those funds were earmarked. Among the highlights:

India (43.4%), Brazil (37.2%), and Indonesia (30.9%) had the highest percentage increases in development assistance provided between 2012 and 2017.

The greatest decreases in development assistance provided between 2012 and 2017 were in Saudi Arabia (-19.4%), Australia (-16.0%), and Russia (-10.1%).

The US' increase over the same time period was 0.9%, while the UK's was 2.6%.

South Africa was the lowest G20 contributor with $5.2 million spent in 2017, while the US was the highest at $14.4 billion.

"The global health challenges and expansive set of global health goals in the SDGs require a new approach to address pending questions about how development assistance for health can better prioritize equity, efficiency, and sustainability, particularly through domestic resource use and mobilization and strategic partnerships," the authors write.

Credit: 
Institute for Health Metrics and Evaluation

Artificial intelligence controls robotic arm to pack boxes and cut costs

image: An automation workspace with a Kuka robotic arm and a bin containing a pile of objects that need to be tightly packed into a shipping order box. The Rutgers robotic packing system is designed to overcome errors during packing.

Image: 
Rahul Shome/Rutgers University-New Brunswick

Rutgers computer scientists used artificial intelligence to control a robotic arm that provides a more efficient way to pack boxes, saving businesses time and money.

"We can achieve low-cost, automated solutions that are easily deployable. The key is to make minimal but effective hardware choices and focus on robust algorithms and software," said the study's senior author Kostas Bekris, an associate professor in the Department of Computer Science in the School of Arts and Sciences at Rutgers University-New Brunswick.

Bekris, Abdeslam Boularias and Jingjin Yu, both assistant professors of computer science, formed a team to deal with multiple aspects of the robot packing problem in an integrated way through hardware, 3D perception and robust motion planning.

The scientists' peer-reviewed study was published recently at the IEEE International Conference on Robotics and Automation, where it was a finalist for the Best Paper Award in Automation. The study coincides with the growing trend of deploying robots to perform logistics, retail and warehouse tasks. Advances in robotics are accelerating at an unprecedented pace due to machine learning algorithms that allow for continuous experiments.

A video accompanying the study shows a Kuka robotic arm tightly packing objects from a bin into a shipping order box (at five times actual speed).

Tightly packing products picked from an unorganized pile remains largely a manual task, even though it is critical to warehouse efficiency. Automating such tasks is important for companies' competitiveness and allows people to focus on less menial and physically taxing work, according to the Rutgers scientific team.

The Rutgers study focused on placing objects from a bin into a small shipping box and tightly arranging them. This is a more difficult task for a robot compared with just picking up an object and dropping it into a box.

The researchers developed software and algorithms for their robotic arm. They used visual data and a simple suction cup, which doubles as a finger for pushing objects. The resulting system can topple objects to get a desirable surface for grabbing them. Furthermore, it uses sensor data to pull objects toward a targeted area and push objects together. During these operations, it uses real-time monitoring to detect and avoid potential failures.
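
The paper describes the full perception, planning and monitoring pipeline; as a toy illustration of just one sub-problem, tightly arranging picked items, the Python sketch below packs rectangular footprints into a box using a simple shelf heuristic. It is a simplification for illustration only, not the Rutgers algorithm.

    # Toy shelf-packing heuristic for placing cuboid footprints tightly in a box.
    # An illustrative simplification, not the planner described in the study.
    def pack_items(box_w, box_d, items):
        """Place (w, d) footprints left-to-right in rows ("shelves").

        Returns a list of (x, y) positions, or None for items that do not fit."""
        placements = []
        x, y, row_depth = 0.0, 0.0, 0.0
        for w, d in items:
            if x + w > box_w:            # current row is full: start a new row
                x, y = 0.0, y + row_depth
                row_depth = 0.0
            if y + d > box_d or w > box_w:
                placements.append(None)  # item does not fit in the remaining space
                continue
            placements.append((x, y))
            x += w
            row_depth = max(row_depth, d)
        return placements

    # Example: a 30 cm x 20 cm box footprint and four item footprints
    print(pack_items(30, 20, [(10, 8), (12, 6), (9, 8), (15, 10)]))

As the article notes, a real system must also cope with sensing noise and placement errors, which is where the pushing actions and real-time failure monitoring come in.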

Since the study focused on packing cube-shaped objects, a next step would be to explore packing objects of different shapes and sizes. Another step would be to explore automatic learning by the robotic system after it's given a specific task.

Credit: 
Rutgers University

Solar energy could turn the Belt and Road Initiative green

image: This visual abstract summarizes how, given the region's huge but unevenly distributed solar potential, cooperation and interconnection by way of the BRI offer an opportunity to decouple future economic growth from increasing carbon emissions.

Image: 
Zhan Wang

The region covered by the Belt and Road Initiative (BRI) has significant potential to be powered by solar energy, researchers report June 27 in the journal Joule. Converting less than 4 percent of the region's maximum solar potential would be enough to meet the BRI's electricity demand for 2030. The research, the first to quantify the region's renewable energy potential, suggests a possible way to reduce BRI countries' reliance on fossil fuels as they develop.

The Chinese government launched the BRI in 2013, aiming to promote regional development and connectivity. "Belt" represents the Silk Road Economic Belt that echoes the ancient Silk Road, which linked Asia to Europe. "Road" refers to the 21st Century Maritime Silk Road that connects China to South East Asia, South Asia, and North Africa. So far, more than 120 countries in Asia, Europe, Africa, North America, South America, and Oceania are involved.

Constructing hard infrastructure, such as railways, buildings, and power plants, is a main focus of the initiative. However, most of these projects use large amounts of energy, resulting in high emissions. In addition, most countries involved in the BRI are developing countries, and a share of their populations still lacks access to electricity. As the region develops under the initiative, the need for power is projected to increase.

"If we continue to rely on fossil fuels for energy, it can add significantly more CO2 to the atmosphere, not just this year, but for the next few decades," says co-author Xi Lu at Tsinghua University. "This is not sustainable. If we want to achieve the emission reduction goal set by the Paris Agreement, we need renewable energy."

Many BRI countries, especially those in West and South Asia, have high sun exposure, so Lu and his colleagues decided to assess the region's solar resource. The team selected 66 BRI countries that are connected geographically and built an integrative spatial model to calculate their solar power potential with high-resolution data.

The team first identified areas suitable for building solar farms: places that receive sufficient solar radiation and have low competing land value, with land such as forests and farmland excluded. They then computed the spacing and packing density of solar panels - which absorb sunlight and generate energy - that would maximize the power yield of each area. Finally, they calculated each area's energy output for every hour, accounting for limiting factors such as shading and temperature, which affect the performance of solar panels.
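
The study's integrative spatial model is far more detailed; as a rough sketch of that final step, the Python snippet below estimates a single panel's hourly output from irradiance and ambient temperature using a standard temperature-derating approximation. The coefficients are generic assumptions, not values taken from the paper.

    # Illustrative hourly PV output estimate; coefficients are generic assumptions,
    # not the values used in the study's model.
    def hourly_pv_output_kwh(irradiance_w_m2, ambient_temp_c, panel_area_m2,
                             efficiency_stc=0.17, temp_coeff=-0.004, noct=45.0):
        """Energy (kWh) produced by one panel in one hour.

        efficiency_stc: module efficiency at standard test conditions (25 C)
        temp_coeff:     fractional efficiency loss per degree C above 25 C
        noct:           nominal operating cell temperature, used to estimate heating"""
        # Approximate cell temperature from ambient temperature and irradiance
        cell_temp = ambient_temp_c + (noct - 20.0) * irradiance_w_m2 / 800.0
        efficiency = efficiency_stc * (1.0 + temp_coeff * (cell_temp - 25.0))
        return irradiance_w_m2 * panel_area_m2 * max(efficiency, 0.0) / 1000.0

    # Example: a 2 m^2 panel in a hot, bright hour vs. a cooler, dimmer hour
    print(hourly_pv_output_kwh(800, 35, 2.0))
    print(hourly_pv_output_kwh(400, 15, 2.0))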

"Our model provides a comprehensive analysis of the region's solar energy potential by taking into account many influencing factors," Lu says. "We also calculated the solar energy outputs on an hourly basis, which is more accurate than previous estimates that use monthly data."

The team found that these countries could generate as much as 448.9 petawatt-hours of electricity, about 41 times their total electricity demand in 2016. Their 2030 electricity needs could be met by converting only 3.7 percent of that solar potential, which would require an investment of $11.2 trillion and a land area of 88,426 square kilometers.
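
As a quick back-of-the-envelope check on how those headline figures relate to one another (using only the numbers quoted above):

    # Relating the figures quoted above; rounding differences aside.
    potential_pwh = 448.9                    # estimated annual solar potential, PWh
    demand_2016_pwh = potential_pwh / 41     # "about 41 times" 2016 demand -> ~11 PWh
    demand_2030_pwh = potential_pwh * 0.037  # 3.7 percent of the potential -> ~16.6 PWh

    print(round(demand_2016_pwh, 1), "PWh implied 2016 demand")
    print(round(demand_2030_pwh, 1), "PWh implied 2030 demand")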

"The money is very large," says co-author Michael McElroy at Harvard University. "but if you make that commitment, the energy is free. Plus, the cost of building solar farms is coming down very dramatically because of the technological advances. We project it to become similar to fossil fuels within a decade."

The analysis also reveals a mismatch between energy potential and electricity demand: countries holding 70.7% of the potential consume only 30.1% of the region's electricity. Cross-border transmission grids could therefore maximize the benefits of solar energy by exporting surpluses of solar electricity to meet shortfalls elsewhere. Putting such a project into action, however, will require international cooperation.

"It would be challenging, because different countries have different priorities when it comes to development," Lu says. "But the BRI is an opportunity as it sets up a framework for collaborations between countries, associations, and industries to happen. There are also funds and banks committed to promoting green development of the BRI, which provides financial support."

Because BRI countries span multiple time zones and various climate conditions, such cross-border grids would also help reduce the impact when sunlight isn't available in certain areas.

"This advantage coincides with the 'Facilities Connectivity' concept, which is one of the five cooperation priorities of the BRI," says the first author Shi Chen at Tsinghua University. "In the context of Global Energy Interconnection (GEI), solar power generation is bound to usher in a new development opportunity in the wave of trans-national and even trans-regional power interconnection."

"The solar potential and cooperation opportunities revealed in this analysis is a chance for the BRI countries to leapfrog from their carbon-intensive trajectories to low-carbon futures," says co-author Jiming Hao at Tsinghua University. "The opportunity to decouple future economic growth from increasing carbon emissions does exist."

"Our hope here is that this paper can influence the greening of BRI, so we can try to do the initiative in a better way," says McElroy. "And I'm optimistic about that."

Credit: 
Cell Press

'Mother-of-pearl' inspired glass shatters the impact performance of alternatives

Inspired by the properties of nacre - the opalescent biological composite found inside seashells - researchers have engineered a new glass that is ductile, tough and highly impact-resistant. The mother-of-pearl-inspired material more than doubles the impact tolerance of widely used tempered and laminated glass, whilst maintaining the qualities that make glass one of the most ubiquitous materials of everyday life. Prized for its optical, thermal, electrical, chemical and mechanical properties, glass is found in everything from high-rise buildings to mobile phones. For all these strengths, however, glass is an inherently brittle material. While tempered and laminated glasses can be more impact-resistant than conventional glass, they too generally lack the toughness required for applications in which material failure carries high consequences.

To build a better glass, Zhen Yin and colleagues looked to nacre, the naturally impact-resistant material that lines the shells of mollusks and shields their soft bodies from strong predator jaws. The key to nacre's toughness lies in its construction: built like a brick wall, it consists of stacked layers of microscopic mineral tablets bonded by biopolymers, which can slide past one another under stress. Yin et al. engineered a laminated glass with similar capabilities from borosilicate glass sheets layered and bonded with a synthetic polymer, ethylene-vinyl acetate. By mimicking nacre's "tablet sliding mechanism," the material dissipates large amounts of applied mechanical energy that would otherwise cause the glass to shatter.

According to the results, the nacre-like glass is two to three times more impact-resistant than tempered and laminated glass. In a related Perspective, Kyriaki Datsiou discusses the limits of the new glass and avenues by which it could be improved.

Credit: 
American Association for the Advancement of Science (AAAS)