Tech

Short term low carbohydrate diet linked to remission of type 2 diabetes

Patients with type 2 diabetes who follow a strict low carbohydrate diet for six months may experience greater rates of remission compared with other recommended diets without adverse effects, suggests a study published by The BMJ today.

The researchers acknowledge that most benefits diminished at 12 months, but say doctors might consider short term strict low carbohydrate diets for managing type 2 diabetes, while actively monitoring and adjusting diabetes medication as needed.

Type 2 diabetes is the most common form of diabetes worldwide and diet is recognised as an essential part of treatment. But uncertainty remains about which diet to choose and previous studies have reported mixed results.

To address this evidence gap, a team of international researchers set out to assess the effectiveness and safety of low carbohydrate diets (LCDs) and very low carbohydrate diets (VLCDs) for people with type 2 diabetes, compared with (mostly low fat) control diets.

Their findings are based on analysis of published and unpublished data from 23 randomised trials involving 1,357 participants.

LCDs were defined as less than 26% daily calories from carbohydrates and VLCDs were defined as less than 10% daily calories from carbohydrates for at least 12 weeks in adults (average age 47 to 67 years) with type 2 diabetes.

Outcomes were reported at six and 12 months and included remission of diabetes (reduced blood sugar levels with or without the use of diabetes medication), weight loss, adverse events and health related quality of life.

Although the trials were designed differently, and were of varying quality, the researchers were able to allow for this in their analysis.

Based on low to moderate certainty evidence, the researchers found that patients on LCDs achieved higher diabetes remission rates at six months compared with patients on control diets, without adverse events.

For example, based on moderate certainty evidence from eight trials with 264 participants, those following an LCD experienced, on average, a 32% absolute increase in diabetes remission (28 more cases per 100 followed) at six months.
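
For readers unfamiliar with absolute risk differences, the arithmetic behind a figure like "28 more cases per 100 followed" is simply the difference in remission rates between the two arms. The short sketch below uses invented counts purely for illustration; they are not the trial data.

```python
# Hypothetical illustration of an absolute risk difference (ARD) calculation.
# These counts are invented for illustration; they are NOT the trial's data.

lcd_remissions, lcd_total = 60, 100          # remissions on low carbohydrate diets
control_remissions, control_total = 32, 100  # remissions on control diets

lcd_risk = lcd_remissions / lcd_total               # 0.60
control_risk = control_remissions / control_total   # 0.32

ard = lcd_risk - control_risk  # absolute risk difference, here +0.28
print(f"Absolute risk difference: {ard:.0%} "
      f"({round(ard * 100)} more remissions per 100 patients followed)")
```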

LCDs also increased weight loss, reduced medication use, and lowered triglyceride (blood fat) concentrations at six months.

However, most of these benefits diminished at 12 months, a finding consistent with previous reviews, and some evidence showed worsening of quality of life and cholesterol levels at 12 months.

This study used robust methods to increase the precision and overall certainty of the effect estimates. But the authors acknowledge some limitations, such as the ongoing debate around what constitutes remission of diabetes, and uncertainty over the longer term effectiveness and safety of LCDs.

They also stress that their results are based on moderate to low certainty evidence.

As such, they suggest clinicians "might consider short term LCDs for management of type 2 diabetes, while actively monitoring and adjusting diabetes medication as needed."

"Future long term, well designed, calorie controlled randomised trials are needed to determine the effects of LCD on sustained weight loss and remission of diabetes, as well as cardiovascular mortality and major morbidity," they conclude.

Credit: 
BMJ Group

Melting icebergs key to sequence of an ice age, scientists find

Scientists claim to have found the 'missing link' in the process that leads to an ice age on Earth.

Melting icebergs in the Antarctic are the key, say the team from Cardiff University, triggering a series of chain reactions that plunges Earth into a prolonged period of cold temperatures.

The findings, from an international consortium of scientists at universities around the world, are published today in Nature.

It has long been known that ice age cycles are paced by periodic changes to Earth's orbit around the sun, which in turn change the amount of solar radiation that reaches the Earth's surface.

However, up until now it has been a mystery as to how small variations in solar energy can trigger such dramatic shifts in the climate on Earth.

In their study, the team propose that when the orbit of Earth around the sun is just right, Antarctic icebergs begin to melt further and further away from Antarctica, shifting huge volumes of freshwater away from the Southern Ocean and into the Atlantic Ocean.

As the Southern Ocean gets saltier and the North Atlantic gets fresher, large-scale ocean circulation patterns begin to dramatically change, pulling CO2 out of the atmosphere and reducing the so-called greenhouse effect.

This in turn pushes the Earth into ice age conditions.

As part of their study the scientists used multiple techniques to reconstruct past climate conditions, which included identifying tiny fragments of Antarctic rock dropped in the open ocean by melting icebergs.

The rock fragments were obtained from sediments recovered by the International Ocean Discovery Program (IODP) Expedition 361, representing over 1.6 million years of history and one of the longest detailed archives of Antarctic icebergs.

The study found that these deposits, known as Ice-Rafted Debris, appeared to consistently lead to changes in deep ocean circulation, reconstructed from the chemistry of tiny deep-sea fossils called foraminifera.

The team also used new climate model simulations to test their hypothesis, finding that huge volumes of freshwater could be moved by the icebergs.

Lead author of the study Aidan Starr, from Cardiff University's School of Earth and Environmental Sciences, said: "We were astonished to find that this lead-lag relationship was present during the onset of every ice age for the last 1.6 million years. Such a leading role for the Southern Ocean and Antarctica in global climate has been speculated but seeing it so clearly in geological evidence was very exciting".

Professor Ian Hall, co-author of the study and co-chief scientist of the IODP Expedition, also from the School of Earth and Environmental Sciences, said: "Our results provide the missing link into how Antarctica and the Southern Ocean responded to the natural rhythms of the climate system associated with our orbit around the sun."

Over the past 3 million years the Earth has regularly plunged into ice age conditions, but it is currently in an interglacial period, in which temperatures are warmer.

However, the researchers suggest that, because of increased global temperatures resulting from anthropogenic CO2 emissions, the natural rhythm of ice age cycles may be disrupted: the Southern Ocean will likely become too warm for Antarctic icebergs to travel far enough to trigger the changes in ocean circulation required for an ice age to develop.

Professor Hall believes that the results can be used to understand how our climate may respond to anthropogenic climate change in the future.

"Likewise as we observe an increase in the mass loss from the Antarctic continent and iceberg activity in the Southern Ocean, resulting from warming associated with current human greenhouse-gas emissions, our study emphasises the importance of understanding iceberg trajectories and melt patterns in developing the most robust predictions of their future impact on ocean circulation and climate," he said.

Professor Grant Bigg, from the University of Sheffield's Department of Geography, who contributed to the iceberg model simulations, said: "The groundbreaking modelling of icebergs within the climate model is crucial for identifying and supporting the ice-rafted debris hypothesis of Antarctic iceberg meltwater impacts which are leading glacial cycle onsets."

Credit: 
Cardiff University

Scientists reveal mechanism that causes irritable bowel syndrome

image: About 20% of people suffer from IBS.

Image: 
Polina Zimmerman

KU Leuven researchers have identified the biological mechanism that explains why some people experience abdominal pain when they eat certain foods. The finding paves the way for more efficient treatment of irritable bowel syndrome and other food intolerances. The study, carried out in mice and humans, was published in Nature.

Up to 20% of the world's population suffers from irritable bowel syndrome (IBS), which causes stomach pain or severe discomfort after eating and diminishes quality of life. Gluten-free and other diets can provide some relief, but why they work is a mystery, since these patients are not allergic to the foods in question, nor do they have known conditions such as coeliac disease.

"Very often these patients are not taken seriously by physicians, and the lack of an allergic response is used as an argument that this is all in the mind, and that they don't have a problem with their gut physiology," says Professor Guy Boeckxstaens, a gastroenterologist at KU Leuven and lead author of the new research. "With these new insights, we provide further evidence that we are dealing with a real disease."

Histamine

His team's laboratory and clinical studies reveal a mechanism that connects certain foods with activation of the cells that release histamine (called mast cells), and subsequent pain and discomfort. Earlier work by Professor Boeckxstaens and his colleagues showed that blocking histamine, an important component of the immune system, improves the condition of people with IBS.

In a healthy intestine, the immune system does not react to foods, so the first step was to find out what might cause this tolerance to break down. Since people with IBS often report that their symptoms began after a gastrointestinal infection, such as food poisoning, the researchers started with the idea that an infection while a particular food is present in the gut might sensitise the immune system to that food.

They infected mice with a stomach bug, and at the same time fed them ovalbumin, a protein found in egg white that is commonly used in experiments as a model food antigen. An antigen is any molecule that provokes an immune response. Once the infection cleared, the mice were given ovalbumin again, to see if their immune systems had become sensitised to it. The results were affirmative: the ovalbumin on its own provoked mast cell activation, histamine release, and digestive intolerance with increased abdominal pain. This was not the case in mice that had not been infected with the bug and received ovalbumin.

A spectrum of food-related immune diseases

The researchers were then able to unpick the series of events in the immune response that connected the ingestion of ovalbumin to activation of the mast cells. Significantly, this immune response only occurred in the part of the intestine infected by the disruptive bacteria. It did not produce more general symptoms of a food allergy.

Professor Boeckxstaens speculates that this points to a spectrum of food-related immune diseases. "At one end of the spectrum, the immune response to a food antigen is very local, as in IBS. At the other end of the spectrum is food allergy, comprising a generalised condition of severe mast cell activation, with an impact on breathing, blood pressure, and so on."

The researchers then went on to see if people with IBS reacted in the same way. When food antigens associated with IBS (gluten, wheat, soy and cow's milk) were injected into the intestinal wall of 12 IBS patients, they produced localised immune reactions similar to those seen in the mice. No reaction was seen in healthy volunteers.

The relatively small number of people involved means this finding needs further confirmation, but it appears significant when considered alongside the earlier clinical trial showing improvement during treatment of IBS patients with anti-histaminics. "This is further proof that the mechanism we have unravelled has clinical relevance," Professor Boeckxstaens says.

A larger clinical trial of the antihistamine treatment is currently under way. "But knowing the mechanism that leads to mast cell activation is crucial, and will lead to novel therapies for these patients," he goes on. "Mast cells release many more compounds and mediators than just histamine, so if you can block the activation of these cells, I believe you will have a much more efficient therapy."

Credit: 
KU Leuven

Robotic swarm swims like a school of fish

image: These fish-inspired robots can synchronize their movements without any outside control. Based on the simple production and detection of LED light, the robotic collective exhibits complex self-organized behaviors, including aggregation, dispersion and circle formation.

Image: 
Image courtesy of Self-organizing Systems Research Group

Schools of fish exhibit complex, synchronized behaviors that help them find food, migrate and evade predators. No one fish or team of fish coordinates these movements nor do fish communicate with each other about what to do next. Rather, these collective behaviors emerge from so-called implicit coordination -- individual fish making decisions based on what they see their neighbors doing.

This type of decentralized, autonomous self-organization and coordination has long fascinated scientists, especially in the field of robotics.

Now, a team of researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Wyss Institute for Biologically Inspired Engineering have developed fish-inspired robots that can synchronize their movements like a real school of fish, without any external control. It is the first time researchers have demonstrated complex 3D collective behaviors with implicit coordination in underwater robots.

"Robots are often deployed in areas that are inaccessible or dangerous to humans, areas where human intervention might not even be possible," said Florian Berlinger, a PhD Candidate at SEAS and Wyss and first author of the paper. "In these situations, it really benefits you to have a highly autonomous robot swarm that is self-sufficient. By using implicit rules and 3D visual perception, we were able to create a system that has a high degree of autonomy and flexibility underwater where things like GPS and WiFi are not accessible."

The research is published in Science Robotics.

The fish-inspired robotic swarm, dubbed Blueswarm, was created in the lab of Radhika Nagpal, the Fred Kavli Professor of Computer Science at SEAS and Associate Faculty Member at the Wyss Institute. Nagpal's lab is a pioneer in self-organizing systems, from their 1,000 robot Kilobot swarm to their termite-inspired robotic construction crew.

However, most previous robotic swarms operated in two-dimensional space. Three-dimensional spaces, like air and water, pose significant challenges to sensing and locomotion.

To overcome these challenges, the researchers developed a vision-based coordination system for their fish robots based on blue LED lights. Each underwater robot, called a Bluebot, is equipped with two cameras and three LED lights. The on-board, fish-eye lens cameras detect the LEDs of neighboring Bluebots and use a custom algorithm to determine their distance, direction and heading. Based on the simple production and detection of LED light, the researchers demonstrated that the Blueswarm could exhibit complex self-organized behaviors, including aggregation, dispersion and circle formation.

"Each Bluebot implicitly reacts to its neighbors' positions," said Berlinger. "So, if we want the robots to aggregate, then each Bluebot will calculate the position of each of its neighbors and move towards the center. If we want the robots to disperse, the Bluebots do the opposite. If we want them to swim as a school in a circle, they are programmed to follow lights directly in front of them in a clockwise direction. "

The researchers also simulated a simple search mission with a red light in the tank. Using the dispersion algorithm, the Bluebots spread out across the tank until one comes close enough to the light source to detect it. Once the robot detects the light, its LEDs begin to flash, which triggers the aggregation algorithm in the rest of the school. From there, all the Bluebots aggregate around the signaling robot.
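
Read as a protocol, the search mission amounts to a small per-robot state machine. The sketch below is only a schematic reading of the description above, with hypothetical function and field names, not the published controller.

```python
def search_mission_step(robot, red_light_detected, any_neighbor_flashing):
    """One decision step of the simulated search mission (schematic sketch).

    robot: dict with a boolean 'flashing' field; the two flags come from the
    robot's own camera (red light seen?) and from observing neighbors' LEDs.
    """
    if red_light_detected:
        robot["flashing"] = True   # signal the find to the rest of the school
        return "hold"              # stay near the light source
    if any_neighbor_flashing:
        return "aggregate"         # converge on the signaling robot
    return "disperse"              # keep spreading out across the tank
```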

"Our results with Blueswarm represent a significant milestone in the investigation of underwater self-organized collective behaviors," said Nagpal. "Insights from this research will help us develop future miniature underwater swarms that can perform environmental monitoring and search in visually-rich but fragile environments like coral reefs. This research also paves a way to better understand fish schools, by synthetically recreating their behavior."

Credit: 
Harvard John A. Paulson School of Engineering and Applied Sciences

Shine on: Avalanching nanoparticles break barriers to imaging cells in real time

image: At left: Experimental PASSI (photon avalanche single-beam super-resolution imaging) images of thulium-doped avalanching nanoparticles separated by 300 nanometers. At right: PASSI simulations of the same material.

Image: 
Berkeley Lab and Columbia University

Since the earliest microscopes, scientists have been on a quest to build instruments with finer and finer resolution to image a cell's proteins - the tiny machines that keep cells, and us, running. But to succeed, they need to overcome the diffraction limit, a fundamental property of light that long prevented optical microscopes from bringing into focus anything smaller than half the wavelength of visible light (around 200 nanometers, or billionths of a meter) - far too big to explore many of the inner workings of a cell.

For over a century, scientists have experimented with different approaches - from intensive calculations to special lasers and microscopes - to resolve cellular features at ever smaller scales. And in 2014, scientists were awarded the Nobel Prize in Chemistry for their work in super-resolution optical microscopy, a groundbreaking technique that bypasses the diffraction limit by harnessing special fluorescent molecules, unusually shaped laser beams, or sophisticated computation to visualize images at the nanoscale.

Now, as reported in a cover article in the journal Nature, a team of researchers co-led by the U.S. Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and Columbia University's Fu Foundation School of Engineering and Applied Science (Columbia Engineering) has developed a new class of crystalline material called avalanching nanoparticles (ANPs) that, when used as a microscopic probe, overcomes the diffraction limit without heavy computation or a super-resolution microscope.

The researchers say that the ANPs will advance high-resolution, real-time bio-imaging of a cell's organelles and proteins, as well as the development of ultrasensitive optical sensors and neuromorphic computing that mimics the neural structure of the human brain, among other applications.

"These nanoparticles make every simple scanning confocal microscope into a real-time superresolution microscope, but what they do isn't exactly superresolution. They actually make the diffraction limit much lower," but without the process-heavy computation of previous techniques, said co-author Bruce Cohen, a staff scientist in Berkeley Lab's Molecular Foundry and Molecular Biophysics & Integrated Bioimaging Division. Scanning confocal microscopy is a technique that produces a magnified image of a specimen, pixel by pixel, by scanning a focused laser across a sample.

A surprise discovery

The photon avalanching nanoparticles described in the current study are about 25 nanometers in diameter. The core contains a nanocrystal doped with the lanthanide metal thulium, which absorbs and emits light. An insulating shell ensures that the part of the nanoparticle that's absorbing and emitting light is far from the surface and doesn't lose its energy to its surroundings, making it more efficient, explained co-author Emory Chan, a staff scientist in Berkeley Lab's Molecular Foundry.

A defining characteristic of photon avalanching is its extreme nonlinearity: each doubling of the intensity of the laser used to excite the material more than doubles the intensity of the emitted light. To achieve photon avalanching, each doubling of the exciting laser intensity must increase the emitted light intensity by 30,000-fold.

But to the researchers' delight, the ANPs described in the current study met each doubling of exciting laser intensity with an increase of emitted light by nearly 80-million-fold. In the world of optical microscopy, that is a dazzling degree of nonlinear emission. And since the study's publication, "we actually have some better ones now," Cohen added.
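
Taking the stated figures at face value, the degree of nonlinearity can be summarised as the exponent s of a power-law response. The back-of-the-envelope calculation below is our illustration, not a formula quoted from the paper.

```latex
% Emission as a power law of excitation intensity, I_em \propto I_exc^s,
% so doubling the excitation multiplies the emission by 2^s.
\[
  \frac{I_{\mathrm{em}}(2\,I_{\mathrm{exc}})}{I_{\mathrm{em}}(I_{\mathrm{exc}})} = 2^{s}
  \quad\Longrightarrow\quad
  s = \log_{2}(\text{fold increase}),
\]
\[
  s_{\text{avalanching threshold}} \approx \log_{2}\!\left(3\times 10^{4}\right) \approx 15,
  \qquad
  s_{\text{these ANPs}} \approx \log_{2}\!\left(8\times 10^{7}\right) \approx 26.
\]
```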

The researchers might not have considered thulium's potential for photon avalanching if it weren't for Chan's study in 2016, which calculated the light-emitting properties of hundreds of combinations of lanthanide dopants when stimulated by 1,064-nanometer near-infrared light. "Surprisingly, thulium-doped nanoparticles were predicted to emit the most light, even though conventional wisdom said that they should be completely dark," noted Chan.

According to the researchers' models, the only way that thulium could be emitting light is through a process called energy looping, which is a chain reaction in which a thulium ion that has absorbed light excites neighboring thulium ions into a state that allows them to better absorb and emit light.

Those excited thulium ions, in turn, make other neighboring thulium ions more likely to absorb light. This process repeats in a positive feedback loop until a large number of thulium ions are absorbing and emitting light.

"It's like placing a microphone close to a speaker - the feedback caused by the speaker amplifying its own signal blows up into an obnoxiously loud sound. In our case, we are amplifying the number of thulium ions that can emit light in a highly nonlinear way," Chan explained. When energy looping is extremely efficient, it is called photon avalanching since a few absorbed photons can cascade into the emission of many photons, he added.

At the time of the 2016 study, Chan and colleagues hoped that they might see photon avalanching experimentally, but the researchers weren't able to produce nanoparticles with sufficient nonlinearity to meet the strict criteria for photon avalanching until the current study.

To produce avalanching nanoparticles, the researchers relied on the Molecular Foundry's nanocrystal-making robot WANDA (Workstation for Automated Nanomaterial Discovery and Analysis) to fabricate many different batches of nanocrystals doped with different amounts of thulium and coated with insulating shells. "One of the ways we were able to achieve such great photon-avalanching performance with our thulium nanoparticles was by coating them with very thick, nanometer-scale shells," said Chan, who co-developed WANDA in 2010.

Growing the shells is an exacting process that can take up to 12 hours, he explained. Automating the process with WANDA allowed the researchers to perform other tasks while ensuring a uniformity of thickness and composition among the shells, and to fine-tune the material's response to light and resolution power.

Harnessing an avalanche at the nanoscale

Scanning confocal microscopy experiments led by co-author P. James Schuck, an associate professor of mechanical engineering at Columbia Engineering who was a senior scientist in Berkeley Lab's Molecular Foundry, showed that nanoparticles doped with moderately high concentrations of thulium exhibited nonlinear responses greater than expected for photon avalanching, making these nanoparticles one of the most nonlinear nanomaterials known to exist.

Changhwan Lee, a graduate student in Schuck's lab, performed a battery of optical measurements and calculations to confirm that the nanoparticles met the strict criteria for photon avalanching. This work is the first time all the criteria for photon avalanching have been met in a single nanometer-sized particle.

The extreme nonlinearity of the avalanching nanoparticles allowed Schuck and Lee to excite and image single nanoparticles spaced closer than 70 nanometers apart. In conventional "linear" light microscopy, many nanoparticles are excited by the laser beam, which has a diameter of greater than 500 nanometers, making the nanoparticles appear as one large spot of light.

The authors' technique - called photon avalanche single-beam super-resolution imaging (PASSI) - takes advantage of the fact that a focused laser beam spot is more intense in its center than on its edges, Chan said. Since the emission of the ANPs steeply increases with laser intensity, only the particles in the 70-nanometer center of the laser beam emit appreciable amounts of light, leading to the exquisite resolution of PASSI.
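
A rough way to see why such a steep response sharpens the image is to treat the focal spot as Gaussian and the emission as the s-th power of the local intensity. The estimate below is an approximation for illustration, not the authors' derivation.

```latex
% For a Gaussian excitation spot of full width at half maximum w, an emission
% signal proportional to I^s has an effective point-spread function of width
\[
  w_{\mathrm{eff}} \approx \frac{w}{\sqrt{s}},
\]
% so with s of order 20--30, a roughly 300 nm confocal spot narrows to about
% 60--70 nm, consistent with the resolution reported here.
```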

The current study, the researchers say, immediately opens new applications in ultrasensitive infrared photon detection and conversion of near-infrared light into higher energies for super-resolution imaging with commercially available scanning confocal optical microscopes, and improved resolution in state-of-the-art super-resolution optical microscopes.

"That's amazing. Usually in optical science, you have to use really intense light to get a large nonlinear effect - and that's no good for bioimaging because you're cooking your cells with that power of light," said Schuck, who has continued his collaborative research at the Molecular Foundry as a user. "But with these thulium-doped nanoparticles, we've shown that they don't require that much input intensity to get a resolution that's less than 70 nanometers. Normally, with a scanning confocal microscope, you'd get 300 nanometers. That's a pretty good improvement, and we'll take it, especially since you're getting super-resolution images essentially for free."

Now that they have successfully lowered the diffraction limit with their photon avalanching nanoparticles, the researchers would like to experiment with new formulations of the material to image living systems, or detect changes in temperature across a cell's organelle and protein complex.

"Observing such highly nonlinear phenomena in nanoparticles is exciting because nonlinear processes are thought to pattern structures like stripes in animals and to produce periodic, clocklike behavior," Chan noted. "Nanoscale nonlinear processes could be used to make tiny analog-to-digital converters, which may be useful for light-based computer chips, or they could be used to concentrate dim, uniform light into concentrated pulses."

"These are such unusual materials, and they're brand new. We hope that people will want to try them with different microscopes and different samples, because the great thing about basic science discoveries is that you can take an unexpected result and see your colleagues run with it in exciting new directions," Cohen said.

Credit: 
DOE/Lawrence Berkeley National Laboratory

Limits of atomic nuclei predicted

Atomic nuclei are held together by the strong interaction between neutrons and protons. About ten percent of all known nuclei are stable. Starting from these stable isotopes, nuclei become increasingly unstable as neutrons are added or removed, until neutrons can no longer bind to the nucleus and "drip" out. This limit of existence, the so-called neutron "dripline", has so far been discovered experimentally only for light elements up to neon. Understanding the neutron dripline and the structure of neutron-rich nuclei also plays a key role in the research program for the future accelerator facility FAIR at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt.

In a new study, "Ab Initio Limits of Nuclei," published in the journal Physical Review Letters as an Editors' Suggestion with an accompanying synopsis in APS Physics, Professor Achim Schwenk of TU Darmstadt and a Max Planck Fellow at the MPI for Nuclear Physics in Heidelberg, together with scientists from the University of Washington, TRIUMF and the University of Mainz, succeeded in using innovative theoretical methods to calculate the limits of atomic nuclei up to medium-mass nuclei. The results are a treasure trove of information about possible new isotopes and provide a roadmap for nuclear physicists seeking to verify them.

The new study is not the first attempt to theoretically explore the extremely neutron-rich region of the nuclear landscape. Previous studies used density functional theory to predict bound isotopes between helium and the heavy elements. Professor Schwenk and colleagues, on the other hand, explored the chart of nuclides for the first time based on ab initio nuclear theory. Starting from microscopic two- and three-body interactions, they solved the many-particle Schrödinger equation to simulate the properties of atomic nuclei from helium to iron. They accomplished this using a new ab initio many-body method - the In-Medium Similarity Renormalization Group - combined with an extension that can handle partially filled orbitals, allowing them to reliably describe all of these nuclei.

Starting from two- and three-nucleon interactions based on the strong interaction, quantum chromodynamics, the researchers calculated the ground-state energies of nearly 700 isotopes. The results are consistent with previous measurements and serve as the basis for determining the location of the neutron and proton driplines. Comparisons with experimental mass measurements and a statistical analysis enabled the determination of theoretical uncertainties for their predictions, such as for the separation energies of nuclei and thus also for the probability that an isotope is bound or does not exist.
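
The dripline location follows from these ground-state energies through the neutron separation energies; in standard notation (not quoted from the paper itself), an isotope exists only while these quantities remain positive.

```latex
% One- and two-neutron separation energies from the ground-state energies E(Z, N):
\[
  S_{n}(Z, N) = E(Z, N-1) - E(Z, N), \qquad
  S_{2n}(Z, N) = E(Z, N-2) - E(Z, N).
\]
% A nucleus is neutron-bound while both quantities are positive; the neutron
% dripline for a given element Z is the largest N for which this still holds.
```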

The new study is considered a milestone in understanding how the chart of nuclides and the structure of nuclei emerges from the strong interaction. This is a key question of the DFG-funded Collaborative Research Center 1245 "Nuclei: From Fundamental Interactions to Structure and Stars" at the TU Darmstadt, within which this research was conducted. Next, the scientists want to extend their calculations to heavier elements in order to advance the input for the simulation of the synthesis of heavy elements. This proceeds in neutron-rich environments in the direction of the neutron dripline and occurs in nature when neutron stars merge or in extreme supernovae.

Credit: 
Technische Universitat Darmstadt

Seawater as an electrical cable!? Wireless power transfers in the ocean

image: Underwater drone (top left), power supply station (bottom left), drone parked on the power supply station installed on the ocean floor for battery charging (right)

Image: 
COPYRIGHT (C) TOYOHASHI UNIVERSITY OF TECHNOLOGY. ALL RIGHTS RESERVED.

Overview:

Associate professor Masaya Tamura, Kousuke Murai (who has completed the first term of his master's program), and their research team from the Department of Electrical and Electronic Information Engineering at Toyohashi University of Technology have successfully transferred power and data wirelessly through seawater using a power transmitter/receiver with four layers of ultra-thin, flat electrodes. In wireless power transfer, seawater behaves as a dielectric with extremely high loss, making transfer through capacitive coupling difficult to achieve. Until now, it had been thought that wireless power transfer through seawater could only be achieved by magnetic coupling. Here, focusing on the high-frequency properties of seawater, the team devised a third method based on conductive coupling and developed a power transmitter/receiver that achieves highly efficient power transfer.

Details:

The number of people involved in the Japanese fishing industry continues to decrease each year as the average age increases. One reason for this is the large amount of high-intensity manual labor that still relies on human hands. To improve this situation, automation is advancing through the use of robots that clean aquaculture nets and perform similar tasks. In the future, robots (underwater drones) are expected to be stationed in seawater so that water quality and environmental management, checks on fish growth, and related tasks can all be handled robotically. However, because these drones are battery-powered, they must repeatedly be pulled out of the water, charged, and returned to the water, and the data they collect underwater must be retrieved at the same time. The key to solving this problem is technology for wirelessly transferring power and data in seawater through a power supply station. In particular, because these drones are lightweight, and because added weight and volume make controlling buoyancy and orientation difficult, the technology must itself be lightweight and space-conserving. Associate professor Masaya Tamura and his team of researchers have developed a new type of power transmitter/receiver that achieves highly efficient wireless power transfer even in seawater.

The efficiency of a wireless power transfer depends on the kQ product: the product of the coupling coefficient k between the power transmitter and receiver and the Q-factor of the transmitter/receiver, which accounts for losses including the influence of the surrounding environment. Efficiency improves as k approaches 1 and as the Q-factor increases. In a highly conductive dielectric like seawater, however, high-frequency current flows through the medium itself, making it difficult to treat k and the Q-factor in isolation. Because the principle that efficiency improves as the kQ product increases still holds, the team identified the key parameters for improving efficiency from an equivalent circuit that accounts for the conductivity of seawater, viewed through the kQ product. A design theory was then established in which the kQ product reaches its maximum value, and the power transmitter/receiver was designed accordingly. Based on this, an RF-to-RF power transfer efficiency of 94.5% at a transfer distance of 2 cm, and of at least 85% at a transfer distance of 15 cm, was achieved across a wide band. A power transfer efficiency of at least 90% can even be maintained at a transfer distance of 2 cm with 1 kW of electric power. Moreover, because high efficiency is maintained across a wide band, high-speed data transfer is also possible. The team successfully used the transmitter/receiver they developed to charge a capacitor and used that power to drive a camera module that transferred video in real time through the same transmitter/receiver. The transfer speed in this demonstration was about 90 Mbps, but higher speeds are possible. Experiments transferring power and data to a small underwater drone, with the expectation that the drone would park on the power supply station, were also successful. The total weight of the receiver and power circuit mounted on the drone was very light, at around 270 g.
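
For context, the role of the kQ product can be illustrated with the textbook two-port expression for the maximum achievable transfer efficiency under optimal load matching. This generic formula is used here only as an illustration; it is not taken from the team's seawater-specific equivalent circuit.

```python
import math

def max_transfer_efficiency(kq: float) -> float:
    """Maximum RF-to-RF efficiency for a coupled transmitter/receiver pair
    with figure of merit kQ, assuming an optimally matched load
    (standard two-port result; the paper's seawater circuit model may differ)."""
    return kq**2 / (1.0 + math.sqrt(1.0 + kq**2))**2

for kq in (1, 5, 10, 30, 100):
    print(f"kQ = {kq:>3}: eta_max = {max_transfer_efficiency(kq):.1%}")
# Efficiency climbs toward 100% as kQ grows, which is why the design aims to
# maximize the kQ product even in a lossy medium like seawater.
```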

Development Background:

The leader of the research team, associate professor Masaya Tamura, stated: "In ion-rich seawater, it was expected that high-frequency electric current would flow with minimal loss. When researching wireless power transfers in freshwater, and when analyzing the way power transfer efficiency changed with the salinity of the water, we encountered a phenomenon where efficiency would decrease by a few percent as the salinity increased, but from a certain salinity, the efficiency would recover and be maintained at about 20%. I firmly believed that this was evidence substantiating my prediction, and we developed an operational theory from an equivalent circuit for a power transmitter/receiver to investigate and clarify these results in detail. We then designed the structure of the power transmitter/receiver based on that theory, and after creating the prototype and performing the measurements, we obtained results for a power transfer in seawater with an efficiency of at least 90%. To prevent chemical changes to the surface of the electrodes that occur in seawater when large amounts of power are supplied, an insulated coating was applied. We were surprised to achieve efficiency of at least 90% even under these conditions."

Future Outlook:

The research team believes these results will allow drones to transfer data and recharge in seawater without significant design changes, and that they will contribute to rapid improvements in operational efficiency. The power transmitter/receiver they developed is very simple and lightweight, so the added weight on an underwater drone can be minimized. Their ultimate goal is to contribute to the development of underwater drone systems that can be managed entirely from land. The team plans to report these results in future publications, at academic conferences, and elsewhere.

Credit: 
Toyohashi University of Technology (TUT)

How will we achieve carbon-neutral flight in future?

For reasons of climate protection, it is both politically agreed and necessary that our entire economy become climate-neutral in the coming decades - and that applies to air travel, too. This is a technically feasible goal, and there are numerous ways to achieve it. ETH Professor Marco Mazzotti and his team have now compared the options that appear to be the easiest to implement in the short and medium term and evaluated them according to factors such as cost-effectiveness.

The ETH researchers conclude that the most favourable option is to continue powering aircraft with fossil fuels in future, but then remove the associated CO2 emissions from the atmosphere using CO2 capture plants and store that CO2 permanently underground (carbon capture and storage, CCS). "The necessary technology already exists, and underground storage facilities have been operating for years in the North Sea and elsewhere," says Viola Becattini, a postdoc in Mazzotti's group and the study's first author.

"The approach may become a cost-competitive mitigation solution for air travel in case, for example, a carbon tax or a cap-and-trade system were imposed on emissions from fossil jet fuels, or if governments were to provide financial incentives for deploying CCS technologies and achieving climate goals," says ETH professor Mazzotti.

Directly or indirectly from the air

Basically, there are two ways to capture CO2: either directly from the air or indirectly at a site where organic material is burned, for example in a waste incineration plant. "Roughly speaking, half of the carbon in the waste burned in municipal incinerators comes from fossil sources, such as plastic that has been produced from petroleum. The other half is organic material, such as wood or wood products like paper and cardboard," Mazzotti says.

From a climate action perspective, capturing and storing the share of carbon that has fossil origin is a zero-sum game: it simply sends carbon that originated underground back to where it came from. As to the share of carbon from organic sources, this was originally absorbed from the air as CO2 by plants, so capturing and storing this carbon is an indirect way to remove CO2 from the air. This means CCS is a suitable method for putting carbon from fossil aviation fuels back underground - and effectively making air travel carbon-neutral.
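
As a back-of-the-envelope illustration of this accounting (all quantities below are hypothetical), only the biogenic share of the captured CO2 counts as a net removal that can balance fossil aviation emissions:

```python
# Illustrative carbon accounting for CCS at a waste incinerator (hypothetical numbers).
captured_co2_t = 1000.0      # tonnes of CO2 captured from the flue gas
biogenic_fraction = 0.5      # roughly half the carbon is of organic (biogenic) origin

net_removal_t = captured_co2_t * biogenic_fraction            # CO2 effectively removed from the air
fossil_returned_t = captured_co2_t * (1 - biogenic_fraction)  # fossil carbon simply sent back underground

aviation_emissions_t = 500.0  # fossil CO2 emitted by flights to be compensated
offset_share = min(1.0, net_removal_t / aviation_emissions_t)
print(f"Net atmospheric removal: {net_removal_t:.0f} t CO2 "
      f"-> offsets {offset_share:.0%} of {aviation_emissions_t:.0f} t of aviation emissions")
```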

In their study, the ETH scientists were able to show that indirect carbon capture from waste incineration gases costs significantly less than direct carbon capture from the air, which is also already technically feasible.

Synthetic fuels more expensive

As a further option, the scientists investigated producing synthetic aviation fuel from CO2 captured directly or indirectly from the air (carbon capture and utilisation, CCU). Because the chemical synthesis of fuel from CO2 is energy-intensive and therefore expensive, this approach is in any case less economical than using fossil fuel and CCS. Regardless of whether the CO2 is captured directly or indirectly, CCU is about three times more expensive than CCS.

ETH Professor Mazzotti also points out one of CCU's pitfalls: depending on the energy source, this approach may even be counterproductive from a climate action perspective, namely if the electricity used to produce the fuel is from fossil fuel-fired power plants. "With Switzerland's current electricity mix or with France's, which has a high proportion of nuclear power, energy-intensive CCU is already more harmful to the climate than the status quo with fossil aviation fuels - and even more so with the average electricity mix in the EU, which has a higher proportion of fossil fuel-fired power plants," Mazzotti says. The only situation in which CCU would make sense from a climate action perspective is if virtually all the electricity used comes from carbon-neutral sources.

More profitable over time

"Despite this limitation and the fundamentally high cost of CCU, there may be regions of the world where it makes sense. For example, where a lot of renewable electricity is generated and there are no suitable CO2 storage sites," Becattini says.

The ETH researchers calculated the costs of the various options for carbon-neutral aviation not only in the present day, but also for the period out to 2050. They expect CCS and CCU technologies to become less expensive both as technology advances and through economies of scale. The price of CO2 emissions levied as carbon taxes is likely to rise. Because of these two developments, the researchers expect CCS and CCU to become more profitable over time.

Infrastructure required

The researchers emphasise that there are other ways to make air travel carbon-neutral. For instance, there is much research underway into aircraft that run on either electricity or hydrogen. Mazzotti says that while these efforts should be taken seriously, there are drawbacks with both approaches. For one thing, electrically powered aircraft are likely to be unsuitable for long-haul flights because of how much their batteries will weigh. And before hydrogen can be used as a fuel, both the aircraft and their supply infrastructure will have to be completely developed and built from scratch. Because these approaches are currently still in the development stage, with many questions still open, the ETH scientists didn't include them in their analysis and instead focused on drop-in liquid fuels.

However, the researchers emphasise that CCS, too, requires infrastructure. The places where CO2 can be captured efficiently and where it can be stored may be far apart, making transport infrastructure for CO2 necessary. Science, industry and politics will have to work hard in the coming years to plan and build this infrastructure - not only for CO2 from aviation, but also for emissions from other carbon-intensive sectors such as chemicals or cement.

Credit: 
ETH Zurich

Lipid biomarkers in urine can determine the type of asthma

image: Sven-Erik Dahlén, professor at the Institute of Environmental Medicine, Karolinska Institutet.

Image: 
Mattias Ahlm

In a new study, researchers at Karolinska Institutet in Sweden have used a urine test to identify and verify a patient's type of asthma. The study, which has been published in the American Journal of Respiratory and Critical Care Medicine, lays the foundation for a more personalized diagnosis and may result in improved treatment of severe asthma in the future.

About 10 percent of the Swedish population suffers from asthma, a disease that has become increasingly widespread over the past 50 years, with annual global mortality of around 400,000 according to the World Health Organization. Asthma is characterized by chronic inflammation of the airways, which can result in symptoms including coughing, mucus formation and shortness of breath.

There are many types of asthma, and symptoms can vary between individuals, from mild to severe. Currently, in order to make an asthma diagnosis, a wide-ranging investigation is conducted that can consist of multiple elements including patient interviews, lung function tests, blood tests, allergy investigations and x-rays.

"There are no simple methods to determine what type of asthma an individual has, knowledge that is particularly important in order to better treat patients suffering from the more severe types of the disease," says Craig Wheelock, associate professor at the Department of Medical Biochemistry and Biophysics, Karolinska Institutet, and the last author of the study.

In this new study, research groups at Karolinska Institutet have made an important discovery, which can offer a simple but clear contribution to a correct diagnosis.

Using a mass spectrometry-based methodology developed in the Wheelock laboratory, they were able to measure urinary metabolite levels of certain prostaglandins and leukotrienes -- eicosanoid signalling molecules that are known mediators of asthmatic airway inflammation.

"We discovered particularly high levels of the metabolites of the mast cell mediator prostaglandin D2 and the eosinophil product leukotriene C4 in asthma patients with what is referred to as Type 2 inflammation," says Johan Kolmert, postdoctoral researcher at the Institute of Environmental Medicine, Karolinska Institutet, and first author of the study. "Using our methodology, we were able to measure these metabolites with high accuracy and link their levels to the severity and type of asthma."

The study is based on data from the U-BIOPRED study (Unbiased BIOmarkers in PREDiction of respiratory disease outcomes), which was designed to investigate severe asthma. The study included 400 participants with severe asthma, which often requires treatment with corticosteroid tablets, nearly 100 individuals with milder forms of asthma and 100 healthy control participants.

In addition to the increased eicosanoid metabolite levels associated with asthma type and severity, the study shows that measurement using a urine test provides improved accuracy relative to other measurement methods, for example certain kinds of blood tests.

"Another discovery was that levels of these metabolites were still high in patients who were seriously ill, despite the fact that they were being treated with corticosteroid tablets. This highlights the need for alternative treatments for this group of patients," explains Johan Kolmert.

The researchers were also able to replicate the discovery in urine samples from a study of schoolchildren with asthma conducted by the paediatricians Gunilla Hedlin, Jon Konradsen and Björn Nordlund at Karolinska Institutet.

"We could see that those children who had asthma with Type 2 inflammation were displaying the same profiles of metabolites in the urine as adults," says Sven-Erik Dahlén, professor at the Institute of Environmental Medicine, Karolinska Institutet, who led the work together with Craig Wheelock.

According to the researchers, this study of severe asthma may be the largest evaluation of eicosanoid urinary metabolites conducted worldwide, and may be an important step towards future biomarker-guided precision medicine.

Treatment with steroid inhalers is often sufficient for patients with mild asthma, but for those with severe asthma it may be necessary to supplement with corticosteroid tablets. Corticosteroids are associated with several side-effects, such as high blood pressure, diabetes and harm to the eyes and bones.

"To replace corticosteroid tablets, in recent times several biological medicines have been introduced to treat patients with Type 2 inflammation characterised by increased activation of mast cells and eosinophils," Sven-Erik Dahlén says. "However, these treatments are very expensive, so it is an important discovery that urine samples may be used to identify precisely those patients who will benefit from the Type 2 biologics."

Credit: 
Karolinska Institutet

High-sensitivity nanophotonic sensors with passive trapping of analyte molecules in hot-spots

image: Top: schematic of the optical sensor design with trapped molecules. Bottom: schematic showing the process of concentrating and trapping molecules in a solution.

Image: 
by Xianglong Miao, Lingyue Yan, Yun Wu and Peter Q. Liu

Optical sensors can quantitatively analyze chemical and biological samples by measuring and processing the optical signals produced by the samples. Optical sensors based on infrared absorption spectroscopy can achieve high sensitivity and selectivity in real time, and therefore play a crucial role in a variety of application areas such as environmental sensing, medical diagnostics, industrial process control and homeland security.

In a new paper published in Light: Science & Applications, a team of scientists led by Dr. Peter Q. Liu of the Department of Electrical Engineering at the State University of New York at Buffalo has demonstrated a new type of high-performance optical sensor that uses the surface tension of liquid to concentrate and trap analyte molecules at the most sensitive locations of the device structure, significantly enhancing sensitivity. Based on a metal-insulator-metal sandwich structure that also features nanometer-scale trenches, the sensor passively retains and concentrates an analyte solution in these tiny trenches as the solution gradually evaporates on the sensor surface, eventually trapping the precipitated analyte molecules inside the trenches. Because the light intensity is also highly enhanced in these trenches by design, the interaction between light and the trapped analyte molecules is drastically enhanced, leading to a readily detectable optical signal (i.e. changes in the light absorption spectrum) even at picogram levels of analyte mass.

In general, different molecular species absorb infrared light at different frequencies, and therefore one can identify and quantify the detected molecules by analyzing the observed absorption lines in the spectrum. Although such molecular absorption is intrinsically weak, optical sensors can drastically enhance it by employing suitable nanostructures on the device surface to confine light into very small volumes (so-called hot-spots), which leads to very large light intensity. Each molecule in the hot-spots can then absorb much more light in a given time interval than a molecule outside the hot-spots, which makes it possible to measure very low quantities of chemical or biological substances with high reliability, provided enough molecules are located in the hot-spots. This general approach is known as surface enhanced infrared absorption (SEIRA).

However, a key issue for most SEIRA optical sensors is that the hot-spots only occupy a tiny portion of the entire device surface area. On the other hand, the analyte molecules are usually randomly distributed on the device surface, and hence only a small fraction of all analyte molecules are located in the hot-spots and contribute to the enhanced light absorption. "The SEIRA signal would be much larger if most of the analyte molecules can be delivered into the hot-spots of an optical sensor. This is the key motivation of our optical sensor design." Dr. Liu said.
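
A back-of-the-envelope comparison makes the point. Every number below is a hypothetical placeholder rather than a measured value, but it shows why moving molecules into hot-spots multiplies the SEIRA signal.

```python
# Rough SEIRA signal estimate: only molecules in hot-spots see the enhanced field.
# All numbers are hypothetical placeholders used purely for illustration.

n_molecules = 1_000_000        # analyte molecules deposited on the sensor surface
hotspot_area_fraction = 0.01   # hot-spots cover ~1% of the surface
enhancement = 1000.0           # per-molecule absorption enhancement inside a hot-spot

def relative_signal(fraction_in_hotspots: float) -> float:
    """Total absorption signal, in units of one unenhanced molecule."""
    in_hot = n_molecules * fraction_in_hotspots
    outside = n_molecules * (1.0 - fraction_in_hotspots)
    return in_hot * enhancement + outside * 1.0

random_deposition = relative_signal(hotspot_area_fraction)  # molecules land anywhere
passive_trapping = relative_signal(0.9)                     # most molecules end up in the trenches
print(f"Signal gain from trapping: {passive_trapping / random_deposition:.0f}x")
```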

"There are techniques, such as optical tweezers and dielectrophoresis, which can manipulate small particles or even molecules and deliver them to target locations such as the hot-spots. However, these techniques requires significant amount of energy input and are also complicated to utilize." Dr. Liu added, "What we set out to explore is a device structure that can trap analyte molecules precipitated out of a solution into the hot-spots in a passive (requiring no energy input) and effective way, and we realized that we can make use of the surface tension of liquid to achieve this goal."

In addition to the demonstration of high-sensitivity biomolecule sensing, the team also conducted another set of experiments, which showed that the same type of device structure can also effectively trap liposome particles (~100 nm characteristic dimension) in the tiny trenches. This means such optical sensors can be optimized for detecting and analyzing nano-objects such as viruses or exosomes, which have sizes similar to the liposomes used in the experiments.

The scientists believe that the demonstrated SEIRA optical sensor design strategy can be applied to other types of optical sensors as well. Besides sensing applications, such device structures can also be used for manipulating nanoscale objects including exosomes, viruses and quantum dots.

Credit: 
Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS

Northern lakes at risk of losing ice cover permanently, impacting drinking water

image: A northern lake

Image: 
York University Postdoctoral Fellow Alessandro Filazzola

TORONTO, Jan. 13, 2021 - Close to 5,700 lakes in the Northern Hemisphere may permanently lose ice cover this century, 179 of them in the next decade, at current greenhouse gas emissions, despite a possible polar vortex this year, researchers at York University have found.

Those lakes include large bays in some of the deepest of the Great Lakes, such as Lake Superior and Lake Michigan, which could permanently become ice free by 2055 if nothing is done to curb greenhouse gas emissions or by 2085 with moderate changes.

Many of these lakes that are predicted to stop freezing over are near large human populations and are an important source of drinking water. A loss of ice could affect the quantity and quality of the water.

"We need ice on lakes to curtail and minimize evaporation rates in the winter," says lead researcher Sapna Sharma, an associate professor in the Faculty of Science. "Without ice cover, evaporation rates would increase, and water levels could decline. We would lose freshwater, which we need for drinking and everyday activities. Ice cover is extremely important both ecologically and socio-economically."

The researchers, including Postdoctoral Fellows Kevin Blagrave and Alessandro Filazzola, looked at 51,000 lakes in the Northern Hemisphere to forecast whether those lakes would become ice-free using annual winter temperature projections from 2020 to 2098 with 12 climate change scenarios.

Watch video: https://youtu.be/Y7JSfTJzBFQ

"With increased greenhouse gas emissions, we expect greater increases in winter air temperatures, which are expected to increase much more than summer temperatures in the Northern Hemisphere," says Filazzola. "It's this warming of a couple of degrees, as result of carbon emissions, that will cause the loss of lake ice into the future."

The most at-risk lakes are those in southern and coastal regions of the Northern Hemisphere, some of which are amongst the largest lakes in the world.

"It is quite dramatic for some of these lakes, that froze often, but within a few decades they stop freezing indefinitely," says Filazzola. "It's pretty shocking to imagine a lake that would normally freeze no longer doing so."

The researchers found that when the air temperature was above -0.9 C, most lakes no longer froze. For shallow lakes, the air temperature could be zero or a bit above. Larger and deeper lakes need colder temperatures to freeze than shallow lakes - some as cold as -4.8 C.
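
Read as a rule of thumb, the reported thresholds could be sketched as follows. This toy classifier only restates the temperatures quoted above; it is not the study's statistical model.

```python
def likely_to_freeze(mean_winter_air_temp_c: float, deep_lake: bool) -> bool:
    """Crude rule of thumb from the thresholds quoted above (not the study's model):
    most lakes stop freezing when winter air temperatures exceed about -0.9 C,
    while large, deep lakes may need temperatures as low as about -4.8 C."""
    threshold = -4.8 if deep_lake else -0.9
    return mean_winter_air_temp_c <= threshold

print(likely_to_freeze(-2.0, deep_lake=False))  # True: cold enough for a shallow lake
print(likely_to_freeze(-2.0, deep_lake=True))   # False: not cold enough for a deep lake
```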

"Ice cover is also important for maintaining the quality of our freshwater," says Sharma. "In years where there isn't ice cover or when the ice melts earlier, there have been observations that water temperatures are warmer in the summer, there are increased rates of primary production, plant growth, as well as an increased presence of algal blooms, some of which may be toxic."

To preserve lake ice cover, more aggressive measures to mitigate greenhouse gas emissions are needed now, says Sharma. "I was surprised at how quickly we may see this transition to permanent loss of ice cover in lakes that had previously frozen near consistently for centuries."

Credit: 
York University

Medication shows promise for weight loss in patients with obesity, diabetes

SILVER SPRING, Md.--A new study confirms that treatment with Bimagrumab, an antibody that blocks activin type II receptors and stimulates skeletal muscle growth, is safe and effective for treating excess adiposity and metabolic disturbances of adult patients with obesity and type 2 diabetes.

"These exciting results suggest that there may be a novel mechanism for achieving weight loss with a profound loss of body fat and an increase in lean mass, along with other metabolic benefits," said Steve Heymsfield, MD, FTOS, past president of The Obesity Society and corresponding author of the study. Heymsfield is professor and director of the Metabolism and Body Composition Laboratory at the Pennington Biomedical Research Center in Baton Rouge, La.

A total of 75 patients with type 2 diabetes, body mass index between 28 and 40 and glycated hemoglobin A1c levels between 6.5 percent and 10 percent were selected for the phase 2 randomized clinical trial. Patients were injected with either Bimagrumab or a placebo (a dextrose solution) every 4 weeks for 48 weeks. Both groups received diet and exercise counseling. The research took place at nine sites in the United States and the United Kingdom from February 2017 to May 2019.

At the end of the 48-week study, researchers found a nearly 21 percent decrease in body fat in the Bimagrumab group compared to 0.5 percent in the placebo group. The results also revealed the Bimagrumab group gained 3.6 percent of lean mass compared with a loss of 0.8 percent in the placebo group. The combined loss in total body fat and gain in lean mass led to a net 6.5 percent reduction in body weight in patients receiving Bimagrumab compared with 0.8 percent weight loss in their counterparts receiving the placebo.
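For readers wondering how a roughly 21 percent drop in fat mass and a 3.6 percent gain in lean mass can net out to a much smaller change in total body weight, the back-of-the-envelope calculation below illustrates the arithmetic; the baseline body composition is assumed for illustration and is not taken from the trial.

```python
# Back-of-the-envelope illustration of how fat loss and lean-mass gain combine
# into a net body-weight change. The 21% fat decrease and 3.6% lean-mass
# increase are the reported group-level changes; the baseline composition
# below is hypothetical.

baseline_weight_kg = 100.0
fat_fraction = 0.40               # assumed 40% body fat at baseline
fat_kg = baseline_weight_kg * fat_fraction
lean_kg = baseline_weight_kg - fat_kg

fat_change_kg = -0.21 * fat_kg    # ~21% decrease in body fat
lean_change_kg = 0.036 * lean_kg  # ~3.6% increase in lean mass

new_weight_kg = baseline_weight_kg + fat_change_kg + lean_change_kg
pct_change = 100 * (new_weight_kg - baseline_weight_kg) / baseline_weight_kg
print(f"Net weight change: {pct_change:.1f}%")  # about -6.2% for this assumed baseline
```

With this assumed starting composition the net change comes out close to the reported figure, showing why a large fat loss and a modest lean-mass gain yield a mid-single-digit reduction in total weight.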

The sample size of 75 participants was a limitation of the study. There was also a gender imbalance across the groups with more women randomized to Bimagrumab and more men to the placebo.

Partial results of this study were presented during a research forum titled "Emerging Pharmacological Anti-obesity Therapies" at ObesityWeek® 2019 in Las Vegas, Nev.

Credit: 
The Obesity Society

Flashing plastic ash completes recycling

Image: Rice University chemists turned otherwise-worthless pyrolyzed ash from plastic recycling into graphene through a Joule heating process. The graphene could be used to strengthen concrete and toughen plastics used in medicine, energy and packaging applications. (Credit: Tour Group/Rice University)

HOUSTON - (Jan. 13, 2021) - Pyrolyzed plastic ash is worthless, but perhaps not for long.

Rice University scientists have turned their attention to Joule heating of the material, a byproduct of plastic recycling processes. A strong jolt of energy flashes it into graphene.

The technique by the lab of Rice chemist James Tour produces turbostratic graphene flakes that can be directly added to other substances like films of polyvinyl alcohol (PVA) that better resist water in packaging and cement paste and concrete, dramatically increasing their compressive strength.

The research appears in the journal Carbon.

As in the flash graphene process the lab introduced in 2019, the pyrolyzed ash turns into turbostratic graphene, which has weaker attractive interactions between the flakes, making it easier to mix them into solutions.

Last October, the Tour lab reported on a process to convert waste plastic into graphene. The new process is even more specific, turning plastic that is not recovered by recycling into a useful product.

"This work enhances the circular economy for plastics," Tour said. "So much plastic waste is subject to pyrolysis in an effort to convert it back to monomers and oils. The monomers are used in repolymerization to make new plastics, and the oils are used in a variety of other applications. But there is always a remaining 10% to 20% ash that's valueless and is generally sent to landfills.

"Now we can convert that ash into flash graphene that can be used to enhance the strength of other plastics and construction materials," he said.

Pyrolysis involves heating a material to break it down without burning it. The products of pyrolyzed, recycled plastic include energy-rich gases, fuel oils, waxes, naphtha and virgin monomers from which new plastic can be produced.

But the rest -- an estimated 50,000 metric tons in the United States per year -- is discarded.

"Recyclers do not turn large profits due to cheap oil prices, so only about 15% of all plastic gets recycled," said Rice graduate student Kevin Wyss, lead author of the study. "I wanted to combat both of these problems."

The researchers ran a pair of experiments to test the flashed ash, first mixing the resulting graphene with PVA, a biocompatible polymer being investigated for medical applications, fuel cell polymer electrolyte membranes and environmentally friendly packaging. Its adoption has been held back by the polymer's poor mechanical properties and vulnerability to water.

Adding as little as 0.1% of graphene increases the amount of strain the PVA composite can handle before failure by up to 30%, they reported. It also significantly improves the material's resistance to water permeability.

In the second experiment, they observed significant increases in compressive strength by adding graphene from ash to Portland cement and concrete. Stronger concrete means less concrete needs to be used in structures and roads. That curtails energy use and cuts pollutants from its manufacture.

Credit: 
Rice University

Wetland methane cycling increased during ancient global warming event

Wetlands are the dominant natural source of atmospheric methane, a potent greenhouse gas which is second only to carbon dioxide in its importance to climate change. Anthropogenic climate change is expected to enhance methane emissions from wetlands, resulting in further warming. However, wetland methane feedbacks were not fully assessed in the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report, posing a challenge to meeting the global greenhouse gas mitigation goals set under the Paris Agreement.

To understand how wetland methane cycling may evolve and drive climate feedbacks in the future, scientists are increasingly looking to Earth's past.

"Ice core records indicate that atmospheric methane is very sensitive to climate, but we cannot measure atmospheric methane concentrations beyond them, prior to about 1 million years ago," said Dr Gordon Inglis, lead author and Royal Society Dorothy Hodgkin Fellow at the University of Southampton.

"Instead, we must rely on indirect 'proxies' preserved within the sedimentary record. Proxies are surrogates for climate variables that cannot be measured directly, including geochemical data stored in fossils, minerals or organic compounds."

The study, which was published in Geology, is the first to directly resolve the relationship between temperature and wetland methane cycling during the Paleocene-Eocene Thermal Maximum (PETM), an ancient warming event that could offer a glimpse into the future.

The authors used a geochemical tool developed at the University of Bristol to analyse organic compounds made by microbes living in ancient soils and peats. During the PETM, they found the ratio of two carbon isotopes changed in these compounds - a change that was likely due to an increased amount of methane in the microbes' diet.
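Shifts in the ratio of the two stable carbon isotopes are conventionally reported in delta notation relative to a reference standard. The sketch below shows that standard calculation only to illustrate how such shifts are quantified, not the study's specific workflow; the sample ratio is hypothetical and the VPDB reference ratio is approximate.

```python
# Sketch of the standard delta notation used to report carbon isotope ratios.
# The sample ratio below is made up; the VPDB reference ratio is approximate.

R_VPDB = 0.011180  # approximate 13C/12C ratio of the VPDB reference standard

def delta13c_permil(r_sample: float, r_standard: float = R_VPDB) -> float:
    """delta 13C in per mil: (R_sample / R_standard - 1) * 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A more negative delta 13C in microbial lipids points to more
# methane-derived (13C-depleted) carbon in the microbes' diet.
print(delta13c_permil(0.01095))  # roughly -21 per mil for this made-up ratio
```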

"We show that the PETM was associated with an increase in wetland methane cycling; if some of this methane escaped into the atmosphere, it would have led to additional planetary warming. Crucially, this could foreshadow changes that the methane cycle will experience in the future due to anthropogenic emissions," said Dr Gordon Inglis.

"Our colleagues have previously shown the inclusion of methane emissions in climate model simulations is critical for interpreting past warmth. However, until recently, there have been no tools to test these predictions. This study confirms that methane cycling increased during the PETM, and perhaps during other warming events in Earth history," said Professor Rich Pancost, Head of the School of Earth Sciences at the University of Bristol.

Intriguingly, proxies for temperature and methane cycling are only coupled at the onset of this ancient warming event, with the methane proxies rapidly returning to pre-event values even though temperatures remained high for the duration of the PETM. This suggests that the onset of rapid global warming is particularly disruptive to wetland methane cycling, a finding that is especially concerning given the rapid warming we are experiencing now.

Credit: 
University of Bristol

Spilling the beans on coffee's true identity

People worldwide want their coffee to be both satisfying and reasonably priced. To meet these standards, roasters typically use a blend of two types of beans, arabica and robusta. But some use more of the cheaper robusta than they acknowledge, because the bean composition is difficult to determine after roasting. Now, researchers reporting in ACS' Journal of Agricultural and Food Chemistry have developed a new way to assess exactly what's in that cup of joe.

Coffee blends can have good quality and flavor. However, arabica beans are more desirable than other types, resulting in a higher market value for blends containing a higher proportion of this variety. In some cases, producers dilute their blends with the less expensive robusta beans, yet that is hard for consumers to discern. Recently, methods involving chromatography or spectroscopy were developed for coffee authentication, but most of these are labor- and time-intensive, or use chloroform for the extraction, which limits the types of compounds that can be detected. In some studies, researchers used nuclear magnetic resonance (NMR) spectroscopy to monitor the amount of 16-O-methylcafestol (16-OMC) in coffee, but its concentrations vary depending on geographic location and cultivar. So, Fabrice Berrué and colleagues wanted to build on their previous work with NMR to assess the chemical make-up of each coffee bean variety and confirm the blends of real samples.

The researchers extracted compounds from a test set of pure coffees and known blends with methanol and identified the compounds with NMR. The team found 12 compounds with measurable concentrations, and two had significantly different amounts between the coffee varieties. Elevated concentrations of 16-OMC were unique to robusta, while high concentrations of kahweol -- a compound previously found in coffee beans by other researchers -- were distinct in arabica. There was a direct, reproducible relationship between 16-OMC and kahweol concentrations in blends of the two varieties. The team then measured 16-OMC and kahweol levels, in addition to other flavor molecules, in 292 samples from producers around the world. They could successfully authenticate pure coffee, even with relatively low concentrations of the two indicator compounds. For samples in which the composition of blends was known, the team's predictions were within 15% of the actual ratio. The new method provides a more robust and reliable way to verify unadulterated coffee and predict blends than previously reported approaches, the researchers say.
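The press release does not describe the authors' statistical model, but the basic idea of inferring a blend ratio from a variety-specific marker can be illustrated with a simple two-endmember mixing calculation; all concentrations below are invented, and linear mixing with mass fraction is an assumption.

```python
# Illustrative two-endmember mixing estimate of a coffee blend ratio from a
# robusta-specific marker (16-OMC). This is not the authors' published model;
# all concentrations are hypothetical and mixing is assumed to be linear in
# mass fraction.

def robusta_fraction(c_blend: float, c_arabica: float, c_robusta: float) -> float:
    """Estimate the robusta mass fraction from a marker concentration that is
    low in pure arabica and high in pure robusta."""
    return (c_blend - c_arabica) / (c_robusta - c_arabica)

# Hypothetical 16-OMC concentrations (arbitrary units per gram of coffee)
pure_arabica = 0.05
pure_robusta = 1.80
measured_blend = 0.75

f = robusta_fraction(measured_blend, pure_arabica, pure_robusta)
print(f"Estimated blend: {100 * f:.0f}% robusta, {100 * (1 - f):.0f}% arabica")
# -> roughly 40% robusta / 60% arabica for these made-up numbers
```

In practice the authors combined several compounds rather than a single marker, which is what allows predictions within about 15% of the true ratio; the sketch is only meant to show why variety-specific markers make blend estimation possible at all.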

Credit: 
American Chemical Society